Method to Assess the Level of Implementation of Productivity Practices on Construction Projects

It is unfortunate that productivity in the construction industry has lagged behind the manufacturing industry for the last several decades. The research presented in this thesis aims to improve productivity in the infrastructure sector of the construction industry by developing a Best Productivity Practices Implementation Index (BPPII) for construction projects. The BPPII Infrastructure is a checklist of practices considered to have a positive influence on labour productivity at the project level for infrastructure projects. These practices were identified through a literature review and consultation with industry experts, and have been anecdotally shown to affect productivity positively. They have been grouped into a formalized set of BPPII categories, sections, and elements, and their planning and implementation levels have been defined. Each practice in the index has been assigned a relative weight based on its importance in affecting labour productivity. In total, there are 31 elements and 6 categories. The six categories of the BPPII Infrastructure are: (1) Materials Management; (2) Construction Machinery and Equipment Logistics; (3) Execution Approach; (4) Human Resources Management; (5) Construction Methods; and (6) Health and Safety. The planning and implementation level (PIL) method is used to find the maximum score of each category and each element, and the categories are then ranked on the basis of the scores obtained. The Execution Approach category scores highest among all categories, with a score of 146. The top 10 elements are also ranked according to their scores.

Introduction
This chapter introduces the basic definition of productivity and the Best Productivity Practices Implementation Index, which was developed by the Construction Industry Institute (CII). Construction labour productivity is one of the most prevalent problems affecting the construction industry.

General
Over the last few decades, the construction industry has made significant progress through advances in heavy equipment, tools, and materials. Infrastructure projects are vital for the economic, social, and environmental well-being of a country. It is widely accepted that the quality of our infrastructure directly impacts the quality of life and economic prosperity of our communities. The construction industry is a major contributor to the economy of a nation, and infrastructure projects account for a major portion of the construction business. The volume of infrastructure construction in the United States of America is much higher than in India. These amounts are in addition to what is required for the maintenance and rehabilitation of existing infrastructure, the majority of which has reached its service or design life. Altogether, the volume of infrastructure construction will grow, and with it the need to improve construction productivity in order to optimize current and future construction expenditures. Construction productivity directly affects the prices of construction projects and the robustness of the national economy. It also affects the outcomes of national efforts to renew existing infrastructure systems, to build new infrastructure, and to compete in the global market. In a study undertaken by the Construction Industry Institute (USA), improving productivity was found to be a management issue in which the use of new technologies and techniques may be helpful.
Analysis by the Construction Industry Institute (CII) Research Teams 240 and 252 of CII Benchmarking and Metrics data has clearly shown that productivity typically deviates 25% above or below the norm on any particular project within a group of projects with similar characteristics and environments. A CII research team developed a method, called the Best Productivity Practices Implementation Index for Industrial Projects (BPPII Industrial), that can be used by members of a construction project's management team to assist in the planning and implementation of management practices that have the potential to improve construction productivity on industrial projects. The goal of the BPPII Industrial is to provide a comprehensive checklist, in the form of audits, for assessing the implementation levels of essential practices needed to ensure high levels of productivity by craft workers. The method also provides a scoring system to quantitatively evaluate a project team's level of preparedness in addressing these issues. The practices included are those that are widely accepted throughout the construction industry.

The research presented here lays out a strategy to help improve the productivity of the infrastructure construction sector. It is based on the premise that what is measured will be improved: to effectively manage an industry, a project, or an activity, measurement tools are required. As a result, the proposed research was designed to improve the measurement of practices and of productivity performance at the project level of the infrastructure construction sector by developing a Best Productivity Practices Implementation Index. The BPPII for Industrial Projects is a tool designed to help project managers or superintendents plan productivity-enhancing jobsite activities. The philosophy of the BPPII is that one can only improve what is measured. The BPPII Industrial Projects measures the planning and implementation levels of practices that have the potential to improve construction productivity on industrial projects. These practices differ from the Construction Industry Institute (CII) Best Practices in that the focus of the BPPII is on practices that promote productivity improvement. The BPPII enables managers to identify practices with low implementation levels on their projects and helps them carry out practices that, as noted above, positively affect productivity. The BPPII should be used at the beginning of the execution phase, to help project managers identify the productivity-improving practices to be implemented at the construction site. However, it can also be used at the end of the detailed scope phase to help prepare the project execution plan. Development of the Best Productivity Practices Implementation Index (BPPII) as a measurement tool, as this research intends, has the potential to have a substantial impact on improving the productivity of the industry during an especially crucial time. Because of the total investments required in the infrastructure construction sector, improving construction labour productivity by only a few percent could save many tens or hundreds of millions of dollars in the years ahead. In order for the construction industry to contribute to economic growth alongside other industries, construction productivity must grow along with them. The productivity of a major sector like construction is therefore of great importance to a national economy.
Improving productivity in the infrastructure construction sector will help improve productivity in the overall construction industry, as it represents a major portion of that industry.

Definition of Productivity in the Construction Industry
Productivity can be defined in many ways in the construction industry. Productivity is a comparison between how much is put into a project in terms of manpower, materials, machinery, or tools and the result obtained from the project. Productivity has to do with the efficiency of production: making a site more productive means getting more output for less cost in less time. Productivity covers every activity that goes into completing the construction site works, from the planning stage to the final state. If the contractor can carry out these activities at lower cost, in less time, with fewer workers, or with less equipment, then productivity is improved. Productivity is generally defined as the ratio of outputs to inputs:

Productivity = Output / Input

It is important to specify the inputs and outputs to be measured when calculating productivity, because there are many inputs, such as labour, materials, equipment, tools, capital, and design. The conversion process from inputs to outputs associated with any operation is also complex, influenced by the technology used and by many externalities such as government regulations, weather, economic conditions, management, and various internal environmental components.

What is Labour Productivity?
Labour is one of the basic requirements of the construction industry. Labour productivity is the most widely used standard against which other measurements or comparisons of operational efficiency are judged. This does not imply that labour is the best input element for productivity measurement, but simply reflects the difficulty or impossibility of obtaining numerical values for the other determinants of productivity. One common measure of average labour productivity is the ratio of output per unit of labour. In other words, labour productivity is the amount of goods and services produced by a worker per unit of time.

Aim and Objectives
The main aim of this research is to develop a process for improving project labour productivity in the infrastructure construction sector. This aim is achieved by developing the Best Productivity Practices Implementation Index (BPPII) for infrastructure projects. The following objectives are required to achieve the main research aim: 1) to identify and define the categories and elements of the BPPII for infrastructure projects; 2) to find the maximum scores using the PIL formula; and 3) to rank the top ten elements in the BPPII Infrastructure. These research objectives are achieved by identifying, mapping, and measuring processes or practices that are essential for improving construction productivity at the project level in the infrastructure construction sector, and by collecting and analyzing data for validation.

Scope
The scope of the research was to identify potential best practices based on a literature review and expert opinions, such as those of members of the Construction Industry Institute (CII) Research Team 252, and to develop an implementation index based on these practices for increasing productivity at the project level in the infrastructure construction sector. Practices with the potential to increase construction productivity were identified and grouped together, and an implementation index was developed.
The scope of the research was limited to the infrastructure construction sector and to differences in labour productivity at the project level.

Organization of the Project Report
This project work is divided into five chapters. Chapter 2 deals with the review of literature, and Chapter 3 presents the approach used to improve construction productivity.

The main approach of the author's study was to improve construction productivity; the best productivity practices implementation index method was therefore used for industrial projects, and the findings show which planning and implementation level practices were adopted. Various elements that influence construction labour productivity were taken into consideration. The author presents the development, verification, and validation of the BPPII Industrial. The other primary contributions of the author are the identification of key productivity practices, the quantification of the relative importance of these practices, the development of a method to assess the level of implementation of these practices and their effect on project productivity performance, and the validation of the proposed method based on its implementation on several industrial projects. Through this review the author concluded that many different factors affect project performance. The goal of the author's study was to improve the effective planning and implementation of management practices that positively impact construction productivity. (Hassan Nasir 2013) identified several practices through a literature review and consultation with industry experts that have been proven to positively affect productivity. The BPPII Infrastructure is a checklist of practices considered to have a positive influence on labour productivity. In the BPPII, the categories, sections, and elements are grouped together. Each practice and its planning and implementation levels have been completely defined, and each practice in the index has been assigned a relative weight based on its importance in affecting labour productivity. The six categories of the BPPII Infrastructure are: (1) Materials Management; (2) Construction Machinery and Equipment Logistics; (3) Execution Approach; (4) Human Resources Management; (5) Construction Methods; and (6) Health and Safety. Data were collected from a questionnaire survey for infrastructure projects, and data analysis was done with the PIL formula for all the categories. From this review, a conclusion is drawn based on the statistical analysis of the productivity factor and BPPII scores: projects with a high score on the implementation of best practices as defined by the BPPII Infrastructure showed better productivity performance than those with a low score. (Hassan Nasir et al. 2015) analyzed how productivity and project performance can be improved through implementing best practices. The authors describe the development of the best productivity practices implementation index (BPPII) for infrastructure projects, in which the index is a checklist of practices considered to have a positive influence on labour productivity at the project level for infrastructure projects. These practices were grouped into a set of categories, sections, and elements. Each practice and its planning and implementation level were defined and assigned a relative weight on the basis of its importance in affecting labour productivity.
The results show statistical tests confirming that higher implementation of the best practices defined in the index has a strong positive relationship with the productivity factor and project schedule performance. (Rojas and Aramvareekul 2003) conducted a survey on productivity drivers (e.g., materials management) and opportunities. The survey analysis of drivers regrouped the results under four categories: (1) management systems and strategies, (2) personnel, (3) industry environment, and (4) external conditions. They also noticed that certain factors have different impacts on labour productivity. The main finding of this research was the controllable character of labour productivity, which represents a huge opportunity for productivity improvement; labour productivity was thus seen as a manageable issue. Moreover, it was observed that the introduction of innovations helps to improve labour productivity but cannot solve all the problems related to it. (Allmon et al. 2000) was one of several studies that attempted to assess construction labour productivity trends and to explain and understand their causes and implications. The study reviewed the principal factors affecting construction labour productivity. Six factors were identified by the authors: (1) project uniqueness, (2) technology, (3) management, (4) labour organization, (5) real wage trends, and (6) construction training. After identifying these factors, the study also categorized four ways to improve labour productivity through management practices: (1) planning, (2) resource supply and control, (3) supply of information and feedback, and (4) selection of the right people to control certain factors. (Shan et al. 2011) discussed the relationship between the implementation level of management practices and mechanical labour productivity. They utilized the Construction Industry Institute (CII) benchmarking and metrics data set from 41 projects and found that four management practices have positive correlations with mechanical productivity improvement: (1) pre-project planning, (2) team building, (3) automation and integration of information systems, and (4) safety. Numerous research programs have therefore looked into potential ways to improve management and make it more effective in supporting craft workers on a jobsite. The authors identified the top 15 factors with the most significant impact on construction productivity as follows: (1) lack of detailed planning, (2) worker experience and skills, (3) inadequate supervision, (4) worker motivation, (5) non-availability of materials, (6) worker attitude and morale, (7) crew team spirit, (8) non-availability of information, (9) changes in drawings and specifications, (10) non-availability of tools, (11) non-availability of equipment, (12) project size and complexity, (13) lack of procedures for construction methods, (14) changes in contract, and (15) congested work areas. Three of the top five are management factors. This confirms the notion that productivity can be managed and controlled through the effective implementation of certain management practices, a notion that is crucial to improving construction productivity through implementing management practices. At the same time, workers were asked to evaluate the overall labour productivity of their project.
In addition, this survey was complemented with a statistical analysis of the real impact on productivity of factors that were perceived as having low influence. The results of this survey show that the following six categories, listed in order of importance, have the greatest impact on labour productivity: (1) tools and consumables, (2) materials, (3) engineering drawing management, (4) construction equipment availability, (5) supervisor direction, and (6) safety. Finally, this study raised a very important point about worker motivation: workers appreciated that the research program took account of their points of view. This suggests the large influence that management practices have on labour productivity. However, the survey also revealed that the impacts of the categories varied depending on the trade, which should be taken into account in future studies using craft worker input. (Borcherding and Garner 1981) Over the last few decades, the factors affecting labour productivity have received increasing attention among construction researchers. The U.S. Department of Energy conducted a survey of 12 construction projects. Among these industrial projects, the U.S. DOE surveyed both craft workers and supervisors to determine and quantify the diverse factors that both negatively and positively impact construction productivity and worker motivation. From the craftsmen questionnaire surveys, the researchers identified nine major factors and ranked them according to their relative impact on labour productivity: (1) material availability, (2) tool availability, (3) rework, (4) overcrowded work areas, (5) inspection delays, (6) supervisor incompetence, (7) crew interfacing, (8) craft turnover and absenteeism, and (9) supervisor changes. Most of these factors are considered by many to be universal and can therefore be generalized to the whole construction industry. However, other factors, such as the increased lead time in engineering design for these projects, are very specialized and unique and cannot be applied to all kinds of construction projects. (Eddy M. Rojas 2003) Labour productivity declined significantly in the construction industry during the 1979-1998 period. The author critically examines the construction labour productivity macroeconomic data to determine their validity and reliability. The methods of data collection, distribution, manipulation, analysis, and interpretation are reviewed and problems are identified. The main conclusion of the study was that, given the raw data used to calculate construction productivity values at the macroeconomic level and the uncertainty generated in computing these values, it cannot be determined whether labour productivity has actually increased, decreased, or remained constant in the construction industry. (Paul M. Goodrum et al.) Although some new technologies promise to improve construction productivity, their ability to deliver is not always realized. A four-stage predictive model was developed and validated to estimate the potential for a technology to have a positive impact on construction productivity. The method adopted by the authors examines four stages: the costs, feasibility, usage history, and technical impact of a technology. The predictive model combines results from historical analyses to formalize how selected technologies that improved construction productivity can be used as a predictor of how future technologies might do the same.
Each stage of the predictive model was subdivided into a series of categories and questions, which were weighted by importance using an analytical weighting process. The predictive model was then validated using 74 previous and existing construction technologies. Statistical analysis of the results confirmed that the average performance scores produced by the model were significantly different across the categories of successful, inconclusive, and unsuccessful actual implementation experience of the technologies. (William F. Maloney 1983) Labour has a significant influence on construction productivity. The level of productivity is a result of the driving, induced, and restraining forces acting upon workers. These forces act positively and negatively with regard to productivity improvement. The author presented a framework for analyzing the influence of each of these forces on four major labour-related determinants of construction productivity. Approaches to productivity improvement are analyzed in terms of reducing the negative forces and strengthening the positive forces. (Upul Ranasinghe et al. 2012) Construction productivity improvement has become a key area of focus for the industry. Despite its high impact on the construction industry, productivity improvement is still an area in which much research is needed to find its true potential. The authors discuss a framework for the implementation of productivity improvement activities on a construction site, making the process more systematic and accountable. (Jimmie Hinze and Gary Wilson 2000) In the past decade the terms "zero accidents" and "zero injuries" have been used a great deal by construction firms espousing their commitment to safety.

Methodology
This chapter explains the procedure of this study. A list of elements that affect construction productivity was collected from the review of the literature and developed into a questionnaire. After the questionnaire was prepared, data were collected through a questionnaire survey and analyzed using the PIL method, and the elements were ranked according to their scores.

Process of Methodology
The methodology process consists of six steps, which are given in Figure 3.1.

Questionnaire Design
A questionnaire survey is defined as the collection of data by asking people questions. This questionnaire was designed to collect information related to the Best Productivity Practices Implementation Index for infrastructure projects. The tool was designed to be simple and easy to interpret. Content was drawn from literature reviews that emphasize building construction productivity. The survey was given to employees from different trades involved with construction projects. The design strategy of the questionnaire was that the questions had to be simple, clear, and understandable for the respondents. The questionnaire has the definite advantages of requiring little time to complete and of giving more accuracy in the final outcome. Elements influencing construction labour productivity were identified through the literature survey; a total of 31 elements were identified and classified under 6 groups. The Materials Management group consists of 6 elements that influence productivity. The Construction Machinery and Equipment Logistics group consists of 4 elements that influence productivity.
The Execution Approach group consists of 6 execution-related elements that influence productivity. The Human Resources Management group consists of 5 elements, such as age factors and working hours per day. The Construction Methods group consists of 4 elements that influence productivity. The Environment, Safety, and Health group consists of 6 elements, such as safety of labourers, work environment, and health-related items. The participants were required to rate the elements according to how they affect construction labour productivity, considering time, cost, and quality, using their own experience on building sites. The questionnaire asked the respondents to rate the elements affecting labour productivity on a scale of 1 to 5, i.e. "1" representing not important, "2" low importance, "3" moderately important, "4" important, and "5" very important, according to the degree of importance for construction labour productivity. The responses were to be based on the experience, understanding, and knowledge of the respondents and were not related to any specific project. The questionnaire method was simple and direct, and it was selected to establish a means of developing a list of elements impacting labour productivity on construction projects. Example items include: age factor; average working hours per day; site crowding forcing labourers to work in an uncomfortable manner; overtime work will not give good productivity on a job; on-time payment is made right when the work is accomplished. Construction Methods items include: the construction team (including the owner, engineering and procurement) was both integrated and aligned; drawings, site permits, and other required documents were available before starting construction; all necessary material, equipment, tools and work permits were available before starting construction; required construction and management personnel were available as needed before starting construction. Environment, Safety, and Health items include: a formal health and safety policy; climatic conditions will affect your working performance; site safety procedures were followed for the project; safety incentives were used on the project; accidents were formally investigated; contractor employees were randomly screened for alcohol and drugs.

Category I - Materials Management
The first category in the index is materials management, and practices related to materials management are grouped in this category. Materials management is concerned with the availability of the right materials at the right time and at the right place in the construction process. Materials management has been identified as one of the most important factors affecting construction projects in several research efforts, and it has been identified as a best practice by CII. Effective materials management on a construction project has the ability to improve productivity and reduce crew idle time. Research has found that inefficient materials management could be responsible for an increase in field labour hours of 50% or more.

Category II - Construction Machinery and Equipment Logistics
The CII conducted the Voice of the Worker Index research project and determined that the factor having the most significant impact on craft labour productivity was the availability of appropriate construction equipment and tools on jobsites.
Research reported that the most significant factor affecting craft labour productivity, in the workers' view, is that "I have to wait for people and/or equipment to move the material I need". Research supported the importance of using a site tool management plan to ensure that tools are present on site, stored in a location that is organized and easy to locate, and in proper condition to perform designated tasks. Research found that tools, equipment, and trucks are some of the main factors that can decrease productivity if they are not managed properly, and that tools, equipment, and trucks account for a major portion of total waiting time. Research also reported that construction machinery has high significance among the factors related to working conditions.

Category III - Execution Approach
This category deals with issues that need to be planned and settled so that the construction work can be properly executed and progress without stopping. The practices in this category are related to planning, constructability reviews, utility alignments, acquisition strategies, and regulatory requirements. Researchers reported that fixed-price contracts have better productivity and that lowest-bid contracts have a negative effect on productivity. They also mentioned that planning, scheduling, and availability of working drawings are critical for project productivity.

Category IV - Human Resources Management
As labour productivity involves humans, proper management of this resource is essential. Human Resources Management deals with the planning, training and development, organizational structure, and employment issues of people working in the company. Research identified lack of experienced design and project management personnel, restrictive union rules, and lack of management training for supervision and project management as the main factors affecting productivity.

Category V - Construction Methods
The construction methods used for the completion of a project play an important role in project success. This category groups together best practices related to site layout planning, scheduling controls, and design/construction planning and approach. Ineffective communication and inadequate planning and scheduling are major impediments to productivity improvement. Scheduling, overtime strategies, and task sequencing have a greater impact on labour productivity.

Category VI - Health and Safety
The Health and Safety category relates to the implementation of practices that ensure the safety and health of workers; the Performance Improvement Program also has metrics defined for health and safety related practices. CII developed practices, such as Zero Accident Techniques, that should be implemented to protect workers' safety on a jobsite. The construction industry is one of the most hazardous industries, as reflected in its numbers of fatalities and injuries. Research has found that accidents can be avoided by establishing procedures and regulations to enhance safety. Different job classifications have safety hazards related to them, and construction workers usually underestimate these hazards. The cost of accidents in construction amounts to approximately 6% of total building costs.

Data Collection
The questionnaire was designed to collect information related to the Best Productivity Practices Implementation Index for construction projects. The questionnaire survey was carried out at different construction sites, covering both government organizations and private firms.
The 31 factors classified under 6 categories were distributed to the respondents. In total, 31 out of 50 respondents gave their opinion.

Weightage of Element
A five-point scale ranging from 1 (not important) to 5 (extremely important) was adopted and transformed into relative importance indices. The value of ƩW is calculated by summing, over the five response levels, the responses multiplied by the weightage given by the respondents (ƩW = sum of the weights assigned by the respondents); the average is then calculated to obtain the PIL 5 score.

Linear Interpolation by PIL
After assigning the maximum scores to each category, the next step is to assign the relative scores to each PIL of all 31 elements in the BPPII Infrastructure. Elements that are not applicable to a construction project are given a score of 0, and level one of planning and implementation is given a score of 1 for all elements. The scores of PIL 2 to PIL 4 were determined by linear interpolation between PIL 1 and PIL 5 using the following formulas:
PIL 2 Score = (PIL 5 Score - 1)/4 + 1
PIL 3 Score = (PIL 5 Score - 1)/4 + PIL 2 Score
PIL 4 Score = (PIL 5 Score - 1)/4 + PIL 3 Score

Scoring of the BPPII Infrastructure
Each element of the BPPII is scored using a system that ranks the planning and implementation level (PIL). The PIL definitions are organized on a scale from 0 to 5. The guideline for each PIL is as follows:
1) Planning and Implementation Level 0: planning and implementation of the element is not applicable.
2) Planning and Implementation Level 1: planning and implementation of the element is not addressed in any capacity on the project.
3) Planning and Implementation Level 2: planning and implementation of the element is addressed to a certain extent, but in a below-average manner.
4) Planning and Implementation Level 3: the element has an average level of planning and implementation.
5) Planning and Implementation Level 4: planning and implementation of the element is thorough and above average, but not perfect.
6) Planning and Implementation Level 5: the element has the highest possible planning and implementation level, i.e. the most state-of-the-art and technologically advanced level. Few projects are expected to achieve this level.

Results and Discussion
This chapter presents the results of the responses received from the respondents. It also shows the relative importance index and the planning and implementation level of the studied factors for the above-mentioned group of respondents. This study used a questionnaire survey approach from which statistical data were collected to answer the questions posed by the main subject of study; the questionnaire was the main tool used. In Table 4.3, all elements that are not applicable (N/A) to a construction project, for whatever reason, are given a score of zero, and Level 1 of planning and implementation is given a score of 1 for all elements. The maximum average value calculated from ƩW gives the value of PIL 5; in this case, 26 is the maximum value, for defining accurate materials specifications. PIL 2, PIL 3, and PIL 4 are then calculated from the formulas given above. The maximum score obtained for the Materials Management category is 129. Table 4.5 likewise shows that all elements that are N/A to a construction project are given a score of zero, that Level 1 of planning and implementation is given a score of 1 for all elements, and that the maximum average value is calculated from ƩW to obtain the value of PIL 5. A worked sketch of this scoring procedure is given below.
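To make the scoring arithmetic concrete, the following is a minimal sketch of the ƩW averaging and PIL interpolation described above. The respondent tally is hypothetical, and the reading of ƩW as "rating value times the number of respondents choosing it, averaged over the five rating levels" is an assumption consistent with the reported maxima (26 and 20), not an explicit formula from the text; only the PIL interpolation formulas are taken directly from the Methodology.

```python
# Minimal sketch of the BPPII element scoring arithmetic; hypothetical data.

def pil_scores(pil5_score: float) -> dict:
    """Interpolate PIL 2-4 between PIL 1 (= 1) and PIL 5 using the
    Methodology formulas: each step is (PIL 5 score - 1) / 4."""
    step = (pil5_score - 1) / 4
    pil2 = 1 + step
    pil3 = pil2 + step
    pil4 = pil3 + step
    return {0: 0.0, 1: 1.0, 2: pil2, 3: pil3, 4: pil4, 5: pil5_score}

# Hypothetical tally: how many of the 31 respondents picked each rating 1-5
# for one element (e.g. "short interval planning").
counts = {1: 0, 2: 1, 3: 4, 4: 10, 5: 16}

# Assumed reading of the text: ƩW = sum of (rating value x number of
# respondents choosing it); the average over the five rating levels gives
# the element's PIL 5 score.
sum_w = sum(rating * n for rating, n in counts.items())
pil5 = sum_w / 5

scores = pil_scores(pil5)
print(f"ƩW = {sum_w}, PIL 5 score = {pil5:.1f}")
for level in range(6):
    print(f"  PIL {level}: {scores[level]:.2f}")

# A category's maximum score is the sum of the PIL 5 scores of its elements
# (the text reports 129 for Materials Management and 146 for Execution
# Approach); a project's BPPII score sums the scores of all elements at
# their assessed PIL.
```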
For the Execution Approach category (Table 4.5), 20 is the maximum value, corresponding to short interval planning, and the maximum category score is 146, the highest among all the categories. The element "overtime work will not give good productivity on a job" has a score of 16.8, and the lowest score, 15.6, is obtained by "on-time payment is made right when the work is accomplished". Table 4.7 likewise shows that all elements that are N/A to a construction project, for whatever reason, are given a score of zero, and that Level 1 of planning and implementation is given a score of 1 for all elements; the maximum average value calculated from ƩW gives the value of PIL 5, in this case 20, for "the construction team (including the owner, engineering and procurement) was both integrated and aligned". The maximum score obtained for the Construction Methods category is 93. Fig 4.6 shows that the element "drawings, site permits, and other required documents were available before starting construction" has the highest score, 25. "All necessary material, equipment, tools and work permits were available before starting construction" and "required construction and management personnel were available as needed before starting construction" each score 24, and the lowest-scoring element, at 20, is "the construction team (including the owner, engineering and procurement) was both integrated and aligned". Fig 4.7 shows that the element "climatic conditions will affect your working performance" has the highest score, 18. "Formal health and safety policy", "site safety procedures were followed for the project", "accidents were formally investigated" and "contractor employees were randomly screened for alcohol and drugs" each score 17, and the lowest-scoring element, at 16, is "safety incentives were used on the project". Using survey data from 31 completed forms, the average importance factor of each category and element was calculated. Once the averages had been calculated, the maximum scores of the 31 elements and 6 categories were determined, corresponding to a situation in which the planning and implementation level (PIL) of all elements is 5 (i.e., the maximum) on the 1 to 5 scale. The research team set the maximum attainable BPPII Industrial score for a project at 656, representing the best level of practice planning and implementation (i.e., all elements have a PIL of 5). The lowest score (i.e., all elements have a PIL of 1 with no non-applicable elements) was set to zero. Table 4.9 shows the ranking of the 6 categories in the BPPII Infrastructure based on the score assigned to each of them, and Table 4.10 lists the top 10 ranked elements in the BPPII Infrastructure. Use of software is ranked in second position, a shared rank. "Drawings, site permits, and other required documents were available before starting construction" and dedicated planner are ranked in third position, again sharing a rank. Construction machinery and equipment maintenance is ranked in fourth position. "All necessary material, equipment, tools and work permits were available before starting construction", model requirements/3D visualization, and well-defined scope of work share fifth position. Locating sources of materials for procurement is ranked in sixth position.
On-site tools maintenance is ranked in seventh position, followed by site tools and equipment management strategy in eighth position. Controlling over-ordering and purchasing and procurement procedures and plans for construction machinery are jointly ranked in ninth position, and average working hours per day is ranked in tenth position. Because of shared ranks, 17 practices fall within the top 10 ranks, and together they make up almost 50% of the total index score. This shows the importance of these particular practices for the improvement of productivity on construction projects.

Conclusions
This chapter presents the conclusions that can be drawn on the basis of the present work, summarizes the overall work carried out in this dissertation, and indicates the future scope, which provides direction for extending the presented work.

Conclusion
Many different factors affect project performance, and labour productivity is recognized as one of the most important factors in the construction industry. The objective behind this research was to address this critical aspect of project performance. The goal was to improve the effective planning and implementation of management practices that positively impact construction productivity. The main contribution of this work is the development of the BPPII method, which includes 31 elements, also called best productivity practices, grouped into six categories. The top 10 elements were ranked according to their scores; the very first element is contracts and agreements with agencies, a practice that should be given very high importance before the start of any construction project.

Future Scope
This study was conducted for government and private organizations in Amravati and nearby districts, where the questionnaire survey was carried out with a limited number of respondents. To understand the exact scenario better, the scope and number of respondents can be increased. Similar tools based on the PIL of best practices should also be developed for other sectors of the construction industry.

Annexure 1: Questionnaire
Please indicate, by ticking the appropriate column, the relative importance of each of the following elements. Tick one column for each row, according to your experience, to identify the elements affecting construction labour productivity.
Materials Management: Defining accurate materials specifications; 2 Locating sources of materials for procurement; 3 Preparing for material storage; 4 Daily recording of materials used in the project; 6 Controlling over-ordering and purchasing; 8 Co-workers mishandling materials due to lack of training; Site crowding forcing labourers to work in an uncomfortable manner; 34 Overtime work will not give good productivity on a job; 36 On-time payment is made right when the work is accomplished.
Construction Methods: The construction team (including the owner, engineering and procurement) was both integrated and aligned; 39 Drawings, site permits, and other required documents were available before starting construction; 40 All necessary material, equipment, tools and work permits were available before starting construction; 42 Required construction and management personnel were available as needed before starting construction.
Environment, Safety, and Health: 45 Formal health and safety policy; 46 Climatic conditions will affect your working performance; 47 Site safety procedures were followed for the project; 51 Safety incentives were used on the project; 53 Accidents were formally investigated; 54 Contractor employees were randomly screened for alcohol and drugs.
Date: Engineer Name Designation
Relationships between coagulation factors and thrombin generation in a general population with arterial and venous disease background Background The current study aims to identify the relationships between coagulation factors and plasma thrombin generation in a large population-based study by comparing individuals with a history of arterial or venous thrombosis to cardiovascular healthy individuals. Methods This study comprised 502 individuals with a history of arterial disease, 195 with history of venous thrombosis and 1402 cardiovascular healthy individuals (reference group) from the population-based Gutenberg Health Study (GHS). Calibrated Automated Thrombography was assessed and coagulation factors were measured by means of BCS XP Systems. To assess the biochemical determinants of TG variables, a multiple linear regression analysis, adjusted for age, sex and antithrombotic therapy, was conducted. Results The lag time, the time to form the first thrombin, was mainly positively associated with the natural coagulant and anti-coagulant factors in the reference group, i.e. higher factors result in a longer lag time. The same determinants were negative for individuals with a history of arterial or venous thrombosis, with a 10 times higher effect size. Endogenous thrombin potential, or area under the curve, was predominantly positively determined by factor II, VIII, X and IX in all groups. However, the effect sizes of the reported associations were 4 times higher for the arterial and venous disease groups in comparison to the reference group. Conclusion This large-scale analysis demonstrated a stronger effect of the coagulant and natural anti-coagulant factors on the thrombin potential in individuals with a history of arterial or venous thrombosis as compared to healthy individuals, which implicates sustained alterations in the plasma coagulome in subjects with a history of thrombotic vascular disease, despite intake of antithrombotic therapy. Supplementary Information The online version contains supplementary material available at 10.1186/s12959-022-00392-0. Introduction Thrombin generation (TG) is established as an important research tool for exploring the plasma "coagulome" in relation to clinical risks for bleeding or thromboembolism. For a bleeding tendency, like hemophilia subjects lacking factor VIII or IX, reduced peak height and endogenous thrombin potential (ETP) of the TG curve have been observed, supporting a state of hypocoagulability [1][2][3][4][5]. Correction of such factor deficiency normalized the TG profile [6]. In thrombosis research, the reported findings on TG are conflicting, e.g. whereas an increased thrombin potential is frequently reported in venous thrombosis, for subjects with arterial thrombosis data are quite inconsistent [7][8][9][10].While some studies show positive associations of increased peak height and/ or ETP to outcomes like ischemic stroke, other studies show reverse associations of increased lag time and/or lower peak height levels in patients that suffered a myocardial infarction or stroke [11,12]. The reasons for these discrepancies are not fully understood but might include variations in coagulation factor concentrations, release of tissue factor pathway inhibitor from the endothelium as well as effects of specific medication. Solid evidence based on a comprehensive set of different data is still missing [13]. 
Venous thromboembolism (VTE) and arterial thrombotic diseases share several risk factors, and several studies have shown that the risk of arterial thrombosis is increased in those who suffered a first VTE and vice versa [14]. Therefore, one would expect that the plasma coagulome, assessed by the TG assay, would also reflect certain similarities between subjects with VTE or arterial thrombosis [15]. However, given the observed discrepant associations with TG data, different profiles between venous and arterial thrombotic disease may also be present. In order to address these issues, we carried out the present study to identify the relationships between (natural anti-) coagulation factors and parameters of the TG in individuals with a history of either an arterial cardiovascular disease or venous thrombotic disease compared to a cardiovascular healthy group within the population-based Gutenberg Health Study.

Research design
The Gutenberg Health Study (GHS) is a prospective, observational, single-center cohort study designed for population-based health research in the Rhine-Main region in Germany. With a total of 15,010 individuals between 35 and 74 years enrolled at the baseline examination, the GHS aims to assess the consequences of diseases and environmental factors, in addition to inherited predisposition, on the development and progression of asymptomatic and symptomatic disease. During the baseline visit, every participant underwent a comprehensive, standardized 5-hour clinical examination program according to standard operating procedures, as reported elsewhere [16,17]. Participants underwent a detailed computer-assisted interview covering assessment of cardiovascular risk factors, lifestyle, socioeconomic status, and other areas. The prevalence of cardiovascular disease was determined by history taking. In addition to the extensive clinical assessment, a large biobank has been established for future biochemical and genetic analyses. As part of the follow-up, a standardized computer-assisted telephone interview and an inventory of primary and secondary endpoints were completed 2.5 years after the baseline visit. In addition, participants undergo a quinquennial, extensive clinical examination in the same research facility as the baseline visit. Primary endpoints of the study were myocardial infarction and cardiovascular death. Secondary endpoints were cerebrovascular accident, diabetes mellitus, heart failure, atrial fibrillation, or death caused by the previously named diseases. Details of the study protocol and the further purposes of the study are discussed elsewhere [18].

Study sample
From the initial GHS cohort including 5000 subjects, TG data were available for 4843 subjects. From these, 1402 individuals were included in the reference group, 502 in the arterial disease group, and 195 in the venous disease group. Selection for each group was as follows. The overall study sample consisted of the first 5000 subjects enrolled into the GHS between April 2007 and October 2008. After excluding subjects without biomaterial available or without complete TG assessment (one or several TG parameters missing), 4843 individuals were successfully included in the present analysis.
The reference group was defined as apparently cardiovascular healthy subjects without a history of cardiovascular diseases such as myocardial infarction.

Blood sampling and laboratory assessment
Venous blood sampling was performed according to standard operating procedures and the blood was collected in trisodium citrate (0.109 M, 1:9 vol:vol) monovette plastic tubes while the subject was in a fasting state (i.e. an overnight fast if the subject was examined before 12 p.m., and a 5-hour fast if examined after 12 p.m.). Platelet-poor plasma (PPP) was prepared by one-step centrifugation at 2000 x g at room temperature for 10 minutes. After preparation the PPP was aliquoted and immediately stored at − 80 °C in the Biobank of the GHS study center. The TG was assessed in the Laboratory for Clinical Thrombosis and Hemostasis, Maastricht University, the Netherlands, by the Calibrated Automated Thrombogram (CAT) assay (Thrombinoscope BV, Maastricht, The Netherlands), according to the recommendations [19,20]. The TG was triggered by PPP Reagent Low (Stago) in freshly thawed PPP. The CAT method employs a low-affinity fluorogenic substrate for thrombin (Z-Gly-Gly-Arg-AMC) to continuously monitor thrombin activity in clotting plasma. TG measurements were calibrated against the fluorescence curve obtained in a sample from the same plasma (80 μL), supplemented with a fixed amount of thrombin-alfa 2-macroglobulin complex (20 μL of Thrombin Calibrator; Thrombinoscope BV, Maastricht, The Netherlands) and 20 μL of the fluorogenic substrate and calcium chloride mixture. TG parameters were derived from the TG curve and include lag time (time to initial thrombin formation [min]), peak height (the maximum amount of thrombin formed [nM]) and endogenous thrombin potential (ETP, or area under the curve [nM.min]). Coagulation factors were measured by means of BCS XP Systems in the Biomolecular laboratory at the Department of Epidemiology, University Medical Center Mainz, Germany. The coagulation factors II, V, VII, VIII, IX, X, XI and XII were determined using the clotting-based coagulation methodology, protein C and antithrombin by chromogenic assay, and von Willebrand factor (vWF) and protein S by immunological-based assay. Reference values by the WHO standard provided by Siemens were used. Total TFPI activity was assessed in PPP by the Actichrome TFPI activity assay (American Diagnostica, Stamford, CT, USA) in the Laboratory for Clinical Thrombosis and Hemostasis, Maastricht University, the Netherlands.

Data management and statistical analysis
A central data management unit conducted quality control on all data in this study. Statistical analysis was performed with the software program R, version 3.3.1 (http://www.R-project.org). Data on coagulation factors and inhibitors are presented as mean (standard deviation) in case of normal distribution. Multiple linear regressions were used to assess the associations between biochemical variables and TG parameters in the reference group as well as in the arterial and venous disease groups. The analyses were adjusted for age, sex and additionally for hormones (oral contraceptives and hormone replacement therapy = G03) and anti-coagulant agents (B01AA, B01AB, B01AE, B01AF, B01AX), as these may affect the thrombin potential. Due to a skewed distribution, lag time, as a dependent variable, was log-transformed prior to the analysis.
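For readers less familiar with the CAT readout, the TG parameters entering these models (lag time, peak height and ETP) are derived from the thrombin-versus-time curve described above. The following is a minimal sketch of that derivation; the synthetic curve and the 5%-of-peak lag-time threshold are illustrative assumptions, not the Thrombinoscope software's actual algorithm.

```python
import numpy as np

# Hypothetical thrombin-versus-time curve (nM thrombin sampled every 0.5 min),
# standing in for the calibrated CAT output; not real study data.
time_min = np.arange(0, 60, 0.5)
thrombin_nM = 300 * np.exp(-0.5 * ((time_min - 12) / 4) ** 2)  # synthetic peak

# Peak height: maximum amount of thrombin formed [nM].
peak_height = thrombin_nM.max()
time_to_peak = time_min[thrombin_nM.argmax()]

# Lag time: time until thrombin first appears [min]; a simple illustrative
# threshold (5% of peak) is used here.
lag_time = time_min[np.argmax(thrombin_nM >= 0.05 * peak_height)]

# ETP: endogenous thrombin potential, the area under the curve [nM.min].
etp = np.trapz(thrombin_nM, time_min)

print(f"lag time  = {lag_time:.1f} min")
print(f"peak      = {peak_height:.0f} nM at {time_to_peak:.1f} min")
print(f"ETP (AUC) = {etp:.0f} nM.min")
```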
Estimated beta regression coefficients, presented with the corresponding 95% confidence interval (CI), were calculated per standard deviation to compare the effects of different coagulation-related factors on TG parameters. Due to its explorative nature, a p-value threshold was not defined. However, to account for multiple statistical tests and minimize the risk of type 1 error, a Bonferroni-corrected p-value (0.00036) was set for the results of the multiple linear regression analyses.

Baseline characteristics of the study sample
Baseline characteristics of the reference group and the arterial and venous disease groups are shown in Table 1. The majority of the individuals in the arterial subsample were males (63.3%), whereas there was a preponderance of females in the reference group (60.5%) and the venous disease group (63.6%). The mean age in the reference group was 49.3 years, and the mean age of the study population in the arterial and venous disease groups was 63.8 years and 61.3 years, respectively. In both the arterial and venous disease groups, hypertension (arterial disease group: 72.5%, venous disease group: 59.0%) was the most prevalent traditional CVRF, followed by family history of MI/stroke (arterial disease group: 43.6%, venous disease group: 43.6%). Of the cardiovascular diseases, CAD was the most prevalent, affecting 46.5% of the study subjects in the arterial disease group. In the venous disease group, 99.0% of the individuals had a history of DVT and 5.7% had a history of PE. Of the arterial vascular diseases, PAD was predominant, affecting 26.6% of the study subjects in the venous disease group. Anti-coagulant therapy was most common in the arterial disease group (61.0%), followed by the venous disease group (35.4%) and the reference group (1.8%). While individuals from the reference group were most often taking oral contraceptive therapy (12.5%), individuals in the arterial disease group were most often using hormone replacement therapy (11.3%).

Levels of coagulation factors and inhibitors
Levels of coagulation factors and inhibitors in the reference group and the arterial and venous disease groups are shown in Table 2. Most notably, the lag time was significantly prolonged in individuals with a history of arterial vascular disease or venous thrombosis in comparison to the cardiovascular healthy individuals. In addition, the ETP of the arterial disease group was lower compared to the reference group. The activity levels of factors II and X and of antithrombin were lower in the arterial and venous disease groups compared to the reference group. In contrast, activity levels of factors VIII and XI, vWF activity and fibrinogen concentration were higher in both the arterial and venous disease groups compared to the reference group. Compared to the control subjects, the individuals from the arterial disease group additionally showed higher activity levels of factor IX and protein S and slightly lower activity of factor XII.

Determinants of thrombin generation
The multivariate analysis of the relationships between coagulation factors and the TG assay in the reference group and the arterial and venous disease groups is presented in Fig. 1A-C. Presented in supplemental material Tables 1A-C are the betas per standard deviation (SD), meaning that a one SD change in the predictor (coagulation factor) leads to a beta change in the dependent variable (TG parameter).
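As a minimal sketch of the regression setup described in the statistical analysis section, the snippet below fits one such model on a hypothetical data frame. The column names, the synthetic data and the use of ordinary least squares via statsmodels are illustrative assumptions; the study's actual analysis was carried out in R.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Hypothetical analysis table; column names and values are illustrative only.
df = pd.DataFrame({
    "etp": rng.normal(1500, 300, n),          # nM.min
    "lag_time": rng.lognormal(1.0, 0.3, n),   # min, skewed -> log-transformed
    "factor_viii": rng.normal(100, 25, n),    # % of normal
    "age": rng.normal(55, 10, n),
    "sex": rng.integers(0, 2, n),
    "anticoagulant": rng.integers(0, 2, n),
    "hormone_therapy": rng.integers(0, 2, n),
})

# Standardize the predictor so its coefficient is a "beta per SD": a one SD
# change in the coagulation factor corresponds to a beta change in the TG
# parameter, as reported in the supplemental tables.
df["factor_viii_sd"] = (df["factor_viii"] - df["factor_viii"].mean()) / df["factor_viii"].std()

# ETP model, adjusted for age, sex, hormones and anti-coagulant agents.
etp_model = smf.ols(
    "etp ~ factor_viii_sd + age + sex + hormone_therapy + anticoagulant", data=df
).fit()

# Lag time is skewed, so it enters the model log-transformed.
df["log_lag_time"] = np.log(df["lag_time"])
lag_model = smf.ols(
    "log_lag_time ~ factor_viii_sd + age + sex + hormone_therapy + anticoagulant", data=df
).fit()

print(etp_model.params["factor_viii_sd"], etp_model.conf_int().loc["factor_viii_sd"].tolist())
print(lag_model.params["factor_viii_sd"], lag_model.conf_int().loc["factor_viii_sd"].tolist())

# With many factor/parameter combinations tested, the paper applies a
# Bonferroni-corrected significance threshold (p < 0.00036).
```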
The lag time in the arterial and venous disease groups was strongly and negatively associated with coagulation factors II, V and VII and with the natural anti-coagulants protein S, antithrombin and TFPI activity. Fibrinogen was negatively associated with the lag time in the arterial disease group and positively associated in the venous disease group. In contrast, factor XII was positively associated with the lag time in the arterial disease group and negatively associated in the venous disease group. In general, the effect size of the reported biochemical determinants was 10 times higher for the arterial and venous disease groups compared to the reference group (e.g. factor II, beta estimate median: arterial − 0.18 vs venous − 0.23 vs reference 0.017). In addition, the direction of the associations for the reference group was positive for all reported variables, except for factor VII, which was negatively associated. The ETP was strongly positively determined by factors II, VIII, X and IX in the reference group as well as in the arterial and venous disease groups (Fig. 1A-C). In addition, antithrombin was a negative determinant of the ETP in the reference group, though no association was observed for antithrombin in the arterial and venous disease groups. vWF was a negative determinant of the ETP in both the arterial and venous disease groups, whereas vWF was positively associated with the ETP in the reference group. Moreover, the effect size of the reported relationships was nearly 4 times higher for the arterial and venous disease groups in comparison to the reference group (e.g. factor VIII, beta estimate median: arterial 184 vs venous 217 vs reference 55.6). There was a positive association between factors VIII, IX, II and X and the peak height in the reference group and the arterial and venous disease groups (Fig. 1A-C). Protein S was a negative determinant of the peak height in the reference group, whereas it was positively associated with the peak height in the arterial and venous disease groups. In addition, factor IX was a negative determinant of the peak height in the reference group, though no association was found in the arterial and venous disease groups. In general, the effect sizes for the reported biochemical determinants of the peak height were similar in all groups, with the exception of factor VII, which had a lesser effect in the arterial disease group compared to the reference group and the venous disease group (beta estimate: arterial 10.4 vs venous 24.4 vs reference 17.6).

Discussion
This is the first large-scale population-based study to explore the relationships between coagulation factors and the TG parameters in individuals with a history of arterial vascular disease or venous thrombotic disease, as compared to cardiovascular healthy individuals. The main findings from our study show important distinct differences in the biochemical determinants between cardiovascular healthy individuals and those with a background of arterial or venous disease. Whereas, in the disease groups, lag time was mainly negatively associated with the procoagulant and anti-coagulant factors in the plasma, meaning higher factor levels result in a shorter lag time, the same associations were positive for the healthy individuals. Furthermore, the effect size for the biochemical parameters determining the lag time was about 10 times higher for the arterial and venous disease groups than for the reference group. Dielis et al.
previously investigated the coagulation factors as determinants of the TG parameters at 1 pM TF (comparable to the applied PPP Reagent Low) and 13.6 pM TF, in the absence or presence of thrombomodulin or in the absence or presence of activated protein C, in a sample of healthy adults [21]. TFPI activity, protein S and fibrinogen were the strongest positive determinants of the lag time. Similarly, the results of the present study showed that TFPI activity, protein S and fibrinogen are strong positive determinants of the lag time in the control individuals. Fibrinogen was also positively associated with the lag time in the venous disease group. A possible explanation for the paradoxical association between fibrinogen and lag time may be the anti-coagulant properties of fibrinogen, by binding thrombin directly as well as by accelerating the activation of plasminogen into plasmin by tissue plasminogen activator [22]. Interestingly, for the arterial disease individuals, a higher fibrinogen concentration was associated with a shorter lag time. These contrasting results raise the possibility of differential effects of fibrinogen on the initiation phase of the TG process in diseases affecting different vascular beds.

Factor VII was the only coagulation factor that shared the same direction of association with the lag time for control subjects and disease individuals. Factor VII is well known to play an important role in the initiation phase of the coagulation cascade by formation of the factor VIIa/TF complex, which promotes the generation of the prothrombinase complex (factor Xa/factor Va) and ultimately leads to TG amplification [23]. A higher factor VII activity level and a shorter lag time, shared by both control and disease individuals, confirm the role of factor VII in the ambient coagulation cascade reaction.

Furthermore, Dielis and colleagues reported fibrinogen and factor XII as positive determinants of the ETP, which we confirmed in the present study [21]. As expected and as previously reported, antithrombin, a potent anti-coagulant, was negatively associated with the ETP. In general, the present analysis demonstrated that the directions of the associations between coagulation factors and the ETP were similar in the reference group and the arterial and venous disease groups. The analysis of the levels of natural coagulation and anti-coagulant factors showed that factors II, VIII, X and XI were significantly increased in the subjects with an arterial or venous disease background in comparison to the healthy individuals, which is in accordance with previous reports [9,[24][25][26][27][28][29]. This finding illustrates a "hypercoagulable" state in these subjects and may explain the fourfold increased effect size of the associations between the reported coagulation factors and the ETP in the arterial and venous disease groups compared to the reference group. This is further supported by evidence from previous TG assay studies demonstrating its potential to expose hypercoagulability in plasma from patients with arterial and venous thrombosis [30].

In contrast to the reference group, protein S, a natural anti-coagulant, was a positive determinant of the peak height in individuals with a history of arterial or venous thrombotic disease. Our analysis confirms increased levels of coagulation factors in patients with an arterial or venous thrombotic disease background, which could potentially result in excessive activation of the activated protein C pathway, to which protein S is a supporting cofactor.
Therefore, as demonstrated by the analysis of the arterial disease group, levels of protein S may be elevated. However, the net effect of these pathological mechanisms remains an increased thrombin generation, which translates to the increased peak height. The effect sizes of the associations with the peak height were similar for healthy individuals and individuals with an arterial or venous thrombotic disease background. Comparing the determinants of thrombin generation in the arterial disease group with those in patients suffering from acute myocardial infarction [10] reveals an opposite, negative association between fibrinogen levels and the ETP and peak height in the acute phase. To what extent the associations in the acute phase are comparable to those after the event remains to be elucidated.

This study has limitations. The TG was measured in PPP after one-step centrifugation of whole blood (10 minutes at 2000 × g), in contrast to standard recommendations (two-step centrifugation; 2000 × g for 5 minutes, then 10,000 × g for 10 minutes), which may affect the TG results. The history of arterial or venous disease was self-reported by the participants. There was no data available for analysis on the time from the initial diagnosis of the arterial and/or venous event to study enrolment. Therefore, we were not able to investigate whether different durations of disease have a different impact on the coagulation and TG profile. However, this study has important strengths, including the standardized clinical investigation of the participants' present cardiovascular profile and the comprehensive laboratory investigation of coagulation and anti-coagulant factors.

In conclusion, this large-scale analysis shows that the individual coagulation factors more strongly affect TG parameters in individuals with a history of arterial or venous thrombosis as compared to cardiovascular healthy individuals. This illustrates the different effect size contributions of the coagulation factors to the hypercoagulable state of individuals at risk for a cardiovascular event and suggests that the coagulome might be tuned to a "hypersensitive" state, increasing the risk for recurrence. Overall, the important finding of altered determinants of thrombin generation shows that, in patients with a history of cardiovascular disease, levels of coagulation factors should be taken into account. It also provides further rationale for the observed benefits of anti-coagulant therapy in patients with cardiovascular disease at risk of atherothrombosis.
Protocol for a scoping review of digital health for older adults with cancer and their families

Introduction
The potential for digital medicine and healthcare in geriatric oncology settings has received much attention. This scoping review will summarise the nature and extent of the existing literature that describes and examines digital health development, implementation, evaluation, outcome and experience for older adults with cancer, their families and their healthcare providers.

Methods and analysis
Arksey and O'Malley's six-stage scoping review methodology framework will be used. Searches will be conducted in the Cochrane Central Register of Controlled Trials (CENTRAL), PubMed, Embase via OvidSP, Cumulative Index to Nursing and Allied Health Literature (CINAHL) Plus via EBSCO, Scopus and PsycINFO via OvidSP for articles published in peer-reviewed scientific journals from the year 2000 onwards. In addition, we will screen databases for all prospectively registered trials. Research articles using quantitative or qualitative study designs or reviews will be included if they describe or report the design, development or usability of digital health interventions in the treatment and care of patients 65 years of age or older with cancer and their families before, during and after cancer treatment. Grey literature will not be searched or included. Two investigators will independently perform the literature search, eligibility assessments and study selection. A Preferred Reporting Items for Systematic Reviews and Meta-Analyses flow diagram for scoping reviews (PRISMA-ScR) will be used to delineate the search decision process. For included articles, the extracted results will be synthesised both quantitatively and qualitatively and reported under the key conceptual categories of this scoping review. Research gaps and opportunities will be identified and summarised.

Ethics and dissemination
Since this review will only include published data, ethics approval will not be sought. The results of the review will be published in peer-reviewed scientific journals. We will also engage with relevant stakeholders within the research team's networks to determine suitable approaches for dissemination.

INTRODUCTION
More than 60% of all cancers arise in the older adult population. 1 It is estimated that by 2030, people 65 years of age or older will account for 70% of cancer diagnoses. 2 Coupled with an increased incidence of cancer among older people is the unprecedented demographic growth of the elderly population. The United Nations highlighted that populations are becoming older in all regions of the world, with an estimate that one in six people globally will be 65 years or older by 2050. 3 Such epidemiological evidence highlights the importance of advancing the knowledge base in geriatric oncology settings and improving clinical management. Older people make up a unique group of patients in a clinical cancer setting and are at increased risk for toxic side effects during cancer treatment. 4 5 Treatment of older patients is complicated by the fact that many are entering the cancer trajectory with pre-existing comorbidities and susceptibility to the progressive accumulation of multiple chronic diseases and a decline in functional capabilities. [6][7][8][9] The assessment, treatment and supportive care needs of older patients with cancer are substantial and complex.
Strengths and limitations of this study
► This review will use an established, rigorous and systematic framework for conducting scoping reviews to explore in detail the current state of knowledge and evidence about digital health for older adults with cancer and their families.
► Gaining insights into the characteristics and contexts leading to positive outcomes, patient/family and healthcare provider experience, and integration challenges and barriers will provide a big picture of digital health progress in geriatric oncology settings and would be helpful in identifying what recommendations can be made to address gaps in the research and knowledge base.
► This scoping review will not examine health economic and clinical use evidence for digital health.
► We will not assess the risk of bias or the quality of evidence of included studies, as the goals are to identify and summarise research gaps and opportunities.
► Given the anticipated volume of peer-reviewed scientific literature and databases for prospectively registered trials, grey literature will not be searched or included.

Geriatric oncology has progressed immensely in terms of understanding the predictive ability of baseline comprehensive geriatric assessment for chemotherapy toxicity and treatment completion. 8 10 However, concerns remain about how to improve the experience of older cancer patients 11 12 and their tolerance of and adherence to cancer treatment regimens to optimise the full benefits of chemotherapy. 12 Chouliara et al indicated that symptom management, chemotherapy, choices of medical provider, complementary treatments and family involvement in the patient's care are important decision-making topics for older people with cancer and their relatives. 13 There is a pressing need to open new avenues for effective treatment, care and support of older patients with cancer to optimise clinical and patient outcomes.

The growth and advancement of digital technology through the integration of sensor arrays, interactive multi-touch screens, voice and video media, interactive longitudinal data, the advanced functions of apps, high computing power and fast network speeds have revolutionised the medical and healthcare sciences. Shen and Naeim highlighted that digital technology has transformed the ways that healthcare can be delivered all across the world. 14 Literature resources are rapidly expanding to include digital health applications and evaluations, technology integration into healthcare systems, technology literacy and exposure, and technology acceptance, barriers and engagement across a wide spectrum of healthcare domains. [15][16][17] These insights are helpful for the geriatric oncology field.

Technology use by older adults is increasing, and the potential for digital medicine and healthcare in oncology settings has received much attention. In 2016, the adoption rates of smartphones and tablets by older adults 60 to 69 years of age were 46% and 41%, respectively. 18 Anderson and Perrin indicated that 67% of individuals older than 65 years of age have Internet access and go online. 19 Recent reviews identified the potential of, and need for, both remote digital self-reporting solutions for cancer and treatment-related symptoms using web-based or smartphone-based portals, and Internet of Things-based solutions using various ambient-sensing technologies to enable objective and real-time monitoring of treatment toxicity, symptoms and functional status in geriatric oncology settings. 14 20
Denis et al demonstrated that remote symptom monitoring using a web-based self-report of symptoms improved overall survival rates among lung cancer patients 35.7 to 88.1 years of age (median age, 64.5 years). 21 The functional performance status of older cancer patients has also been monitored with wearable electronic activity monitoring technologies. 22 Villani et al evaluated an eHealth stress inoculation training intervention targeting emotional regulation and cancer-related well-being in 29 women over 55 years of age with breast cancer (mean age, 62.76 years), and revealed a good level of acceptance of the eHealth intervention, an increase in relaxation and a reduction in anxiety among women in the intervention group. 23 Hoogland et al surveyed eHealth literacy in older adults with cancer and found that older adults had significantly lower eHealth literacy than younger patients, which suggests that older cancer patients' needs and abilities should be considered when designing and implementing health information technology. 17

There is a need for comprehensive evidence to inform the research and development of digital health solutions, and to understand the imperatives of designing, implementing and evaluating digital healthcare initiatives in the context of geriatric oncology populations. Peters et al noted that a scoping review is particularly useful when a body of literature has not been comprehensively reviewed. 24 Scoping reviews are one of the most common forms of review in the healthcare sciences, and are characterised by a broader approach with the aim of mapping literature and addressing a broader research question. 24 An appealing feature of scoping review methodology, in contrast to other review approaches, is that it does not limit the parameters of the review to randomised controlled trials or require methodological homogeneity of the studies included in a review. 25 Such an approach is consistent with literature arguing that the hierarchies of evidence used in existing health research should be replaced by embracing diverse research methods. 26 Indeed, the inclusion of the published literature in a particular field as a whole, regardless of its methodological approaches, settings and contexts, is necessary to provide a big picture of existing knowledge, thereby improving research planning, strategic research prioritisation and evidence-informed policies. 27 The literature also indicates that scoping reviews are rigorous in their approach to providing rich and contextual evidence of research activity in a particular area. 28 Despite these strengths, the lack of methodological quality assessment and of grading of the quality of evidence, even for empirical evidence, limits the ability of results from a scoping review to inform clinical decision-making. A recent critical review of scoping review methods indicated that the results of scoping reviews are not used to create recommendations for policy or practice. 28 Other important challenges in scoping reviews are the level of complexity across multiple stages 28 and practical issues related to the large corpus and the wide range of research literature. 25 Therefore, it is crucial to follow sound methodological guidance or a framework, 28 and to strike a balance between breadth, comprehensiveness and feasibility. 29 Consistent with the rationale for conducting scoping reviews, the scoping review methodology of knowledge synthesis is well suited to the purpose of the present review.
Here we describe a scoping review protocol to systematically review published literature on digital health for older adults with cancer, their families and their healthcare providers. The purpose of this scoping review protocol is to systematically map and explore the literature and evidence describing and examining digital health development, implementation, evaluation, outcome and experience for older adults with cancer, their families and their healthcare providers. We focus primarily on the evaluation of baseline performance status and/or geriatric assessment, clinical assessment, treatment outcomes and patient management, including self-management and self-monitoring. The following points in the cancer pathway are considered: before, during and after cancer treatment, which includes surgery, adjuvant chemotherapy and/or radiotherapy, targeted therapy, follow-up care during remission and survivorship care. Our goals are to direct future research efforts by identifying gaps and limitations in the literature and to highlight relevant determinants of positive outcomes in the emerging field of geriatric oncology.

METHODS
Our protocol was developed using the scoping review methodological framework proposed by Arksey and O'Malley 30 and further refined by the Joanna Briggs Institute. 24 The approach describes six methodological stages: (1) identification of the research question, (2) identification of relevant studies, (3) selection of studies, (4) extracting and charting the results, (5) collating, summarising and reporting the results and (6) consultation with stakeholders.

Identification of relevant studies (stage 2)
Research in this field only became mainstream and gained momentum after 2000. 31 In addition, the publishing date range (within the last 20 years) will accommodate the wide adoption of digital healthcare following publication of the WHO vision for digital health 32 and after the introduction of smartphones. Two investigators will independently perform the literature search, eligibility assessments and study selection. We performed a pilot search on PubMed to identify relevant keywords contained in the title, abstract and subject descriptors. We will use search terms related to digital health and older adults with cancer, in various combinations in each electronic database, using controlled vocabulary with the Boolean operators AND and OR. We will use appropriate subject headings (eg, Medical Subject Headings) whenever possible. The term 'healthcare providers' encompasses oncologists, nurses, nurse practitioners, clinicians, physicians, general practitioners and pharmacists in this scoping review. A copy of the search strategies and the preliminary search results in each electronic database is included as an online supplementary file; the basic search terms to be used are listed there. We will also screen the reference lists of all identified studies and reviews for any relevant publications. Finally, we will screen databases for ongoing clinical trials and all prospectively registered trials, including the WHO International Clinical Trials Registry Platform and ClinicalTrials.gov.

Selection of studies (stage 3)
Studies will be selected following two stages of screening. The first stage will be an initial screening of titles and abstracts by two reviewers independently to assess relevance. The initial review will be done independently, and the reviewers will discuss the results once screening is complete.
Resolving disagreements will be attempted first by the two reviewers, but if necessary, a third member of the research team will be consulted to reach consensus. Once the initial decision is reached on which articles to include, we will begin the second stage of conducting a full-text review. Two reviewers will independently assess the articles to determine whether they meet the inclusion criteria. Disagreements regarding inclusion will be discussed and resolved by consensus, with a third member of the research team adjudicating articles without a consensus agreement or with questions about their relevance or eligibility. The first stage of screening is underway.

Table 1 delineates the inclusion and exclusion criteria, following the Population, Concept and Context categories for scoping reviews. 24 We will iteratively refine the inclusion and exclusion criteria to align potentially eligible studies to the purpose of this scoping review. 24 Inclusion criteria are studies that focus on: (1) older individuals 65 years of age or older (or where the mean or median age of the study sample was 65 years of age or older) who had (2) a diagnosis of cancer, and (3) how geriatric oncology treatment, services and care are provided in clinical and community settings. The inclusion of both clinical and community settings is meant to provide a comprehensive picture of the digital health landscape in geriatric oncology settings, particularly gaining insights into the interface between clinical treatment/care in hospital and remote monitoring of treatment toxicity, symptoms and functional status in the community. Research articles using quantitative or qualitative study designs or reviews will be included to support the greater breadth of this scoping review. 24 Peer-reviewed scientific articles describing study protocols will also be included. A Preferred Reporting Items for Systematic Reviews and Meta-Analyses flow diagram for scoping reviews (PRISMA-ScR) will be used to delineate the search decision process for the scoping review. 33 This will include the results from the search, removal of duplicate citations, study selection, full retrieval and additions from reference list searching, and final selection for inclusion in the scoping review.

Extracting and charting the results (stage 4)
In this stage, articles that meet the criteria for inclusion will be summarised. The literature highlights that the essence of data extraction is to record the characteristics of the included studies and key information relevant to the review questions. 24 Congruent with the purpose and questions of this scoping review, and taking reference from an existing digital health intervention design and evaluation framework, 34 we identified a priori categories and related variables, namely 'general information categories', 'key conceptual categories' and 'additional categories' with related variables described below, as the data extraction framework to guide the extraction and charting of data from the included studies. A data chart will be developed by our research team based on the data extraction framework. The general information and the key quantitative and/or qualitative findings of the included studies will be extracted into the data chart. The data chart will be piloted by two reviewers on two or three studies, and differences in charting will be resolved by a third member of the research team.
After the pilot trial, the results will be discussed with the research team to determine whether the data chart satisfactorily captures the information to align and be consistent with the purpose and questions of this scoping review. 35 Refinements will be made after the pilot trial if deemed necessary by the research team. In addition, the data extraction framework will be updated or refined according to categories and related variables emerging as the conduct of the review progresses. 24 Any iterative changes or refinements needed during the actual conduct of the full review will be clearly detailed and explained in the future scoping review report paper(s).

General information categories
General information that will be included in this chart includes: description of study characteristics (eg, year of publication, country of origin, the geographical location in which the research was conducted, aims/purpose, methodology and sample size); descriptions of study populations (eg, age, gender, residence, ethnicity, cultural background, socioeconomic status, functionality, cognition, disability, comorbidity and family carers); type and stage of cancer, and years of survival after cancer diagnosis; milestones along the cancer continuum (eg, before, during and after cancer treatment, including follow-up care during remission and survivorship care); cancer treatment (eg, surgery, chemotherapy, radiation and/or targeted therapy); context of cancer treatment and care (eg, clinical and community); and survival from diagnosis. For studies in which digital health interventions are developed, the theoretical basis underpinning the intervention, and the extent to which stakeholders including patients, family and healthcare providers were involved in the development of the intervention, will be extracted when possible. For studies that evaluate digital health interventions, the digital health intervention type according to the WHO classification 36 and comparator, mode of delivery, study design, outcome measures and context, and key findings that relate to the scoping review questions will be extracted.

Key conceptual categories
An initial set of data extraction variables that correspond with the key conceptual categories of this scoping review will be extracted, including the development (eg, conceptual and technological foundation; intervention component and content), usability (eg, usage rate), effectiveness (eg, process, impact and outcome), context (eg, personal and environmental context) and experiences of digital health interventions (eg, positive, neutral and negative experience); and the challenges and barriers (eg, individual characteristics, social support, workforce, planning, funding for equipment, cost of technology, guidelines, methodology, and the ways and timing in which digital health could be integrated/adapted) of integrating digital health into clinical practice. Additionally, methodological and knowledge gaps in the research and future research directions offered by the author(s) will be extracted from the included studies. This information will also be noted for studies whose study population was not older adults with cancer, but rather families or healthcare providers.
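To make the charting structure concrete, the sketch below models one row of the data chart as a small Python data structure. The field names are illustrative paraphrases of the categories described above, not the team's actual charting template.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ChartedStudy:
    # General information categories
    year: int
    country: str
    design: str
    sample_size: int
    population: str                          # eg, "adults >= 65 with lung cancer"
    cancer_stage: Optional[str] = None
    treatment_phase: Optional[str] = None    # before / during / after treatment
    # Key conceptual categories
    intervention_type: Optional[str] = None  # eg, a WHO digital health classification label
    development_notes: Optional[str] = None
    usability_findings: Optional[str] = None
    effectiveness_findings: Optional[str] = None
    barriers: List[str] = field(default_factory=list)
    # Additional categories
    ehealth_literacy: Optional[str] = None
    privacy_and_security: Optional[str] = None

# Hypothetical usage:
# row = ChartedStudy(year=2020, country="Australia", design="RCT",
#                    sample_size=120, population="adults >= 65 with cancer")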
Additional categories
We drew on a digital health intervention design and evaluation framework 34 to identify the following set of variables, which will be extracted from the included studies to strengthen the technological foundation of the evidence: eHealth literacy, ease of use, perceived benefit, content quality, personalisation, adherence, safety, privacy and security.

Collating, summarising and reporting the results (stage 5)
Describing and reporting the study characteristics
The extracted data will be summarised to provide a description of the collected data. Descriptive statistics, including frequencies and measures of central tendency, will be used to report the number of studies under each general information, key conceptual and additional category, such as the type of study design, the type of digital health intervention, the outcomes or experiences of digital healthcare at each stage of the cancer continuum, the type of contributing characteristics and the type of challenges and barriers. Descriptive statistics and numerical summaries will be reported in tables and/or in narrative form.

Synthesising and reporting qualitative and quantitative evidence
The extracted results will be synthesised and reported under the key conceptual categories aligned to the purpose and questions of this scoping review, including: the development, usability, outcomes, effectiveness, context and experiences of digital health interventions; and the challenges and barriers of integrating digital health into clinical practice. In accordance with recommendations from the literature, 37 a parallel-results convergent synthesis design will be used to synthesise quantitative and qualitative data, where both types of evidence will be analysed and presented separately, with integration occurring during the interpretation of results. 37 A particular strength of the parallel-results convergent design is that it provides a synthesis strategy for addressing multiple complementary review questions pertaining to our broadly covered topic of digital health for older adults with cancer and their families, thereby providing a big picture of existing knowledge, evidence and research gaps. For quantitative data, we will use frequency distribution analysis to describe the data and map out the evidence by providing a summary of the counts of the included studies. 29 We will also use thematic synthesis to contextualise the findings of the quantitative studies where appropriate. For qualitative data, a narrative synthesis of the findings will be conducted for review questions pertaining to development, usability, outcomes and effectiveness. 37 Major review findings will be summarised and explained. In addition, thematic synthesis will be used to synthesise data from qualitative studies that address the review questions related to context and people's views and experiences. Major themes will be identified and developed across the included studies. 37 Research gaps and opportunities will also be identified and summarised. The review results may be presented as a 'numerical summary', 'narrative summary', 'table', 'conceptual map' and/or 'schematic representation' of the data. We may decide on additional presentation formats after data extraction from the included studies, so as to make sure the results are clear and visually compelling to the readers. 28
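For the frequency distribution step described above, a minimal sketch of how charted rows could be counted per category is shown below. The column names follow the hypothetical data chart sketched earlier and are not part of the protocol.

import pandas as pd

# rows would come from the charted studies (see the earlier ChartedStudy sketch)
chart = pd.DataFrame([
    {"design": "RCT", "intervention_type": "remote symptom monitoring", "treatment_phase": "during"},
    {"design": "qualitative", "intervention_type": "eHealth training", "treatment_phase": "after"},
])

# frequency distribution of study designs, intervention types and cancer-pathway phases
for col in ["design", "intervention_type", "treatment_phase"]:
    print(chart[col].value_counts(dropna=False))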
Consultation with stakeholders (stage 6)
We will engage with relevant stakeholders within the research team's networks, including oncology/geriatric oncology healthcare providers, regional and international community and professional organisations, and digital health developers, if it is feasible for them to contribute towards the interpretation as well as the research and policy implications of the results, so as to improve the relevance and impact of this scoping review. In addition, we aim to engage with stakeholders to determine suitable approaches for dissemination and additional knowledge translation initiatives, so as to optimise knowledge translation.

PATIENT AND PUBLIC INVOLVEMENT
A patient and public involvement strategy or group was not used in the development of this scoping review protocol. Neither patients nor the public were involved in the design and development of the protocol. Nevertheless, we might invite older adults with cancer and their families from patient support groups within the research team's networks at the consultation phase to determine suitable approaches for dissemination of the review findings.

ETHICS AND DISSEMINATION
This scoping review protocol outlines a method to rigorously and systematically search and map the literature on digital health for older adults with cancer and their families. Since this review will only include published data, ethics approval will not be sought. Results from this review will be disseminated through peer-reviewed scientific journals and major professional conferences. We will also engage with relevant stakeholders within the research team's networks to determine suitable approaches for dissemination after the completion of stage five of the review methodology. We anticipate that our review results regarding the current state of knowledge and evidence about digital health in geriatric oncology could provide direction for future research efforts and inform clinical practice and policy. Congruent with the scoping review methodology guidance, we will consider the implications of the review results within the broader context. 25
Glutathione Peroxidase 3 as a Biomarker of Recurrence after Lung Cancer Surgery

We aimed to examine the usefulness of serum glutathione peroxidase 3 (GPx3) as a biomarker of lung cancer recurrence after complete resection. We prospectively collected serial serum samples at the baseline, as well as 3, 6 and 12 months after surgery, from complete resection cases in 2013. GPx3 levels were measured by enzyme-linked immunosorbent assay. Statistical tests including t-tests and Cox proportional hazard regression analyses were performed. In total, 135 patients were enrolled, and 39 (28.9%) showed relapse during the median follow-up period (63.60 months; range, 0.167–81.867). The mean GPx3 change was significantly higher in the recurrence group at 6 months (0.32 ± 0.38 vs. 0.15 ± 0.29, p = 0.016) and 12 months (0.40 ± 0.37 vs. 0.13 ± 0.28, p = 0.001). The high GPx3 change group showed significantly higher 60-month recurrence rates than the low group (48.1% vs. 25.2% at 3 months, p = 0.005; 54.5% vs. 28.9% at 6 months, p = 0.018; 38.3% vs. 18.3% at 12 months, p = 0.035). A high GPx3 change at 3 months was an independent risk factor for recurrence (hazard ratio (HR) 3.318, 95% confidence interval (CI), 1.582–6.960, p = 0.002) and survival (HR 3.150, 95% CI, 1.301–7.628, p = 0.011). Therefore, serum GPx3 changes after surgery may be useful predictive biomarkers for recurrence in lung cancer. Larger-scale validation studies are warranted to confirm these findings.

Introduction
In 2018, lung cancer was the most frequently diagnosed cancer worldwide, and the disease is associated with the highest mortality of all cancers [1][2][3]. Surgery is recommended for stage I and II lung cancer cases and some stage III cases. However, the recurrence rate is as high as 20% even in stage I disease [4,5], and this recurrence is associated with poor prognoses. Risk factors associated with lung cancer recurrence after surgery, in addition to the well-known TNM staging [5,6], include the degree of tumor differentiation [7][8][9], visceral pleural invasion [6,10,11], complete resection status [12,13], and angiolymphatic invasion [8,9]. However, serum protein biomarkers such as carcinoembryonic antigen, CYFRA 21-1 and neuron-specific enolase have not been investigated sufficiently in such settings, and do not show adequate sensitivity or specificity [14]. Therefore, a study to identify a blood biomarker for the early detection of lung cancer recurrence is required.

The correlation between the development of cancer and reactive oxidative stress has been reported in several studies [15][16][17][18][19]. Reactive oxygen species (ROS) cause direct or indirect DNA damage, and are involved in the development of cancer through gene mutation and the alteration of signal transduction [15][16][17][18][19]. In addition, a correlation between ROS and angiogenesis and metastasis has been reported [18,19]. Antioxidant enzymes like nicotinamide adenine dinucleotide and glutathione peroxidase (GPx) provide resistance against oxidative stress [19,20]. GPx3 is the only secretory form of the GPx family; it is a selenoprotein containing selenocysteine and acts as an antioxidant [21,22]. Hydrogen peroxide is detoxified through the oxidation of selenocysteine to selenic acid [23]. In this manner, GPx3 protects cells from oxidative stress [23]. Hypermethylation of the GPx3 promoter CpG island induces the downregulation of GPx3 expression [24].
Serum GPx3 level downregulation has been observed in many cancers, while its upregulation is connected to the suppression of tumorigenesis [15,21,[24][25][26]. Barret et al. found that tumor number is increased in GPx3 knockout mice [21], suggesting that GPx3 has a role as a tumor suppressor and that its downregulation is related to tumor progression and proliferation. A previous retrospective study reported the downregulation of GPx3 in lung cancer patients who underwent surgery [27]; therefore, serum GPx3 was proposed as a biomarker of early-stage lung cancer. Accordingly, we conducted this prospective study to examine the usefulness of serum GPx3 as a biomarker of recurrence after lung cancer surgery at a single institution.

Patients and Materials
A total of 165 patients underwent lung cancer surgery at Chonnam National University Hwasun Hospital in 2013. We defined 'complete resection (CR)', after discussions with thoracic surgeons, as the satisfaction of all the following conditions: (1) having undergone segmentectomy or a more extensive operation (e.g., lobectomy or pneumonectomy), (2) sufficient lymph node dissection or sampling, and (3) absence of postoperative stage IV disease. Exceptionally, we included 7 cases of wedge resection for ground-glass opacity nodules (GGNs) as CR cases; their histology was invasive adenocarcinoma. In total, 135 patients were classified into the CR group and enrolled for analysis. Serum samples were prospectively collected at the baseline, as well as 3, 6 and 12 months after surgery. All data were gathered in accordance with the amended Declaration of Helsinki, following the approval of the independent institutional review board (IRB) of Chonnam National University Hwasun Hospital (IRB approval number: CNUHH-2014-035). Blood samples were collected in BD Vacutainer SS Plus Blood Collection Tubes (BD Biosciences, USA). For serum collection, samples were centrifuged using a Rotina 380R centrifuge (Hettich, Germany) at 3000 rpm at 4 °C for 20 min. The samples were then stored at −180 °C until further laboratory use. GPx3 levels were measured three times per sample using an enzyme-linked immunosorbent assay (ELISA).

Statistical Analysis
For the statistical analysis, IBM SPSS Statistics version 25.0 was used. All ELISA data are expressed as mean ± standard deviation for continuous variables. When variables were normally distributed, the mean difference between the recurrence and non-recurrence groups was tested using Student's t-test or Welch's t-test. To determine whether there was a difference in the recurrence ratio between the variables, Pearson's chi-square tests and Fisher's exact tests were employed. Kaplan-Meier analysis was performed using recurrence and time to recurrence (months) as the status variable and time variable, respectively. Log-rank tests were used to test differences in the survival distributions across the subgroups of "GPx3 change". The GPx3 change was defined as the ratio of the difference between the measured and baseline values to the baseline value (GPx3 change = (measured GPx3 − baseline GPx3)/baseline GPx3). The cutoff value for the prediction of recurrence was defined as the highest Youden index (sensitivity + specificity − 1), with sensitivity and specificity values of 70% or higher, based on receiver operating characteristic (ROC) curve analysis.
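A minimal Python sketch of the GPx3 change and Youden-index cutoff computation just described is given below. The input arrays are hypothetical placeholders; the paper's actual cutoff (0.285) comes from its own data, not from this code.

import numpy as np
from sklearn.metrics import roc_curve

def gpx3_change(measured, baseline):
    # GPx3 change = (measured GPx3 - baseline GPx3) / baseline GPx3
    return (np.asarray(measured) - np.asarray(baseline)) / np.asarray(baseline)

def youden_cutoff(recurred, change, min_sens=0.70, min_spec=0.70):
    # recurred: 1 if the patient relapsed, 0 otherwise; change: GPx3 change values
    fpr, tpr, thresholds = roc_curve(recurred, change)
    sens, spec = tpr, 1.0 - fpr
    j = sens + spec - 1.0                          # Youden index
    ok = (sens >= min_sens) & (spec >= min_spec)   # keep only points with >= 70% sens and spec, as in the paper
    candidate = np.where(ok, j, -np.inf) if ok.any() else j
    best = int(np.argmax(candidate))
    return thresholds[best], sens[best], spec[best]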
Cox proportional hazard regression analysis was performed to analyze the effect of GPx3 levels on survival, using recurrence as the status variable and time to recurrence as the time variable, after controlling for confounding covariates such as smoking, age, histology, stage and adjuvant treatment. A p-value lower than 0.05 indicated statistical significance.

Thirty-nine (28.9%) patients were confirmed as having recurrence during the median follow-up period of 63.60 (range, 0.167–81.867) months. We divided them into two groups: the recurrence group and the non-recurrence group.

Risk Factors for Recurrence
In order to identify the factors that may affect recurrence after surgery, we analyzed known factors such as stage, pathologic invasion and adjuvant treatment. Pathologic invasion was confirmed based on a pathologist's report of visceral pleural invasion (n = 32), lymphovascular invasion (n = 22) or microscopic residual tumor on the resection margin (n = 3). Because 7 patients had more than one factor, a total of 50 patients were classified into the group with pathologic invasion. A chi-squared test and Fisher's exact test were performed to determine the relationship between these variables and recurrence (Table 2). In the groups without and with pathologic invasion, 17.6% (15/85) and 48.0% (24/50) of the patients, respectively, showed recurrence; the difference was statistically significant (p < 0.001, odds ratio = 4.308). On comparing the rates of recurrence in the stage I and non-stage I groups, 15.9% (14/88) and 53.2% (25/47) of the patients, respectively, showed recurrence (p < 0.001, odds ratio = 6.006). In the group that did not receive adjuvant treatment, the lung cancer recurrence rate was 18.1% (17/94), while the corresponding value was 53.7% (22/41) among the patients who received adjuvant treatment (p < 0.001, odds ratio = 5.245).

Kaplan-Meier Curve Analysis of Recurrence and Survival
We hypothesized that the GPx3 change and the recurrence rate would be related. To investigate this, we analyzed the GPx3 change and recurrence using Kaplan-Meier curves. Figure 1 shows the ROC curve based on the GPx3 change for the prediction of recurrence in the group without pathologic invasion (n = 85). The area under the curve for recurrence was 0.812 (95% confidence interval (CI), 0.657–0.968), and the cutoff value (0.285 µg/mL) was identified on the basis of the highest Youden index, with a sensitivity of 72.7% and a specificity of 72.3%. The sensitivity, specificity and Youden index for the prediction of recurrence at different cutoff values are shown in Table S1. We divided patients into the high and low groups based on the cutoff value, and the recurrence-free time after each measurement point was compared by a log-rank test. Lung cancer recurrence before each measurement was treated as censored data. At 3, 6 and 12 months, in terms of the GPx3 change, the high group showed a shorter time to recurrence, and all these differences were statistically significant (60-month recurrence rate 48.1% vs. 25.2% at 3 months, p = 0.005; 54.5% vs. 28.9% at 6 months, p = 0.018; 38.3% vs. 18.3% at 12 months, p = 0.035, Figure 2).
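The Kaplan-Meier/log-rank comparison and the covariate-adjusted Cox model used in these analyses can be sketched with the lifelines package as follows. The data file and column names are hypothetical stand-ins for the study data, not the authors' code.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# df: one row per patient, with hypothetical columns
# time_to_recurrence (months), recurrence (1/0), high_gpx3_change (1/0), age, stage, adjuvant, pathologic_invasion
df = pd.read_csv("cohort.csv")
hi, lo = df["high_gpx3_change"] == 1, df["high_gpx3_change"] == 0

# Kaplan-Meier curve for the high group and log-rank test between high and low GPx3 change groups
km = KaplanMeierFitter()
km.fit(df.loc[hi, "time_to_recurrence"], df.loc[hi, "recurrence"], label="high GPx3 change")
result = logrank_test(df.loc[hi, "time_to_recurrence"], df.loc[lo, "time_to_recurrence"],
                      event_observed_A=df.loc[hi, "recurrence"], event_observed_B=df.loc[lo, "recurrence"])
print(result.p_value)

# Multivariable Cox proportional hazards model adjusted for the listed covariates
cph = CoxPHFitter()
cph.fit(df[["time_to_recurrence", "recurrence", "high_gpx3_change",
            "age", "stage", "adjuvant", "pathologic_invasion"]],
        duration_col="time_to_recurrence", event_col="recurrence")
print(cph.summary[["exp(coef)", "p"]])   # hazard ratios and p-values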
The effect of the cutoff value on the survival rate of patients was also assessed using Kaplan-Meier curves. The high group tended to have lower survival rates than the low group at all the measurement time points (60-month survival rate 67.4% vs. 83.3% at 3 months, p = 0.069; 70.5% vs. 83.7% at 6 months, p = 0.140; 73.9% vs. 85.2% at 12 months, p = 0.197, Figure 3).

Multivariate Cox Regression of the GPx3 Change
To assess the impact of GPx3 on postoperative recurrence, we performed multivariate Cox regression analyses. Independent variables included age, sex, smoking, histology, stage, adjuvant treatment, pathologic invasion, and the GPx3 change at each month. Lung cancer recurrence before each measurement was treated as censored data. In all the measurements, the presence of a high GPx3 change (over the cutoff value) showed statistical significance as a risk factor for recurrence. Excluding cases of death, recurrence or censoring before the measurement point, high GPx3 changes were independent factors for postoperative recurrence at 3 and 12 months, but not 6 months (hazard ratio (HR) 3.318, 95% CI, 1.582–6.960, p = 0.002 at 3 months; HR 2.086, 95% CI, 0.907–4.795, p = 0.083 at 6 months; HR 4.018, 95% CI, 1.365–11.828, p = 0.012 at 12 months, Table 4). Additionally, pathologic invasion was an independent risk factor for recurrence at all the measurement time points. Disease stage was an independent risk factor at 3 and 6 months, but not 12 months. Using the same variables, we also investigated the hazard risk for death. In the Cox regression analysis, the GPx3 change showed potential as a risk factor for death; it showed statistical significance at 3 and 6 months, but not 12 months (HR 3.150, 95% CI, 1.301–7.628, p = 0.011 at 3 months; HR 3.322, 95% CI, 1.055–10.462, p = 0.040 at 6 months; HR 2.435, 95% CI, 0.918–6.457, p = 0.074 at 12 months, Table 5). Adjuvant treatment and stage were also independent risk factors for death at all time points, and pathologic invasion showed a statistically significant effect at 3 and 12 months.

Discussion
In this study, we found that serum GPx3 changes after surgery may be useful predictive biomarkers for recurrence. We assigned patients to the experimental group if complete surgical resection of the lung cancer was achieved. The high GPx3 change group at 3, 6 and 12 months postoperatively showed significant associations with shorter recurrence-free durations than the low group. However, the GPx3 change was not associated with overall survival by Kaplan-Meier analysis. In the Cox regression analysis, values higher than the cutoff GPx3 change at 3 months were revealed as independent risk factors for recurrence and survival. In the EAGLE study published in 2015, surgery was superior to non-surgical treatment in terms of survival in stage I-IIIA disease [5].
Lung cancer, including small cell lung cancer, is associated with a recurrence rate of 33.9% in stage I patients and 62.8% in stage IIIA patients who received surgery. The 1-year mortality in stage I patients before relapse was 2.7%, but increased to 48.3% after recurrence [5]. For non-small cell lung cancer (NSCLC), the recurrence rate in stage I disease after surgery was approximately 20% [4,28,29]. In stage I patients, the mean duration to postoperative recurrence was observed to be about 13 months [4]. In our study, the recurrence rates were 21.6% in stage I, 44.8% in stage II, and 80% in stage IIIA disease. Therefore, there is a need for more aggressive adjuvant treatments following the identification of populations with a recurrence risk using effective biomarkers.

We performed analyses among patients in whom the lung cancer was considered to have been removed completely. CR is known to be associated with recurrence after surgery. In 2005, a definition was proposed by the Complete Resection Subcommittee, a subgroup of the International Association for the Study of Lung Cancer [13]; however, no clear consensus has been reached yet [12]. Nevertheless, it seems clear that CR is an important prognostic factor [12,30]. In cases with GGN, it is controversial whether wedge resection or segmentectomy yields better CR results [31]; however, several studies have reported the absence of significant differences between the two surgical methods in GGN [32]. Wedge resection of GGN was considered to achieve CR in our analysis.

The risk factors for recurrence after the surgical resection of lung cancer have been reported [8,33]. Several studies have reported tumor size, visceral pleural invasion, angiolymphatic invasion, tumor grade, and complete surgical resection as significant risk factors for recurrence [33]. The TNM stage is widely accepted as a risk factor for recurrence and survival [8]. In our study, similarly, there was a statistically significant difference in the postoperative recurrence rates between the stage I cases and the others. Several studies have reported that pathologic variables such as visceral pleural invasion and lymphovascular invasion are associated with cancer recurrence after surgery [8,10,11]. Visceral pleural invasion became an independent factor in determining the T stage, regardless of tumor size, with the publication of the 7th edition of the TNM staging system [8,34]. Lymphovascular invasion is known to be associated with prognoses in various cancers [8]. In NSCLC, lymphovascular invasion has been accepted to be associated with relapse [9,10], but this is not yet reflected in the TNM staging. When the presence of visceral pleural invasion, lymphovascular invasion or microscopic residual tumor on the resection margin was confirmed on histology, we defined the cases as having pathologic invasion, which was also correlated with recurrence. Interestingly, when multivariate Cox analysis was performed with several risk factors known to affect recurrence, a GPx3 change above the cutoff value (0.285 µg/mL) was found to be an independent risk factor for recurrence. In addition, a high GPx3 change at 3 months was revealed as a significant factor related to survival.

GPx is an enzyme that reduces the levels of hydroperoxides, phospholipid hydroperoxides, and fatty acid hydroperoxides [24]. A total of eight sub-types constitute the GPx family; among them, GPx1 to GPx4 and GPx6 are selenoproteins found in humans [35].
GPx3 does not have an endoplasmic reticulum retention signal and is uniquely secreted into the plasma [23]. The expression of GPx3 has been observed in various organs including the liver, kidney, heart, lung and intestine [21,36]. The downregulation of GPx3 has been observed in numerous cancers including hepatocellular carcinoma [26], prostate cancer [37], gastric cancer [38], Barrett's adenocarcinoma [39], glioblastoma [40], and cervical cancer [35]. Therefore, the potential of GPx3 as a tumor marker has been widely suggested. In vivo and in vitro studies have shown that GPx3 is associated with invasion, metastasis, recurrence, sensitivity to chemotherapy, prognoses, and tumorigenesis [24,26,35,37,39]. It is not clear how GPx3 aids in tumor suppression. In addition to protecting cells from oxidative stress for the prevention of tumorigenesis [19], one theory suggests that the degree of tumorigenesis and metastasis is suppressed through intracellular signaling [41]. Recently, An et al. demonstrated, for the first time, that GPx3 suppresses the proliferation of lung cancer cells by modulating redox-mediated signals [42]. The study revealed that GPx3 prevents the destruction of MAP kinase 3 by ROS, ultimately suppressing cyclin B1 expression through the ERK/NF-κB pathway and arresting the cell cycle at the G2/M phase.

The main advantage of this study is the serial prospective monitoring of serum GPx3 changes after surgical resection for the identification of a biomarker of recurrence. Most previous studies reported on gene expression using tissue samples [35,37,38] or plasma/serum GPx3 levels at specific time points such as the diagnostic phase [26,40]. However, this study has several limitations. First, it had a small sample size and a single-institution design. As a result, the absolute GPx3 value (rather than the GPx3 change) was not associated with tumor recurrence; furthermore, we could not perform separate analyses for several different pathological factors, such as pleural and lymphovascular invasion. Second, the cohort was not homogeneous in terms of pathologic stage, with the majority of patients having stage I lung cancer, or in terms of adjuvant therapy. To minimize the effect of heterogeneity, we used a Cox proportional hazards model including other known risk factors such as adjuvant therapy, pathologic invasion and stage. However, the generalizability of our findings to cases with other disease stages may be low. Third, the causal relationship between GPx3 and recurrence is not clear. Although it is unclear how an increase in a supposedly protective marker leads to an increase in recurrence, we suggest one explanation based on the review by Chang et al. [43]. GPx3 expression is decreased in patients with smoking-induced chronic obstructive pulmonary disease due to chronic adaptation [44]. Conversely, GPx3 levels in the epithelial lining fluid of cigarette smokers are higher than in non-smokers, probably in response to the increased exogenous ROS [45]. Similarly, it is possible that GPx3 levels are elevated to resist the increased oxidative stress caused by recurrence in lung cancer patients during the postoperative period. Fourth, this study did not reveal the origin of GPx3 production after complete resection. GPx3 upregulation can also be found in non-cancerous fibrotic lung tissues, such as those with idiopathic pulmonary fibrosis, a disease associated with oxidative stress [46].
In conclusion, the high GPx3 change group showed significantly higher recurrence rates than the low group. Additionally, a high GPx3 change at 3 months was revealed as an independent risk factor for recurrence and survival. Therefore, the serum GPx3 change after surgery may be a useful predictive biomarker for recurrence in lung cancer. Further studies are warranted to examine how GPx3 affects oxidant scavenging and redox signaling in the extracellular tumor microenvironment.

Supplementary Materials: The following is available online at http://www.mdpi.com/2077-0383/9/12/3801/s1: Table S1: Sensitivity and specificity in the prediction of postoperative recurrence on the basis of different cutoff GPx3 change values.
Pinball dynamics: unlimited energy growth in switching Hamiltonian systems

A family of discontinuous symplectic maps on the cylinder is considered. This family arises naturally in the study of nonsmooth Hamiltonian dynamics and in switched Hamiltonian systems. The transformation depends on two parameters and is a canonical model for the study of bounded and unbounded behavior in discontinuous area-preserving mappings due to nonlinear resonances. This paper provides a general description of the map and points out its connection with another map considered earlier by Kesten. In one special case, an unbounded orbit is explicitly constructed.

Introduction.
The theory of small perturbations of completely integrable Hamiltonian systems has a long history that goes back to the 19th-century effort to explain the stability of planets. The major breakthrough occurred in the late 1960s, when Kolmogorov-Arnold-Moser (KAM) theory was created. KAM theory states that, under some non-degeneracy conditions, stable motion persists in a completely integrable Hamiltonian system under a sufficiently small and smooth perturbation. For the original application, the three-body problem, smoothness was not an issue, as the gravitational force is analytic outside of a small set of singularities. However, further applications of KAM theory to stability problems in physics and engineering do require limited smoothness assumptions and also weaker forms of the so-called twist (nonlinearity) conditions.

The degree of smoothness of the perturbation plays a crucial role in the theory. In his famous ICM lecture, Kolmogorov gave an outline of the theory where he required analyticity. Shortly afterwards, V.I. Arnold proved Kolmogorov's statement, also under the assumption of analyticity. Independently, combining Kolmogorov's method with the Nash smoothing technique, Moser proved a KAM-type theorem requiring 333 derivatives. Subsequently, the smoothness requirement was reduced to single digits (C^3), and several counterexamples have been found for lower-regularity maps, see e.g. [5]. Moser proved his theorem for the case of area-preserving monotone twist maps of the annulus. In this article we also restrict our attention to the representative case of twist maps on the plane, which corresponds to periodically forced Hamiltonian systems with one degree of freedom.

The above KAM counterexamples, which were constructed for general twist maps, do not provide a tool to decide stability in specific physics problems. Therefore, it is important to investigate special maps arising in applications. We note that even in the most extreme case of discontinuous maps, the stability problem is already nontrivial. In the next section, we review several such systems where the boundedness problem for discontinuous maps naturally arises. Then we introduce a simple family of discontinuous twist maps, which captures the essential properties of those examples. The family contains a natural physical system which we call the pinball transformation. The hallmark of the pinball map is the small twist, which on the one hand frequently occurs in applications, and on the other hand makes the stability problem rather delicate. In higher dimensions, even the presence of KAM tori does not ensure stability. Energy growth in smooth Hamiltonian systems in higher dimensions is an active area of research, see e.g. [6,12].

Discontinuous twist maps and the αz-transformation.
Discontinuous maps arise naturally in Hamiltonian systems with impacts, such as Fermi-Ulam problem, billiards, and more recently in hybrid or switched systems. It is usually the case that under the additional smoothness assumptions, KAM theory applies assuring boundedness of energy in all those problems. One should also keep in mind that while the general monotone twist maps are characterized by a function of two variables h(x 1 , x 2 ), these particular examples correspond to symplectic maps characterized by function of one variable, e.g. for billiards h(x 1 , x 2 ) = ||x 2 − x 1 ||. Such a restriction makes it nontrivial to construct physically meaningful escaping trajectories. For the readers' convenience, now we briefly describe several such systems. Example 1: Particle in square wave switching potential. Hybrid or switching systems is an active area of research in applied mathematics and engineering sciences, see e.g. [7,11,1]. A prototype example of a switching system, where boundedness problem is already non-trivial, is a classical particle in square wave periodic potential which changes the sign, periodically in time. More precisely, let the potential be V (x) = (−1) [x] and assume the potential is switched every . While such potential is not differentiable, there is a natural way to define the dynamics by using the energy relation: the kinetic energy changes by 2 if the particle passes t ∈ Z integer points. It is common to ignore the singular subset of the extended phase space (t,ẋ, x) where there is discontinuity in both time and space and the dynamics is not defined. Such subset has zero measure. Outside the singular set, the particle moves with constant speed v = √ E ± 1, gaining or losing energy by two at each switching, see the appendix for more details. Example 2: Fermi-Ulam accelerator. The Fermi-Ulam system consists of a classical particle bouncing between two periodically moving walls. The application of KAM theory shows that velocity (or energy) of the particle is uniformly bounded |v| < C(v(0)), provided the periodically moving wall's position is sufficiently smooth p(t) ∈ C 5 (0, T ), see [9]. Fermi-Ulam problem can be reduced to a particle traveling in a periodic non-smooth potential It turns out that lack of smoothness in x (e.g. due to the presence of the wall in Fermi-Ulam problem) does not destroy bounded behavior as one can exchange the role of time and coordinate and then obtain a smooth monotone twist map by integrating over x, see e.g. [10]. If there is lack of smoothness in both space and time in the periodic potential problem, then KAM theorems do not apply. In the worst case the map is discontinuous, but even then, finding unbounded solutions could be challenging. One case, however, is more tractable: if jumps in the velocity (energy) are so large that the solution makes full revolution over one period of forcing so it will be in tune for the next velocity increase. A typical example would be given by this map Such scenario takes place in Fermi-Ulam problem if p(t) has saw-tooth like shape. But, if the velocity increments are smaller, then the twist will eventually detune the solution out of the resonance. In the appendix, we give some heuristic description how our discontinuous twist map is related to this example. αz-Map: A model of boundedness problem for discontinuous twist maps. 
In this paper, we introduce a two-parameter family of discontinuous monotone twist maps that seems to capture the essential difficulties of several switching-like (discontinuous) systems. The map is given by and will be referred to as αz-map, where α and z are parameters. Note, that the map is invariant with respect to the natural scaling: varying the amplitude of the changes in the second variable or varying the length of the base circle in the first variable will lead to the equivalent system with different values of parameter α. We also observe that αz transformation preserves the unit-step lattice in action variable. In other words, the action variable is quantized for any fixed initial condition. For different values of parameters αz map corresponds to some natural systems: • z = 1, Fermi-Ulam with saw-tooth p(t), discontinuous standard map. • z = 0, Erdös-Kesten system (skew product of irrational rotation with jumps), which is defined in the next section. • z = -1, pinball problem, which is studied in this paper. We explain in more details how αz-transformation arises in each of these examples in the appendix. Zero twist example. Erdös-Kesten system. The following system was introduced by Erdös and studied by Kesten [8] independently of any KAM theory-type of problems. Erdös considered irrational rotation on the circle and asked what is the discrepancy between the orbit visiting different open subsets of the circle having equal measure. In particular, one can consider two halves of the circle x ∈ (0, 1/2) and x ∈ (1/2, 1). In our notation, his system corresponds to the map with z = 0. In this degenerate case, there is no twist in the system and the dynamics is a skew product. Thus, one can easily provide a set of values of parameter α (e.g. α = 1) for which there are unbounded orbits. On the other hand for α = 1 2 any trajectory of the system (2) is bounded since any point has period exactly 2. In the generic case of irrational values of α, Erdös' question leads to an interesting number-theoretic problem. General result can be found in the paper by Kesten [8] where it is stated that for almost every α there is a set of positive measure of orbits which escape to infinity but return to zero infinitely often. Most contemporary analysis of this phenomena can be found in [13]. Surprisingly, Erdös-Kesten (EK) system becomes important in the study of discontinuous twist maps after an appropriate renormalization procedure is carried out. Elementary properties of αz-map. For non-degenerate twist z = 0 the following properties hold: • For any z < −1 nearly half of trajectories of the system (2) escapes to infinity. It immediately follows from the fact that n n z converges. • Fix z ∈ N, then for α = 1, it is easy to verify that half the orbits are unbounded and for α = 1 2 all the trajectories are periodic. • The most interesting and difficult problem of boundedness occurs for z ∈ [−1, 1). Pinball system Now, we describe a simple mechanical system that corresponds to the case z = −1. Consider now Fermi-Ulam like system with the fixed walls, but with one of the walls containing a pinball mechanism: the momentum of the particle increases or decreases when it hits the wall according to the following law: i.e. the momentum is increased (decreased) during the first (second) half period. This dynamics is described by the map with α being the distance between the walls and z = −1. We rewrite the system (2) for z = −1 and it will be called the pinball transformation that will be denoted by P. 
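The explicit formula for the pinball transformation P appears to have been lost in the extraction of this text. Purely as an aid to intuition, the short Python sketch below iterates one plausible reconstruction of the map, consistent with the verbal description above: the action I receives a unit kick whose sign depends on which half of the base circle φ ∈ [0, 2) the angle lies in, and the angle then advances by the small twist α/I. Whether the kick is evaluated at the old or the new angle, and whether the twist uses the old or the new action, are assumptions of this sketch, not statements of the paper.

import math

# One iterate of an assumed pinball map on the cylinder [0, 2) x R+:
# unit momentum kick (sign set by the half of the base circle the angle
# lies in) followed by a small twist alpha / I in the angle. This is a
# reconstruction under assumptions, not the paper's exact formula.
def pinball_step(phi, I, alpha):
    I_new = I + 1 if phi < 1.0 else I - 1      # kick at the wall
    phi_new = (phi + alpha / I_new) % 2.0      # small twist, z = -1
    return phi_new, I_new

def action_along_orbit(phi0, I0, alpha, n_steps):
    # Track the action variable along an orbit to inspect boundedness numerically.
    phi, I = phi0, I0
    actions = [I]
    for _ in range(n_steps):
        phi, I = pinball_step(phi, I, alpha)
        actions.append(I)
        if I <= 1:          # crude guard: stay in the half-cylinder I > 0
            break
    return actions

if __name__ == "__main__":
    # Theorem 2 below concerns alpha = 1 / ln(2m); try m = 1.
    alpha = 1.0 / math.log(2.0)
    acts = action_along_orbit(phi0=0.0, I0=50, alpha=alpha, n_steps=200_000)
    print("initial:", acts[0], "final:", acts[-1], "max:", max(acts))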
For the sake of clarity, it would be more convenient to consider the base circle ϕ ∈ [0, 2). Numerical experiments show that for typical values of parameter α, the trajectory of the system (4) is nearly recurrent for a long time and moreover approximates some piecewise smooth function having singularities only at the discontinuity lines ϕ = 1 and ϕ = 0. Our goal is to explain this behavior by renormalizing the induced transformation in the socalled fundamental domain Φ. The fundamental domain is related to Poincaré section for flows in the sense that any orbit returns to it. In the Pinball map, we define the fundamental domain as the set of points (ϕ, I) located between singular line ϕ = 0 and its image The angular coordinate on Φ will be denoted byφ ∈ [0, 1]. Our first statement concerns the asymptotic description of the first return map: µ be characteristic functions of the 1 µ neighborhoods of the boundary of the fundamental domain, correspondingly, where [·] denotes an integer part and Functions χ + and χ − are characteristic functions of two intervals I + and I − respectively. If µ is rational, then the map might possesses a uniformly growing trajectory. Indeed, if one of the periodic points stays longer in the positive part of the base interval than in the negative part, then the corresponding trajectory grows without bound. Our construction of an escaping trajectory of the system (4) consists in choosing the initial data in an appropriate way so as to kill the leading order perturbation of the map. Next, we would have to estimate that the remaining perturbation will not destroy such "resonant" growth. Combining these ideas, we prove Theorem 2. For α = 1 ln 2m , m ∈ N there exists an unbounded trajectory in the system (4). Remark 1. If µ is irrational, then the leading order part of the map is reminiscent of EK map. Indeed, ignoring characteristic functions, the major part of the map takes the form It seems likely that the angular variable will be uniformly distributed and one should expect similar behavior as found by Kesten. This will be the subject of future investigation. Remark 2. Note that smoothing the signum function discontinuity in (4) will make KAM theory applicable and then all solutions will be bounded. Indeed, change the variables: (ϕ, y) = (ϕ, I −1 ). In the new variables, the smooth version of the Pinball transformation takes the form where f is smooth and |f (ϕ, y)| < 1. Then so the perturbation is of order O(y 2 ) which is much smaller than the twist. The curve intersection property follows from the area-conservation in the original variables. Therefore, this map satisfies the conditions of monotone twist theorem, see e.g. [12]. Proof of Theorem 1. Recall the definition of the fundamental domain as a subset Φ ⊂ S × R + between the discontinuity line φ = 0 and its first iteration: and consider the transformation T (ϕ, I) = (ϕ , I ) as the first return map for any point (ϕ, I) ∈ Φ according to (4). We have the following bound on the action change In other words, Lemma 1 states that as the angle variable winds around the cylinder and the action variable undergoes large changes, after returning to the fundamental domain, the action will not change by more than 1. This property assures a good local control on the orbits. where I + and I − are intervals of equal measure and consist of all points (ϕ, C) such that T (ϕ, C) = (·, C ± 1) respectively. Finally, we have the following lemma which ends the proof of Theorem 1. Proofs Proof of Lemma 1. Proof. 
We begin with giving a heuristic argument based on the Figure 2 The numbers n and n are uniquely defined by the relations: The equation (9) means that n-th iterate is the last one staying in the right half-circle, so the next iterate will be in the left half-circle. Similarly (n + 1) + n -th iterate is the last one before returning to Φ, see Figure 2. Proof of lemma 2. Proof. As already found in the proof of Lemma 1, transformation T : Φ → Φ in ϕ-variable takes the form Define δ 1 , δ 2 by the relations (see Fig. 2) then it is easy to see that Then the expression (16) could be rewritten in the form In particular, the first inequality in (17) implies that for ϕ ∈ I + one has ϕ < α I + n + 1 , since α I + n + 1 = δ 2 + δ 1 . Similarly, it is easy to verify that if ϕ ∈ I − then ϕ > α I Assume now that the number of terms in S 1 (ϕ , I) is different from n. Assume it contains n − 1 terms (all other cases can be treated similarly). Then we have, see Figure 3. Finally, we have The latter expression is negative since by construction This ends the proof of the proposition. Using proposition 1, we will find the set I + . Let the initial angle be ϕ 0 = 0 then If δ If the converse holds, i.e. δ we obtain This ends the proof of the lemma. The location and the measure of the intervals I + and I − are then controlled by the fractional parts of the sums αS 1 (0, I). We shall estimate these quantities in the proof of the next lemma. Proof of Lemma 3. Proof. For the clarity of presentation, we give the proof only for the case I = N . The general case can be treated similarly. We introduce the notation for the expression αψ(ϕ, I) = 2αS 1 (ϕ, I) + α I + n + 1 − 2 from (15). In the rescaled variables, the transformation (15) takes the form Proposition 2. Assume that µ ∈ (1, 3), then for ϕ ∈ I + the number n is given by where µ = exp(1/α) and Remark 3. If µ > 3, then an additional term µ−1 2 has to be added in (20). We omit the detailed calculations of this more general case, which can be reproduced in the same way. Proof. By proposition 1, the number of steps in the positive half of the cylinder n is independent of ϕ ∈ I + , therefore it is sufficient to verify (20) only for some ϕ ∈ I + using (9). Moreover, we can take ϕ = 0 even though this point might not be in I + . Indeed, according to lemma 2 the point ϕ = 0 satisfies ϕ ∈ I + or ϕ ∈ I 0 . If the former, we are done. If the latter then it is possible to see that by increasing ϕ we will eventually cross into I + without changing n (the number of steps in positive part of the cylinder). We have where H k denotes k-th harmonic number. The harmonic number has the following asymptotic expansion where B j denotes j-th Bernoulli number. Therefore, We will take n = [µ(N − 1)] − N + γ and will verify that γ is given by the above Heaviside function. Using the relation [µ(N − 1)] = µ(N − 1) − {µ(N − 1)} and denoting := γ µ(N − 1) and we can rewrite (20) Substituting this expression into (21) we obtain Now, we expand the expression for S 1 in the perturbation series using Collecting all the terms of the same order in N up to O(y) and O(N −3 ) We need to choose γ in such a way that αS 1 < 1 but αS 1 + 1 N +n+1 > 1. 
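The asymptotic expansion of the harmonic numbers invoked above did not survive extraction. Presumably it is the standard expansion, recorded here in LaTeX for completeness; γ_E denotes the Euler–Mascheroni constant (not the integer γ chosen in this proof) and B_{2j} are the Bernoulli numbers:

H_k \;=\; \ln k + \gamma_E + \frac{1}{2k} - \sum_{j \ge 1} \frac{B_{2j}}{2j\,k^{2j}} \;=\; \ln k + \gamma_E + \frac{1}{2k} - \frac{1}{12k^{2}} + O\!\left(k^{-4}\right).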
Recalling that α = 1 ln µ and that the leading contribution for deviation from 1, is controlled by terms of order 1/(N − 1), we must assure The additional summand 2 comes from We rewrite two inequalities in a more compact form Now, if γ = 0, then clearly left side of the inequality holds, since µ > 1, and then right side would also hold provided µ + 2{µ(N − 1)} < 3. Thus, γ = 0 if µ + 2{µ(N − 1)} < 3. If, on the other hand, µ + 2{µ(N − 1)} > 3, we can take γ = 1 as the left side of the inequality will hold. The right side will also hold if apply the assumption µ < 3. Therefore, we can finally conclude γ = χ 0 (µ + 2{µ(N − 1)} − 3). Now, we will derive an explicit expression for the first return map T . Multiply (22) by 2α So we get for ψ(ϕ, N ) And finally (using that χ 2 0 = χ 0 ), we have Multiplying both sides of the equation (24) by the scaling factor N − 1 α we obtain Since ϕ ∈ I + we can use (19) Thus, in the leading order in the rescaled variables, the transformation takes the form Case ϕ ∈ I − . By the direct calculations, one obtains Indeed, moving in the opposite direction, we have Neglecting terms of order O(N −2 ) and rearranging the terms , we obtain Multiplying with (N − 2)/α, we arrive at the above formula since there we neglect terms of order O(N −1 ). Case ϕ ∈ I 0 . We already know that I 0 is either a single interval with I + and I − complementing it to the full fundamental interval or I 0 consists of three intervals. In the latter case, Next, we follow the previous calculation of the transformation T on I + indicating the changes. First, recall that on I 0 , we always have Thus, we can modify (23) for our case by subtracting 1 from H Next using αψ = 2αS 1 − 2, we have After slight simplifications, we finally havẽ Now, we consider the leftmost subinterval of I 0 , if it exists. By previous arguments, n is the same as for the I + , so we have Using again αψ = 2αS 1 − 2 and following the same calculations, we find The last case is the rightmost subinterval of I 0 if it exists. Similar argument leads tõ Proof of Theorem 2. Proof. If α = 1 ln n then µ = n and therefore y ≡ 0. Thus, all the terms in series expansion Other cases can be treated similarly. Thus, we get To construct an unbounded orbit, we search for an initial pointφ 0 satisfying two conditions: • The imageφ 0 under the transformation T coincides with initial angle up to the new normalizationφ 0 . Note that for a larger action variable the scaling factor would be different. Therefore, we havẽ 1 Nφ 0 and the "renormalized fixed point" condition yields Thus, up to the order O(N −3 ) . As a result, forφ To check the first statement χ + (ϕ 0 ) = 1 it is sufficient to show that ϕ 0 + 2αS 1 + α N + n + 1 is greater than 2. This immediately follows from (23) and the estimate Remark 4. For µ = 2m + 1, m ∈ N rotation number for the map equals (µ − 1)/µ which guarantees that any trajectory could not hit I + two successive times. Moreover such map has a fixed point in the set I 0 . However since renormalization for the neutral set is an identity map, drift produced by the second order term could not be compensated. We conjecture that in this case one could possibly derive an exponential bounds on the rate of action growth for the map T . Remark 5. We conjecture also that the same arguments as in Lemmas 1 -3 can be used for other values of z ∈ (−1, 0). Here instead of asymptotic expansion for harmonic numbers one should use generalized harmonic numbers H n,z . 
Since H n,z → ζ(k) as n → ∞ one can establish the relation for H (I+n),z − H (I−1),z = 2 to find the expression for n. Arguments in Lemma 1, 2 and 3 can be applied with minor changes. Appendix A. In the appendix we give more detailed derivation of some specific problems that lead to αz map. Example 1: Particle in switching potential. Consider a classical particle moving on the line in the square wave potential V (x) = (−1) [x] and assume the potential is switched every . While such potential is not differentiable, there is a natural way to define the dynamics by using the energy relation: the kinetic energy changes by 2 if the particle passes t ∈ Z integer points. We should ignore the singular subset of the extended phase space (t,ẋ, x) where there is discontinuity in both time and space and the dynamics is not defined. Such subset has zero measure. Outside the singular set particle moves with constant Since the dynamics defined for x mod 2, t mod 2 we can project the dynamics on a plane onto the system on a cylinder [0, 2) × R + . Write the Hamiltonian form of the unit time step transformation for this system. Hamiltonian has a form H(x,ẋ, t) =ẋ where α is some constant. Clearly, this system can be considered as a particular example of transformation (2) with z = 1 2 . Example 2: Fermi-Ulam acceleration. In Fermi-Ulam problem the particle bounces between two walls. Assume that one wall is at rest x = 0 and the other moves periodically x = p(t), p(t + 1) = p(t) > 0. There is a standard transformation "stopping" the wall, see e.g. In the new variables, the equation takes the form y +pp 3 y = 0, where denotes the derivative with respect to τ . Evaluating the one period map, under the assumption that p is piecewise linear, we obtain (28)      y 2 = (y 1 + y 1 ) mod 1 y 2 = y 1 + y 2 sgn (y 2 − 1/2). This mapping is not a particular case of αz-map but it corresponds to the linear growth of the action when z = 1. While showing unbounded growth is relatively easy in this case, more challenging problem is to estimate the relative measure of bounded solutions. This has been done in [2]. Example 3: Outer billiards with degenerate boundary. Consider the outer billiard system. Let γ be a smooth strictly convex curve on the plane. Take any point x ∈ R 2 outside of γ and let l(x) be a ray tangent to γ and oriented in the counter-clockwise direction. There is another point on the ray T (x) which has the same distance to the tangency point as x. This defines the outer billiard map, see e.g. [15]. A natural question is whether all the orbits are bounded. Thus, one is led to study this map for large x. Assume that γ is a unit circle centered at the origin, then for large x the square of the map T 2 is close to identity and it leaves concentric circles invariant. The angle changes by a factor of 1/|x|, which corresponds to z = −1 in the αz-map. Now, consider the circle with a small circle segment removed. Then, the map becomes a small discontinuous perturbation of the above integrable map. When, γ is half the circle, the outer billiard has unbounded orbit, see [4,3].
The initial study on implementation of vertical greenery in Malaysia Mass urbanization and rapid global population growth have led to the emergence of dense urban areas. For buildings located in densely built urban areas, horizontal green space is minimal. One practical idea to increase the presence of greenery is to plant upwards when horizontal space is a constraint. This research addresses the case of Malaysia, as Malaysia has yet to embrace vertical greenery implementation for buildings. The first aim of this research is to determine the significance of implementing vertical greenery. The second aim is to identify the barriers to the implementation of vertical greenery, and the final aim is to provide practical approaches and recommendations to increase the adoption of vertical greenery in Malaysia. The methodology of this research is a qualitative method whereby interviews are conducted. Vertical greenery is introduced to provide environmental, economic and social benefits. On the environmental side, temperature can be reduced and energy efficiency can be improved. Economically, vertical greenery can assist in energy savings and improve acoustic insulation. As for the social impacts, the presence of vertical greenery provides a pleasing and better environment for the community. With these three aspects, sustainability can be achieved. The main barrier to the implementation of vertical greenery is the high cost of construction. The government should place more emphasis on implementing vertical greenery as it is considered a promising solution with significant benefits. Introduction One of the ancient Seven Wonders of the World, the Hanging Gardens of Babylon, is the pioneer of the concept of vertical greenery [1]. Although the application of climbing plants along the facade of a building is not an unfamiliar trend, the purpose and the systems of vertical greenery have evolved. As stated by [2], green walls were historically used for ornamental or horticultural purposes. However, in the modern era, other passive techniques of vertical greenery are used to improve building sustainability [3]. In the modern era, high-density urban spaces have minimal horizontal green spaces, which leads to less greenery for the entire building and community. Using the outer surface of buildings by implementing vertical greenery would allow the amount of greenery to be increased in high-density cities where horizontal green areas are at a minimum [4]. Accordingly, results by Bass and Baskaran [5] suggest that planting on walls is an inventive and rapidly developing field in the direction of sustainable construction. Apart from that, since the vertical surface area of buildings is much larger than the available horizontal surface area, there is more potential for the adoption of vertical greenery. Hence, the chances of improving environmental issues through vertical greenery are greater than with green roofs [6]. As mentioned by [2], the building sector accounts for 40% of energy consumption. Although there is an abundance of technological methods to reduce these negative impacts, greenery may be a simpler, easier and cheaper method, making it accessible to more people than purely technological measures. Likewise, the use of plants provides benefits because plants are a clean resource [7] [8]. Vertical Greenery The term vertical greenery carries a divergence of meanings across various authors.
Vertical greenery is generally defined as plants growing on vertical surfaces. [3] defined vertical greenery as structures that allow vegetation to spread over a building facade or interior wall. According to [9], "Vertical greenery are different forms of vegetated wall surfaces, based on the spreading of plant species across the wall surface by using vertical structures, which may or may not be fixed to an indoor wall or to a building facade." All of the stated definitions of vertical greenery have similar meanings. There are other similar terms for vertical greenery, namely green walls, living walls, green facades and so on. Nonetheless, vertical greenery is a comprehensive and commonly used term for plants growing vertically on surfaces [10]. As such, the term vertical greenery will be used in this research. Significances of Implementing Vertical Greenery The significances of implementing vertical greenery are divided into three categories, namely environmental, economic and social benefits. Environmental Benefits One significance of implementing vertical greenery is that it is able to reduce and regulate the surrounding temperature. In an experiment carried out in Hong Kong, [11] observed that tall buildings with walls and roofs covered with plants achieved a temperature reduction of 8-9 °C. As a result, energy efficiency can be improved through external temperature regulation. Vertical greenery is a promising solution to make buildings more energy efficient [2]. Energy efficiency is also improved through ambient temperature reduction via shading and evaporation from plant processes. Vertical greenery also creates a buffer zone against the wind during the winter period, contributing to energy efficiency [12]. Moreover, vertical greenery plays a crucial role in shaping the urban microclimate [13]. Vertical greenery is used as a new bioclimatic design concept for buildings in order to reduce the urban heat island effect [14][15] [16]. This is possible as plants take in a significant amount of solar radiation for respiration, transpiration and photosynthesis [17]. By reducing the urban heat island effect, natural cooling processes are promoted and the ambient temperature in urban areas is reduced. Another significance of vertical greenery is that it is able to absorb dust and clean the air, thus improving air quality [18] [19]. By the same token, vertical greenery is able to reduce the greenhouse effect through carbon dioxide absorption; as the plants grow, greenhouse gases are absorbed from the atmosphere. In addition, vertical greenery provides ecological restoration for flora and fauna. Designing vertical greenery for biodiversity requires designers to have a certain knowledge of the plant requirements and the specific needs of the fauna. For example, climbing plants like Climbing Hydrangea (Hydrangea anomala petiolaris) and Morning Glory (Ipomoea tricolor) are known to attract butterflies and hummingbirds [12]. Another advantage of adopting vertical greenery is that it can improve rainwater retention. Rainwater is collected in the hydroponic system and used for plant irrigation [20]. Thus, it is more logical and more economical to make use of rainwater to irrigate the vertical greenery.
Economic Benefits Lately, sustainable cities have discovered that implementing greenery is an essential element in addressing noise pollution, leading to an increase in the popularity of vertical greenery [13]. In the same manner, vertical greenery can control noise and act as a noise barrier [20]. The substrate in which the plants grow possesses a sound-absorbing effect and is able to provide acoustic benefits [21]. Besides that, Wong [13] stated that indoor vertical greenery may be effective in protecting speech privacy. Another significance of vertical greenery is that it is an effective tool for energy savings. The shade effect comprises the interception of solar radiation by plants [3]. One method is to use vertical greenery as window shading [5]. It has the properties of an appropriate shading system, increasing daylight while decreasing discomfort glare [22]. During the summer months in Hong Kong, an average daily electricity saving of 16% was attained by implementing vertical greenery [23]. The next economic benefit is that vertical greenery is suitable for retrofitting projects. It is crucial to initiate energy conservation retrofits such as incorporating vertical greenery to reduce energy consumption, as 32% of energy consumption originates from heating and cooling [24]. Retrofitting in this way is also more economical than demolishing the old building and constructing a new one [25]. The implementation of vertical greenery also helps to create new job opportunities in the economy. New businesses and job opportunities are formed when the government and private sector take the initiative to adopt vertical greenery to enhance the environment and establish an identity in the green market [26]. Besides that, adopting vertical greenery can increase property values because of its aesthetic and functional properties. This means that vertical greenery is a marketable green feature as it adds value to homes and businesses. Social Benefits Vertical greenery is aesthetically pleasing as it enhances the architectural design. In the urban environment, vertical greenery can hide and obscure unappealing sights by covering deformed structure surfaces with plants [26]. Moreover, vertical greenery can serve to isolate views. Mechanical and electrical components of a building's systems that ruin the aesthetics of the building can be hidden by using vertical greenery [12]. Other than improving appearance, several studies have linked plants to improved human health and mental well-being. Symptoms such as headaches might be reduced by a minimum of 20% [27]. Mental health can be improved, as green plants in working environments and classrooms are able to decrease absenteeism among employees by 5-15% while the stress levels of students decreased; at the same time, the productivity of the students increased by 12% [28]. Vertical greenery is developed to reduce the negative impacts of rapid urbanization by providing an alternative form of green space for city dwellers [26]. By connecting the occupant directly to natural elements, occupant satisfaction and productivity can be increased [29]. Despite the above-mentioned advantages, an intensive literature review found that vertical greenery is not a popular topic in Malaysia. No government authority or professional board has a comprehensive record of vertical greenery buildings in Malaysia. Barriers to Implementation of Vertical Greenery Cost of Green Construction.
Based on the calculations, the total cost of vertical greenery for a surface area of 35 meters x 50 meters is certainly higher than that of conventional horizontal planting. The most expensive type of vertical greenery is the 'green facade (steel mesh)'. The material price of steel itself is high, which makes this type the most expensive, costing RM4,872,000. This is followed by the 'living wall' at RM3,129,000 and, lastly, the most economical type of vertical greenery, the 'green facade (HDPE)', costing RM2,520,000 (see the worked unit-cost comparison further below). This may be a reason why builders and clients choose to forgo the option of incorporating vertical greenery in their buildings. Lack of Technical Knowledge. [30] asserted that although innovative green technologies have been introduced, construction project teams are not knowledgeable enough about the technical specifications and operations of these green technologies. As a result of inadequate skills and unfamiliarity, there is a higher risk of errors and delays during the construction period. Possible Increase in Insect and Pollen. With vertical greenery, the plants may become a home to unwelcome creatures, a source of diseases and a bearer of doom [31]. Although vertical greenery is perceived to improve air quality, there are concerns about attracting more insects and about the discomfort of pollen, to which some people are allergic. Damage to Building Façade. At the micro scale, vertical greenery may potentially damage the facade through the growth of the plants. The suckers and tendrils can damage the surface of the facade and leave a pattern of marks when removed [32]. [32] also observed that rainwater goods may become blocked, while extensive growth may force gutters and other fixtures from the wall. At the macro scale, the greenery places extra loading on the building's structural system [29]. Competition for Use of Façade. There is significant competition for the use of building exteriors [33]. This means there is limited usage of the building exterior, as facades may be financially maximized with glazing to provide solar access to building interiors. Alternatively, the building facade may also be used for advertising, as advertisement signage and media facades, for instance, the GreenPix wall in Beijing. Lack of Policy and Standard. A lack of standards for green exteriors is the cause of poor designs, which result in undesirable situations [33]. Hence, developers are not keen to take the considerable risk of implementing vertical greenery when there is a risk of faulty designs. Approaches To Increase The Implementation Of Vertical Greenery Vertical greenery is worth implementing, hence initiatives and approaches should be taken to increase its implementation. Public Awareness. To utilize greenery in buildings, public awareness about the applications and significance of vertical greenery is needed [10]. Public awareness helps ensure that people understand the advantages of implementing vertical greenery and encourages them to implement it. Government Initiative. The government may increase grant allocations for research and development (R&D) on vertical greenery. Since it is a relatively new field in Malaysia, grants can be allocated to promote this emerging form of greenery.
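As flagged under 'Cost of Green Construction' above, a quick worked check normalizes each quoted total to the stated 35 m x 50 m (1,750 m2) facade; the per-square-metre rates below are simply those totals divided by the area and use only the figures given in the text.

# Unit-cost comparison of the three vertical greenery systems quoted above,
# for the stated 35 m x 50 m facade (1,750 m2). Totals are from the text.
AREA_M2 = 35 * 50

total_cost_rm = {
    "Green facade (steel mesh)": 4_872_000,
    "Living wall": 3_129_000,
    "Green facade (HDPE)": 2_520_000,
}

for system, cost in sorted(total_cost_rm.items(), key=lambda kv: kv[1]):
    print(f"{system:28s} RM{cost:>10,}  ->  RM{cost / AREA_M2:,.0f} per m2")

# The HDPE green facade works out to RM1,440/m2, the living wall to
# RM1,788/m2 and the steel-mesh green facade to RM2,784/m2.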
[34] identified that educational programs for developers, contractors and policy makers related to green building standard procedures were a significant strategy for boosting the adoption of green building guidelines. Better Enforcement of Green Building Policies and Standards. Having a set of standards would give builders a clearer direction and ensure that construction to implement greenery stays on track and is built accordingly. Green Rating and Labelling. Rating and labelling a building as a green building would give the developer and owner valuable recognition in the industry, and this recognition would push stakeholders to be part of green building adoption [35]. Methodology The research approach for this study is a qualitative method, conducted to obtain facts and clarifications regarding vertical greenery. Structured interviews with five experts, as per Table 2, were conducted to obtain valuable data. A structured interview is a formal interview for which a detailed questionnaire is prepared; this guides the question order and the specific way the questions are addressed, thereby permitting more direct comparability of responses [36]. Interviews were conducted with interviewees from various positions but from similar backgrounds. This was done in order to obtain information on vertical greenery and understand their respective points of view on implementing vertical greenery in Malaysia. The data were then analyzed through content analysis, which allows qualitative data to be analyzed in a systematic and reliable manner before conclusions are drawn. This research is conducted for the case of Malaysia, as the country has yet to develop vertical greenery implementation for buildings. Findings 1: Significances of Vertical Greenery The first objective of this research is to determine the significance of implementing vertical greenery in Malaysia. The findings on the significance of vertical greenery from the interviewees show that vertical greenery brings a number of benefits, categorized as environmental, economic and social. The most impactful environmental significance is temperature reduction. A significant temperature reduction can be achieved in a hot and humid environment like Malaysia. This is in line with the statement in [37] that the potential thermal benefits of vertical greenery in a tropical climate lead to a reduction in the building facade surface temperature. Consequently, this reduces the cooling load and energy costs. The majority of the interviewees also agree that vertical greenery can improve energy efficiency, improve air quality, reduce the urban heat island effect and regulate the microclimate. Based on the analysis of data obtained from the interviewees, rainwater retention improvement may not apply to vertical greenery. Likewise, vertical greenery may not be beneficial in terms of biodiversity enhancement, because facility management would usually use pesticides to get rid of insects. As for the top economic significance, energy savings rank first. The majority of the interviewees agree that vertical greenery leads to energy savings. This parallels the research in [23], which found that during the summer months in Hong Kong, an average daily electricity saving of 16% was attained by implementing vertical greenery.
The next most important economic significances are acoustic insulation improvement and the fact that vertical greenery is suitable for retrofitting projects. Most retrofitting projects have minimal ground space for landscaping, hence one of the solutions would be to allow plants to grow upwards where space is a constraint. Moreover, the most impactful social significance is that vertical greenery provides a pleasing and better environment. All of the interviewees gave positive reviews of this benefit. By the same token, by connecting the occupant directly to natural elements, occupant satisfaction and productivity can be increased [29]. The least impactful social significance is isolating unsightly features. Some interviewees agree that vertical greenery helps in this sense, whereas others stated that vertical greenery is costly and hence is not used to hide unsightly features; it is used more to enhance the appearance of the building. Findings 2: Barriers to the Implementation of Vertical Greenery The second objective of this research is to identify the barriers to the implementation of vertical greenery in Malaysia. As analysed from the data collected from the interviewees, the cost of green construction is the main reason that building owners and builders are not keen to implement vertical greenery. The high cost of constructing vertical greenery leads them to think that vertical greenery is not a necessity. Aside from the construction cost of vertical greenery, maintenance also incurs cost, so it must be taken into account as well. The reasons behind the high cost of vertical greenery are that there are very few vertical greenery experts and that some people do not see vertical greenery as a necessary cost to be included in their budget. The next most relevant barriers comprise the lack of policy and standards and the lack of technical knowledge. A lack of standards for green exteriors is the cause of poor designs, which result in undesirable situations [33]. Based on the analysis of the data obtained from the interviewees, a possible increase in insects and pollen is not a barrier because insects and pollen can be minimized through careful consideration. Likewise, vertical greenery will not cause damage to the building façade if proper construction has been planned and it is maintained properly. Having said that, these two points contradict the statements from the literature review. Findings 3: Approaches to Increase the Adoption of Vertical Greenery The third and final objective of this research is to provide approaches and recommendations to increase the adoption of vertical greenery in Malaysia. Based on the analysis carried out, the government should intervene and impose policies on vertical greenery. Policies that could be imposed include making vertical greenery a compulsory criterion for GBI buildings, government buildings and new buildings. Moreover, the government can provide financial incentives for building owners who implement vertical greenery. Financial incentives provide beneficial economic support in the industry, especially for individual stakeholders or firms adopting green building technologies, because such technologies usually involve a higher investment cost compared to traditional building technologies [35]. An interesting finding from one of the interviewees is that the ultimate reason vertical greenery is implemented is that it is a premium addition to properties and, as such, increases property value.
Another interesting finding is that vertical greenery is installed in areas with a lot of vehicle movement in order to improve the air quality. This is a great idea that can be applied to minimize the pollutants around these areas, thus providing better air quality. Conclusion More than half of the interviewees gave favourable feedback that Malaysia is heading in the right direction in implementing vertical greenery and that the progress is encouraging. With the positive possibilities that lie within vertical greenery, it is a promising solution for improving environmental, economic and social impacts. Although barriers to implementing vertical greenery exist, the significance of vertical greenery outweighs these barriers. Therefore, Malaysia should increase the implementation of vertical greenery. Further research can be done on the benefits of vertical greenery, namely temperature reduction and the reduction of the urban heat island effect and regulation of the microclimate. This is to measure the actual impact that vertical greenery has on the two benefits stated. Scientific research can be conducted by comparing the temperature difference between a building with vertical greenery and a building without it. This future research would provide a good follow-up to show the actual results. Apart from that, the Green Building Index (GBI) has a set of criteria for its assessment. Vertical greenery falls under the sixth and last category, namely 'Innovation'. One point out of 100 points would be scored for vertical greenery that covers at least 10% of the facade area. According to these data, vertical greenery currently carries very little weight in the GBI assessment. Another recommendation for future research is to identify which species of plants work best to achieve the benefits of vertical greenery. For instance, certain plant species have a higher potential for providing energy benefits. As each plant species has slightly different characteristics, scientific research on selected plant species can be conducted to identify the most suitable plants for the intended purposes of vertical greenery.
Vitamin B12 Deficiency Presenting With Microangiopathic Hemolytic Anemia Vitamin B12 has essential roles in DNA synthesis, red blood cell development, and neurologic functions. Vitamin B12 deficiency is relatively common, particularly in people aged over 60 years. Among hematological disturbances, microangiopathic hemolytic anemia with thrombocytopenia, or so-called pseudo-thrombotic microangiopathy (pseudo-TMA), is a particularly rare but significant clinical complication in patients with vitamin B12 deficiency. We herein describe a case of an elderly patient with pseudo-TMA whose lack of vitamin B12 was misdiagnosed as thrombotic thrombocytopenic purpura (TTP). The patient was admitted as a case of pancytopenia with a hemolytic picture. The initial impression was TTP versus acute promyelocytic leukemia (M3). After examination of laboratory tests and bone marrow examination, we deduced that the patient had a B12 deficiency. The condition of the patient improved with B12 replacement. This report should remind physicians to widen their differential diagnoses when patients present with microangiopathic hemolysis or are not responsive to standard treatments for TTP. Introduction Every vitamin is assigned a specific and unique role in the human body; for instance, "vitamin B12" is one of the most vital vitamins, with a unique structure built around the mineral cobalt, the origin of the name cobalamin. It plays roles at several levels, including DNA and red blood cell (RBC) synthesis, in addition to several neurologic functions [1]. The World Health Organization defines the cut-off for cobalamin deficiency as less than 150 pmol/L [2]. During the recent decade, vitamin B12 deficiency has become relatively common, particularly in the population aged over 60 years [3]. A broad range of vitamin B12 deficiency-related clinical evidence has been reported, describing a ladder of clinical severity ranging from fatigue, anemia, glossitis, and subtle neurologic disturbance in mild-to-moderate cases, to severe hematological abnormalities, severe neurologic manifestations, and/or cardiomyopathy in severe cases [3]. Microangiopathic hemolytic anemia (MAHA) with thrombocytopenia, or so-called pseudo-thrombotic microangiopathy (pseudo-TMA), is a particularly significant hematological complication in patients with cobalamin deficiency [4,5]. A damaged RBC membrane can cause intravascular hemolysis, leading to MAHA, as characterized by the appearance of schistocytes (a key characteristic of MAHA) [6]. Primary thrombotic microangiopathy syndromes involve serious conditions such as thrombotic thrombocytopenic purpura (TTP), hemolytic uremic syndrome, drug-induced TMA, and complement-mediated TMA. These conditions must be managed and controlled immediately while the primary etiology is uncovered, with treatments including plasmapheresis or monoclonal antibodies that bind complement proteins [7,8]. One feature of cobalamin deficiency-associated TMA is that patients do not respond to plasma infusion or exchange; the failure to recognize this diagnosis may prompt unnecessary treatments [2]. TTP is a quickly advancing and life-threatening illness that, in past years, featured a classic pentad of MAHA, thrombocytopenia, fever, renal dysfunction, and neurologic abnormalities [9]. Patients with malignancies, as well as those with autoimmune disorders and those following solid organ and stem cell transplants, may present with thrombocytopenia and MAHA.
In this case, the treatment should be directed at the specific underlying condition [6]. Cobalamin deficiency-induced TMA designates TMA secondary to vitamin B12 deficiency. Usually, cases of pseudo-TMA present with hemolytic anemia, thrombocytopenia, and dysmorphic "fragmented" RBCs. They are often misdiagnosed as other TMA syndromes and receive unnecessary therapy such as plasmapheresis [1,2]. We herein describe a case of an elderly patient with pseudo-TMA whose lack of vitamin B12 was misdiagnosed as TTP. Case Presentation An 84-year-old married male presented to the outpatient clinic for a routine annual check-up. The patient had hypertension and hypothyroidism. He had no change in his bowel habits, no melena or bleeding from any site, no weight loss, no loss of appetite, no fever, no shortness of breath, no headache or other neurologic symptoms, no chest pain, palpitations, or other cardiovascular complaints, and no urinary symptoms. He had previously undergone bowel resection due to intestinal obstruction. He was taking amlodipine 5 mg, thyroxine 75 mcg, and aspirin 81 mg daily. He had no history of smoking, alcohol, or drug use, nor a family history of hematological disease. On physical examination, the patient was fully conscious and oriented; his body temperature was 36.8°C, blood pressure was 131/59 mmHg, pulse was 78 beats per minute, respiratory rate was 18 breaths per minute, and oxygen saturation was 98% on room air. He was slightly pale and had no jaundice or nail changes. His head and neck examination showed no oral ulcers, a normal tongue, no lymphadenopathy, and no peripheral stigmata of chronic liver disease; his jugular venous pressure was not raised, and cardiovascular and respiratory examinations were normal. There were midline abdominal longitudinal and right sub-costal scars; no organomegaly was present. There was a bilateral petechial rash on the anterior aspect of both legs, extending from the knee joint to the ankle joint. His neurologic and musculoskeletal examinations were normal. FIGURE 1: Peripheral blood smear showing schistocytes (red arrows), hypersegmented neutrophils (blue arrows), and macrocytes (black arrows) As per the available laboratory results, the patient was initially diagnosed with MAHA. Although the full pentad of TTP was not matched and the rest of the laboratory workup results were awaited, he was started empirically on fresh frozen plasma (FFP) transfusion every six hours while ADAMTS13 (a disintegrin and metalloproteinase with a thrombospondin type 1 motif, member 13) testing was sent. However, after 72 hours of FFP, his blood count did not improve and hemolytic markers were rising. A pan-CT scan was performed, showing no hidden malignancy that could trigger MAHA. On the third day of admission, the vitamin B12 level result came back at 61 pmol/L (normal: 138-652 pmol/L); his folate level was normal. He had a normal ADAMTS13 level, a normal homocysteine level, and negative anti-intrinsic factor and anti-parietal cell antibodies. Finally, we deduced that the patient had severe vitamin B12 deficiency; therefore, he was started on intramuscular cyanocobalamin 1,000 mcg once daily for seven days, then changed to once weekly for five weeks and then monthly lifelong. The patient showed clinical improvement at day 3 of parenteral replacement of vitamin B12; his white blood cells increased to 3.5 x 10^9/L, hemoglobin to 10 g/dL, and platelets to 100 x 10^9/L. The schistocytes started to disappear gradually.
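The replacement regimen described above (a daily loading phase, a weekly consolidation phase, and then monthly maintenance) can be laid out explicitly. The sketch below merely enumerates injection dates from an arbitrary start date; it is illustrative only, the spacing between phases is an assumption, and it is not dosing guidance.

# Enumerate the intramuscular cyanocobalamin 1,000 mcg schedule described
# above: once daily for 7 days, then once weekly for 5 weeks, then monthly
# (lifelong; truncated here). Start date and phase gaps are illustrative.
from datetime import date, timedelta

def b12_schedule(start, maintenance_doses=3):
    doses = [start + timedelta(days=i) for i in range(7)]             # daily x 7
    weekly_start = doses[-1] + timedelta(days=7)
    doses += [weekly_start + timedelta(weeks=i) for i in range(5)]    # weekly x 5
    monthly_start = doses[-1] + timedelta(days=30)
    doses += [monthly_start + timedelta(days=30 * i)
              for i in range(maintenance_doses)]                      # monthly
    return doses

if __name__ == "__main__":
    for d in b12_schedule(date(2021, 1, 1)):
        print(d.isoformat())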
The complete blood count and vitamin B12 levels normalized after one month of treatment (Table 1). Discussion This is a detailed case of pseudo-TMA due to extreme vitamin B12 deficiency following bowel resection (terminal ileum). Previous evidence showed that preservation of the terminal ileum may protect vitamin B12 retention capacity [10]. Moreover, vitamin B12 deficiency-induced TMA poses a real challenge for professionals managing cases of thrombocytopenia, hemolytic anemia, and schistocytosis. Although the differential diagnosis should aim at ruling out the most critical conditions first, adding the estimation of vitamin B12 and methylmalonic acid levels to the current diagnostic workup for the assessment of TTP can guide clinicians to the appropriate diagnosis and treatment [11]. As mentioned, vitamin B12 plays a significant role in RBC synthesis; therefore, when this compound reaches low levels (cobalamin deficiency), the rigidity of the RBC membrane increases and erythrocyte deformability decreases, causing the lysis of RBCs [12]. Furthermore, cobalamin deficiency not only affects RBCs but also halts the maturation of all cell lines in the marrow. It can manifest with hemolytic anemia secondary to abnormal erythropoiesis and indirect hyperbilirubinemia, yet it does not often manifest with MAHA. The past literature contains only very few cases of vitamin B12 deficiency-induced MAHA, termed "pseudo-thrombotic microangiopathy." Others proposed that severe hyperhomocysteinemia combined with vitamin B12 deficiency leads to a striking peripheral blood smear and clinical findings similar to TTP [13]. TTP is the most important differential diagnosis of pseudo-TMA and poses a hurdle for professionals. Although TTP can be deadly without a proper therapy plan, starting plasmapheresis treatment for TTP together with vitamin B12 replacement for cobalamin deficiency might be a plausible choice for suspected pseudo-TMA cases [14]. One of the features that assists in the primary diagnosis is that pseudo-TMA does not respond to FFP, which was the case in our report [1]. Vitamin B12 deficiency-related hemolytic anemia may cause hyperbilirubinemia (due to the destruction of RBCs that have not achieved maturation in the marrow), or may also cause extravascular hemolysis, which should not result in microangiopathy [13]. Moreover, the patient in this report presented with a frequent appearance of schistocytes on the peripheral blood smear, in addition to elevated bilirubin and LDH levels. Conclusions TMA due to vitamin B12 deficiency is a rare condition, yet it must be considered in all patients with clinical and laboratory manifestations of TTP. Our case highlights the importance of including vitamin B12 deficiency in the differential diagnosis of cases presenting with microangiopathic hemolysis or cases that are not responsive to standard treatments for TTP. Additional Information Disclosures Human subjects: Consent was obtained by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work.
Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Roles of N6-methyladenosine (m6A) modifications in gynecologic cancers: mechanisms and therapeutic targeting Uterine and ovarian cancers are the most common gynecologic cancers. N6−methyladenosine (m6A), an important internal RNA modification in higher eukaryotes, has recently become a hot topic in epigenetic studies. Numerous studies have revealed that the m6A-related regulatory factors regulate the occurrence and metastasis of tumors and drug resistance through various mechanisms. The m6A-related regulatory factors can also be used as therapeutic targets and biomarkers for the early diagnosis of cancers, including gynecologic cancers. This review discusses the role of m6A in gynecologic cancers and summarizes the recent advancements in m6A modification in gynecologic cancers to improve the understanding of the occurrence, diagnosis, treatment, and prognosis of gynecologic cancers. Background The main types of gynecologic cancers, which seriously damage the female reproductive organs, include vulvar cancer, vaginal cancer, cervical cancer (CC), endometrial cancer (EC), uterine cancer, and ovarian cancer (OC). CC and OC are the most frequent types of gynecologic cancer, accounting for 6.5% and 3.4%, respectively, of all the new cancers in women [1]. A population-based study conducted on the epidemiological trends of gynecologic cancer from 1990 to 2019 indicated that the incidence and mortality of gynecologic cancers might have geographical variations and changes along with sociodemographic index value [2]. Most gynecologic cancer patients have no distinct symptoms or physical signs in the early stages. In addition, the specific biomarkers for the early diagnosis of gynecologic cancer are also lacking. Moreover, most of the cases are in advanced stages at the time of diagnosis. Therefore, understanding the pathogenesis of gynecologic cancer is particularly important. This might identify specific markers for early diagnosis and therapeutic targets for the related therapeutic drugs, thereby ultimately improving the prognosis and quality of patients [3][4][5]. N 6 -methyladenosine (m 6 A) was first discovered in 1974 as an internal chemical modification, which was widely observed in the messenger RNAs (mRNAs) and noncoding RNAs (ncRNAs) [6]. The m 6 A plays important role in numerous aspects of RNA metabolism, such as pre-mRNA splicing, processing of 3′-untranslated region (UTR), export, translation, and degradation of mRNA, and processing of non-coding RNA [7][8][9][10]. Recent studies have shown that the m 6 A regulatory proteins act as writers, erasers, and readers, thereby modulating the dynamic deposition of mRNAs and other nuclear RNAs [11,12]. These findings strongly suggest the dynamic regulatory role of m 6 A modification is similar to the other well-known chromosomal reversible epigenetic modifications. This reversible RNA methylation provides a new dimension in the post-transcriptional regulation of gene expression [11]. In addition to mRNAs, m 6 A is also reported in a variety of ncRNAs, such as microRNAs (miRNAs), long noncoding RNAs (lncRNAs), circular RNAs (circRNAs), and ribosomal RNAs (rRNAs) and has been indicated to be crucial for their metabolism and function [13][14][15]. In addition, abnormal m 6 A modifications in ncRNA by some m 6 A regulatory proteins participate in the proliferation, invasion, and drug resistance of cancer cells, thereby indicating their potential association with cancer [16,17]. 
Therefore, this new field in cancer pathogenesis might provide new opportunities for the diagnosis and treatment of cancer. This review summarizes the recent studies on m 6 A modifications in OC, CC, and EC, particularly focusing on the regulatory mechanism of m 6 A regulatory proteins in promoting the proliferation, invasion, and metastasis of these three gynecologic cancers. Finally, the current knowledge and prospects of m 6 A modifications in the tumor immune microenvironment, diagnosis, and prognosis of gynecologic cancer are also discussed. m 6 A modification The m 6 A methylase complex consists of at least five methyltransferases (writers) with methyltransferase-like 3 (METTL3) as its catalytic core. METTL14 is present as the structural support for METTL3; this core complex is stabilized by Wilms' tumor 1-associated protein (WTAP). The RNA-binding motif protein 15 (RBM15) helps the recruitment of the complex to its target site. Another component of this complex is vir-like m 6 A methyltransferase-associated protein (VIRMA), which is also known as KIAA1429; its molecular function is uncertain [18]. On the other hand, the demethylases (erasers), fat mass and obesity-associated protein (FTO) and alkB homolog 5 (ALKBH5), remove the m 6 A modifications, thereby reducing the modification rate [19,20]. The functional interaction between the methyltransferases and demethylases of m 6 A might contribute to the dynamic regulation of the m 6 A modifications. Recent studies have identified the m 6 A binding proteins (readers) for mRNA, containing YT521-B homology (YTH) domains, such as YTHDF1-3, YTHDC1, and YTHDC2, which have shown a greater affinity for the methylated mRNAs as compared to the non-methylated mRNA [21][22][23][24][25]. If induced by the different cellular environments, the YTH proteins, belonging to the same YTH domain family, can bind to the different subsets of the m 6 A site and regulate different genes [21]. For instance, the YTHDF2 protein colocalizes with mRNA decay factors and accelerates the degradation of the m 6 A-modified mRNAs [26,27]. On the other hand, YTHDF1 can bind to the m 6 A site near the stop codon, thereby subsequently activating the translation by interacting with eukaryotic translation initiation factor 3 (eIF3) [28]. Another YTH protein, YTHDF3, promotes mRNA decay when working with YTHDF2, while enhancing the translation of m 6 A-modified RNA when working with YTHDF1 [10,29]. m 6 A and OC Numerous studies elucidated that the m 6 A regulators could participate in many functions in OC, such as dysregulation of signaling pathways and anti-tumor drug resistance. The role and mechanism of m 6 A regulators in OC are summarized in Fig. 1 and Table 1. Role of m 6 A regulators in the progression of OC m 6 A writer and OC Studies on OC have mainly focused on the regulatory factor METTL3 [30][31][32][33]. Numerous studies have shown that METTL3 plays an important role in the tumorigenesis of lung and hepatocellular cancer (HCC) [14,34,35]. However, METTL3 has also been identified as a tumor suppressor in renal cell carcinoma (RCC) and inhibits the proliferation, migration, and progression of cancer [36]. Ma et al. first reported that METTL3 could promote m 6 A methylation in the OC without interacting with METTL14 and WTAP [31]. A previous study also showed that in human cancer, METTL3 could directly regulate the specific mRNA translation by recruiting eIF3 without coordinating with METTL14 and WTAP [34].
Studies suggested that METTL3 had a novel m 6 A regulatory mechanism, which might play an important function in the occurrence and development of OC. Currently, numerous studies indicate that METTL3 can promote the occurrence of OC by affecting the maturation and stability of multiple RNAs. Hua et al. first confirmed the role of METTL3 in tumor progression in the OC cells [30]. In-vivo and in-vitro studies reported that METTL3 could promote the epithelial-to-mesenchymal transition (EMT) by stimulating the mRNA translation of AXL receptor tyrosine kinase (AXL), which might play an important role in the occurrence and/or invasion of OC [30]. Further studies showed that there were no significant changes in the expression levels of AXL in YTHDF1-silenced cells [30]. A study suggested that METTL3 might act as an m 6 A reader, thereby directly promoting the translation of mRNA; these results were consistent with a previous study, which reported that METTL3 could promote the translation of mRNA in the human lung cancer cells, containing m 6 A regulatory proteins in their cytoplasm [30,34]. However, the exact mechanism of the role of m 6 A requires further investigation. Liang et al. reported that knocking down the METTL3 gene decreased the expression levels of phosphorylated AKT and its downstream p70S6K and cyclin D1, indicating a decrease in the activation of the AKT pathway in the absence of METTL3 [32]. Bi et al. confirmed that METTL3 could regulate the m 6 A level of miR-126-5p, promoting its maturation, which activated the phosphoinositide 3-kinase (PI3K)/Protein kinase B (AKT)/mechanistic target of rapamycin (mTOR) signaling pathway [33]. In addition, in a xenograft experiment, the knockdown of phosphatase and tensin homolog (PTEN) gene could reverse the METTL3 knockdown-induced decrease in the expression levels of PI3K, p-AKT, and p-4EBP1. The results further confirmed that the knockdown of the METTL3 gene inhibited the PI3K/AKT/mTOR signaling pathway by inhibiting the expression of miR-126-5p and upregulating that of PTEN in-vivo. These results showed that silencing the METTL3 gene could decrease tumor growth and PTEN gene silencing [33]. METTL3 could improve the m 6 A modification levels in RHPN1 antisense RNA 1 (RHPN1-AS1) and increase its RNA stability, which might partially contribute to the up-regulation of RHPN1-AS1 in OC [37]. RHPN1-AS1 absorbs miR-596, which increases the expression of LETM1 and activates the FAK/PI3K/AKT signaling pathway, thereby contributing to the occurrence and development of cancer [37]. These results were similar to Luo's study, which indicated that YTHDF1 could promote the HCC.

Fig. 1 In OC, m 6 A regulatory proteins contribute to tumorigenesis and metastasis by interacting with various RNAs. METTL3 and METTL14 stimulate the progression of OC by promoting the expression levels of FZD10, CSF-1, EIF3C, AXL, RHPN-AS1, miR-125-5p, and TROAP. HOXA10 forms a loop with ALKBH5 and jointly activates the JAK2/STAT3 signaling pathway by mediating JAK2 m 6 A methylation and promoting the OC resistance to cisplatin. The activated NF-κB up-regulates ALKBH5 expression and increases m 6 A level and NANOG expression, contributing to ovarian carcinogenesis. FTO and ALKBH5 stimulate/inhibit the progression of OC by affecting the expression levels of ATG5, ATG7, PDE1C, PDE4B, and FZD10. YTHDF1, YTHDF2 and IGF2BP1 stimulate/inhibit the progression of OC by affecting the expression of BMF, TRIM29, EIF3C, SRF, and UBA6.
In addition, FBW7 and miR-145 inhibit the expression of YTHDF, leading to OC suppression progression by activating the PI3K/AKT/mTOR signaling pathway and inducing EMT [38]. In most tumors, METTL14 downregulates the m 6 A levels in cancer cells by acting as m 6 A methyltransferase to inhibit the occurrence and development of tumors, thereby playing an anti-tumor role. In breast cancer (BC), the low METTL14 level was correlated with a poor prognosis, and its abnormal expression could promote the invasion of BC by affecting the tumor progression-related pathways and mediating the immunosuppression [39]. The studies on OC have shown similar results. Liang et al. reported a considerable copy number variation (CNV) in the METTL14 gene in the OC tissues and a decrease in its expression level as well as low m 6 A methylation level [40]. Further investigation showed that the trophininassociated protein (TROAP) was a downstream target of METTL14 [40]. METTL14 reduced the mRNA stability of TROAP, inhibiting the proliferation of OC cells at the G1 phase [40]. In addition, WTAP, an m 6 A writer, has been reported as a classic biomarker for the progression and metastasis of OC, thereby contributing to the diagnosis and prognosis of OC [41,42]. Fu et al. reported that the overexpression of WTAP in the OC cells resulted in a high malignancy and low survival rate and promoted aberrant methylation of mRNA, thereby regulating the growth and migration of tumor cells [43]. Wang et al. reported a positive correlation of WTAP expression with two genes, including a family with sequence similarity 76 member A (FAM76A) and HBS1-like translational GTPase (HBS1L) [44]. However, the mechanism of the WTAP-HBS1L/ FAM76A axis, playing a functional role in the progression of OC, remains unclear. m 6 A reader and OC In OC, numerous studies have elucidated the different functional roles of YTHDF1, YTHDF2, and Insulinlike growth factor 2 mRNA-binding proteins (IGF2BPs) [45][46][47][48][49]. YTHDF1, a YTH domain family member, can identify the m 6 A post-transcriptional modification by the conserved aromatic cages in its YTH domain [50]. All the YTH domain-containing proteins can bind to the m 6 A sites in mRNA; however, they recognize different target mRNAs and play different functional roles. Studies have shown an increase in the recruitment of tripartite motif-containing 29 (TRIM29) mRNA by YTHDF1 in the cisplatin-resistant OC cells, thereby promoting the translation of TRIM29 transcripts [49]. Knocking down the YTHDF1gene inhibited the cancer stem cell (CSC)-like characteristics in the cisplatin-resistant OC cells, which were rescued by the overexpression of TRIM29 gene, thereby suggesting its oncogenic role in an m 6 A-YTHDF1-dependent manner [49]. Based on the multi-group analysis of OC, researchers have identified a new mechanism, involving EIF3C, a subunit of the translation initiation factor [47]. YTHDF1 could bind to the m 6 A-modified mRNA of EIF3C, stimulate the EIF3C translation, and promote its overall translational output, ultimately leading to the progression and metastasis of OC [47]. Recent studies have reported that the IGF2BPs in m 6 A "reader" also plays a similar role. Müller et al. found that IGF2BP1 could downregulate the mRNA expression of serum response factor (SRF) mediated by the miRNA production, thereby promoting the expression of SRF in an m 6 A-dependent manner [46]. Wang et al. 
also identified IGF2BP1 as an m 6 A reader of ubiquitin-like modifier activating enzyme 6 antisense RNA 1 (UBA6-AS1) -RBM15, which mediated the m 6 A mRNA level of UBA6, thereby enhancing its stability; this ultimately inhibited the proliferation, migration, and invasion of OC cells via UBA6 [51]. These studies suggested that YTHDF1 and IGF2BPs were involved in the OC progression by increasing the translation of target mRNA. Recent studies demonstrated a novel tumor-promoting effect of YTHDF2 by analyzing its upstream signal in the OC and showed that the chemical modification of m 6 A as a signal center could interact with important metabolic pathways [45,48]. Li et al. reported a close correlation between an increase in the YTHDF2 protein levels and the OC tissues in a clinical setting [48]. In addition, the investigation showed a key crosstalk between the miR-145 and YTHDF2 via a double-negative feedback loop [48]. Researchers have also reported similar double-negative feedback circuits in the HCC. The miR-145, which was down-regulated in the HCC patients, could directly target the 3′-UTR of YTHDF2 mRNA, thereby inhibiting its expression [52]. Additionally, Xu et al. demonstrated the E3-ubiquitin ligase F-box and WD repeat domain-containing 7 (FBW7), a tumor suppressor, could degrade YTHDF2 in the OC [45]. This study depicted the mechanism of YTHDF2 and FBW7 in OC; FBW7 could decrease the YTHDF2-mediated m 6 A-dependent mRNA decay for stabilizing the pro-apoptotic BMF mRNA [45]. In particular, Xu et al. demonstrated the role of the FBW7-YTHDF2-BMF axis in the occurrence and development of OC and described how the m 6 A-related regulatory factors in OC were activated, which provided new insights into the mechanism of OC. m 6 A eraser and OC In OC, studies on m 6 A erasers have focused on FTO and ALKBH5 [53][54][55][56][57]. The FTO gene is associated with obesity in children and adults [58] and regulates energy homeostasis by controlling food intake and fine-tuning nutritional sensing at the cellular level [59]. FTO is the first recognized nucleic acid demethylase, which physiologically targets the m 6 A residues in mRNA [19,60]. Studies have reported its high expression in acute myeloid leukemia (AML), promoting cellular transformation and proliferation by the post-transcriptional regulation of ASB2, RARA, MYC, and CEBPA [61,62]. CSCs have the abilities of self-renewal, spherical growth, differentiation and tumor formation, which are correlated with the initiation, metastasis, and recurrence of high-grade serous OC (HGSOC) after chemotherapy [63,64]. Interestingly, Huang et al. recently reported a tumor-inhibitory effect of FTO in HGSOC, which was contradictory to the tumor-promoting role of the FTO previously reported in other types of cancer. Huang et al. demonstrated that FTO targeted two phosphodiesterase genes (PDE4B and PDE1C), thereby regulating the cAMP signal transduction and maintaining the stemness of ovarian CSCs [53]. Their study demonstrated the inhibitory effects of FTO on solid tumors for the first time and identified the cAMP signal transduction as a key pathway for the self-renewal of CSCs regulated by m 6 A mRNA modification [53]. In addition, another eraser ALKHB5 was also demonstrated to upregulate the expression level of NANOG [54]. A study showed that ALKBH5 could mediate the posttranscriptional NANOG expression and enrich CSCs in BC [65]. 
ALKBH5 showed significantly higher expression levels in OC tissues as compared to the normal ovarian tissues; however, its expression in the OC cell lines was lower than that in the normal ovarian cells in-vitro [54]. Jiang et al. reported similar expression patterns for the Toll-like receptor 4 (TLR4) in the tumor microenvironment (TME) [54]. Further investigation verified that the highly expressed TLR4 could activate the nuclear factor kappa B (NF-κB) pathway, upregulate the ALKBH5 expression, and increase the m 6 A level and NANOG expression, thereby contributing to the tumorigenesis of OC [54]. Their study revealed an important functional role of mRNA m 6 A modification in the self-renewal of OC cells, particularly in the TME. Autophagy is related to the status of physiological processes and is involved in many pathological conditions, including tumors and inflammation. Currently, numerous studies have confirmed the effects of cir-cRNA on the malignant behavior of tumor cells by regulating autophagy. For example, CircDnmt1 stimulated autophagy in the BC cells to promote their proliferation [66]. CircRNA 103948 acted as competing endogenous RNA (ceRNA) and inhibited autophagy in CRC [67]. CircRNA ST3GAL6 could inhibit the malignant behavior of gastric cancer by regulating autophagy through the FOXP2/MET/mTOR axis [68]. Interestingly, Zhang et al. reported that the m 6 A modification was involved in autophagy [57]. They demonstrated that CircRAB11FIP1 could bind to the mRNA of FTO, promoting its expression, and thereby regulating the m 6 A methylation level of ATG5 and ATG7 mRNA through FTO, which ultimately promoted the autophagy and malignant behavior of OC [57]. Tumors show resistance to anti-tumor drugs by numerous mechanisms, including mutation of tumor suppressor genes, activation of oncogenes, and dysregulation of the signaling pathways [69][70][71]. In OC, the FTO and ALKBH5 are involved in drug resistance by activating or upregulating a specific signaling pathway [55,56]. Takeshi et al. demonstrated a significant increase in m 6 A modification in mRNA frizzled class receptor 10 (FZD10), thereby increasing its stability and upregulating the Wnt/β-catenin pathway [56]. Further investigation showed that the downregulation of m 6 A demethylases FTO and ALKBH5 enhanced the m 6 A modification of FZD10 mRNA and reduced the sensitivity of PARP inhibitor (PARPi), which increased the activity of homologous recombination [56]. Another study reported that ALKBH5-Homeobox A10 (HOXA10) loop could mediate the demethylation of Janus kinase 2 (JAK2) m 6 A and cisplatin resistance in the OC [55]. Noteworthily, HOXA10 could form a loop with ALKBH5 and act as an upstream transcription factor of ALKBH5. Its overexpression could also enhance chemoresistance in the OC cells [55]. The activation of the JAK2/ STAT3 signaling pathway can result in the overexpression of the ALKBH5-HOXA10 loop [55]. These studies have explained the mechanism of m 6 A modification in the anti-tumor drug resistance of OC. However, more detailed studies are needed in the future to study the mechanism in detail. m 6 A and immunoregulation in OC TME includes tumors, surrounding matrix, and immune components, such as tumor-associated macrophages (TAMs), CD8 + T lymphocytes, and myeloid-derived suppressor cells (MDSCs) [72]. Recent studies have demonstrated numerous important roles of TME components in various biological behaviors of cancer, such as invasion, metastasis, and immune evasion from immune surveillance [73,74]. 
Currently, based on the level of tumor-infiltrating immune cells (TICs) in TME, tumors have been classified into two groups: hot tumors, containing high-density CD8+ T lymphocytes, and cold tumors, lacking T lymphocytes [75][76][77]. Despite the relatively high tumor mutation burden in OC, it is a cold tumor, generally lacking the infiltration of cytotoxic T lymphocytes, and thereby lacking the ability to recognize all the tumor antigens [78]. Numerous studies have demonstrated that the degree of immune cell infiltration and expression levels of various immune gene markers are correlated with the specific m 6 A regulators [54,[79][80][81]. Wang et al. reported RBM15B, Zinc finger CCCH domain-containing protein 13 (ZC3H13), YTHDF1, and IGF2BP1 as important immune cell infiltration-regulated m 6 A regulators in the OC [79]. Yan et al. also reported cell division cycle 42 effector protein 3 (CDC42EP3) as a possible target gene of m 6 A, which was downregulated by m 6 A regulators in the OC cells and tissues [80]. Interestingly, the expression of CDC42EP3 was reported to be correlated with the various TICs, including natural killer cells, T central memory cells, and T gamma delta cells [80]. Jiang et al. further elucidated the mechanism of m 6 A regulatory factors involved in immune responses [54]. The m 6 A eraser ALKBH5 could promote the development of OC by stimulating the NF-κB pathway in the TME [54]. Prognostic effect of m6A in OC Numerous OC patients are diagnosed in the advanced stages due to the lack of specific biomarkers for early clinical screening as well as due to their relatively nonspecific disease symptoms. Currently, more than 70% of OC patients show < 30% overall survival after five years of cytoreductive surgery and adjuvant chemotherapies [82]. In order to better control tumorigenesis and monitor the [88]. In addition, researchers have established genetic models to predict the prognosis of OC patients. Fan et al. established a genetic model, consisting of three m 6 A regulatory genes, which could be used to predict the progression of OC patients [86]. Li et al. also established a risk-scoring model, consisting of three m 6 A RNA methylation regulators (VIRMA, IGF2BP1, and HNRNPA2B1) and a related miRNA-m 6 A regulator-m 6 A target gene network [87]. Similarly, Jiao et al. also established a genetic model, containing 12 genes (WTAP, LGR6, ZC2HC1A, SLC4A8, AP2A1, NRAS, CUX1, HDAC1, CD79A, ACE2, FLG2, and LRFN1) [89]. Moreover, several studies also have verified the reliability of these results. For example, Yu et al. reported the correlation of high WTAP expression with poor prognosis in HGSOC [90]. However, different m 6 A regulatory genes have shown different expression patterns in the different tumor types or independent databases, further demonstrating the complexity of the m 6 A post-transcriptional regulation mechanism and suggesting the tumor-specificity of m 6 A regulatory factors. LncRNAs are a group of 200-nucleotides long RNA, which regulate gene expression and various physiological and pathological processes [91,92]. Using the OC-related dataset from The Cancer Genome Atlas (TCGA), Nie et al. confirmed the prognostic potential of 21 m 6 A modifications in OC and identified two m 6 A subtypes using the m 6 A-related gene expression profiles [93]. Then, the authors established an OC risk model based on the differential expression pattern of lncRNAs between the m 6 A subtypes and lncRNAs co-expressed with the m 6 A-related genes [93]. 
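The prognostic risk models described above (for example, the three-regulator signature of Li et al. built on VIRMA, IGF2BP1, and HNRNPA2B1) are generally applied by computing a weighted sum of the signature genes' expression for each patient and then splitting the cohort at a cutoff such as the median score. The following Python sketch only illustrates that scoring and stratification step under assumed inputs: the gene names are taken from the text, but the expression values, coefficients, and cutoff are hypothetical and are not the published models.

```python
import numpy as np
import pandas as pd

# Hypothetical expression matrix (rows = patients, columns = signature genes).
# Gene names follow the signature mentioned in the text; the values are illustrative only.
rng = np.random.default_rng(0)
expr = pd.DataFrame(
    rng.normal(size=(100, 3)),
    columns=["VIRMA", "IGF2BP1", "HNRNPA2B1"],
)

# Placeholder per-gene weights (e.g., log hazard ratios); not the published coefficients.
coefs = pd.Series({"VIRMA": 0.42, "IGF2BP1": 0.31, "HNRNPA2B1": -0.18})

# Risk score = weighted sum of signature-gene expression for each patient.
risk_score = expr.mul(coefs, axis=1).sum(axis=1)

# Stratify patients into high- and low-risk groups at the median score,
# as is commonly done before comparing outcomes between the two groups.
risk_group = np.where(risk_score >= risk_score.median(), "high-risk", "low-risk")
print(pd.Series(risk_group).value_counts())
```

In the cited studies, such weights are typically estimated from survival analysis on cohorts such as TCGA, and the resulting high- and low-risk groups are then compared for differences in outcome; the sketch above covers only the scoring arithmetic, not model fitting or validation.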
The risk model not only simply evaluated the predictability of tumor prognosis using the risk score but also assessed the effectiveness of immunotherapy and developed novel and more accurate immunotherapies. m 6 A and CC Numerous studies have elucidated the participation of m 6 A regulators in many functions in the CC, such as aerobic glycolysis and EMT. The role and mechanism of m 6 A regulators in CC are summarized in Fig. 2 and Table 2. Role of m 6 A regulators in the progression of CC m 6 A writers and CC miRNA is a small non-coding RNA (18-24 nucleotides long), which negatively regulates the gene expression by binding to the 3′-UTR of its target mRNA at the posttranscriptional stage [94]. The correlations of miRNA activity with numerous physiological and pathophysiological processes, including carcinogenesis, have been reported [95,96]. It was reported that the miRNAs might regulate over 60% of the human mRNAs and participate in almost all the biological processes in mammalian systems [97]. The functional characteristics of the miRNA target network and identification of miRNA dysregulation emphasized their importance in malignant tumors [98]. In CC, studies have shown that the m 6 A methylation modification might regulate the maturation of miRNA, thereby affecting the progression of tumors. Huang et al. reported that METTL3 could modulate the maturation of miR-193b by increasing the methylation level of pri-miR-193b m 6 A [99]. In addition, miR-193b might play a tumor suppressor role in the CC by inhibiting the cell cycle at the G1 phase as well as inhibiting the proliferation of cells [99]. Recently, Su et al. reported that METTL3 could stimulate the proliferation of CC by regulating mRNA stability of apoptosis chromatin condensation inducer 1 (ACIN1) mRNA [100]. Further investigations showed that the overexpression of IGF2BP3 could reverse the mRNA and protein levels of ACIN1 in the METTL3downregulated CC cells, suggesting that the METTL3 could affect the mRNA stability of ACIN1 through IGF2BP3 [100]. In the previous section, it was elucidated that METTL3 could promote EMT in the OC by stimulating the mRNA translation of AXL [30]. Li et al. reported that METTL3 could negatively regulate the expression and membrane localization of β-catenin (encoded by CTNNB1) in the CC, thereby forming a complex with E-Cad to participate in the EMT and promoting the development of CC [101]. Further studies indicated that METTL3 could negatively regulate mRNA transcription, decay, and translation of CTNNB1 through different m 6 A regulators and mechanisms [101]. METTL3 indirectly inhibited the CTNNB1 transcription by upregulating the expression of its inhibitor E2F1 and recruiting YTHDF2 to the 5'-UTR m 6 A of CTNNB1 [101]. Additionally, METTL3 regulated both the classical and non-classical mRNA translation of CTNNB1 through YTHDF1 [101]. Piwi-interacting RNA (piRNA) was first discovered in 2006 as a small non-coding RNA, consisting of about 30 nucleotides, which could specifically bind to the PIWI family proteins (highly conserved RNA binding proteins) in the testicular germ cells [102,103]. A recent study demonstrated that it was highly expressed both in the normal and cancer cells [104]. Their presence in the cancer cells might affect cancer growth by directly binding to the PIWI proteins [105]. Xie et al. 
reported the piRNA-14633-METTL14-CYP1B1 signaling cascade and showed the interaction of piRNA-14633 with the 3'-UTR of METTL14, which increased its mRNA stability, thereby enhancing the METTL14 methylase activity and promoting tumorigenesis by enhancing the CYP1B1 expression [106]. In OC, circRNA might act as a pro-oncogene and participate in the occurrence and development of tumors by promoting autophagy [57]. In CC, Chen et al. reported that the m 6 A regulator METTL3 could enhance the stability of circ0000069 transcripts, thereby showing the carcinogenesis effects of circRNA [107]. This resulted in the production of low levels of miR-4426 in the CC cells due to the enhanced circ000006 9 [107]. Therefore, through m 6 A modification, miR-4426 was indirectly inhibited, which promoted CC development [107]. The methylation modifications of lncRNA and m 6 A play a key role in human cancer. The interaction mechanism between the lncRNA and m 6 A methylation modifications has been elucidated in other tumors, such as non-small cell lung cancer and CRC; however, little is known about their interaction in CC [108,109]. Therefore, researchers have attempted to explore the internal mechanism of methylation modification of lncRNA and m 6 A, regulating the tumorigenesis of CC. Ji et al. reported a significantly upregulated expression of METTL3-induced FOXD2-AS1 in the CC cells and tissues, which could stabilize its mRNA [110]. FOXD2-AS1 could bind to the promoter region of p21 and recruit lysine-specific demethylase 1(LSD1) to silence its transcription, thereby accelerating the progression of CC [110]. In conclusion, they suggested that the METTL3/ FOXD2-AS/LSD1/P21 axis could accelerate the CC progression in an m 6 A-dependent manner [110]. Yang et al. reported the role of m 6 A modification between miRNA and lncRNA and showed that the m 6 A modification significantly enriched the lncRNA ZFAS1 [111]. Further investigations showed the correlation of METTL3 with ZFAS1, interacting with the m 6 A level of ZFAS1 without affecting its expression level [111]. The knockdown of METTL3 abolished the miR-647-mediated suppression of ZFAS1, indicating that ZFAS1 could affect the miR-647 in an m 6 A-dependent manner [111]. These studies revealed that the lncRNA could promote the occurrence of CC in an m 6 A-dependent manner, and the lncRNA blocked the miRNA through the m 6 A modificationrelated proteins, thereby establishing a link between them and suggesting the potential mechanism of m 6 A between miRNA and lncRNA in CC. Warburg effect, also known as aerobic glycolysis, is a typical metabolic marker of cancer metabolism [112][113][114]. Despite the abundance of intracellular oxygen, tumor cells persist to generate energy through aerobic glycolysis rather than oxidative phosphorylation through mitochondria [115]. This characteristic energy supplementation is known as the Warburg effect. Inhibiting the Warburg effect is considered an effective treatment for CC. The effects of m 6 A methylation modification on aerobic glycolysis in cancer cells have rarely been studied. Recent studies have investigated the molecular mechanism of m 6 A methylation modification in the energy metabolism in tumor cells. Li et al. reported the upregulated expression of pyruvate dehydrogenase kinase 4 (PDK4) induced by m 6 A, which was reversed by the METTL3 deficiency-induced inhibition of glycolysis and ATP production in the tumor cells [116]. 
A previous study reported that METTL3 was highly expressed in the metastatic tissues of CRC and inhibited the mRNA degradation of SRY-box transcription factor 2 (SOX2) by specifically interacting with m 6 A reader IGF2BP2 [117]. Similarly, Li et al. found that m 6 A modification in the 5′-UTR region rather than 3′-UTR of PDK4 mRNA could positively regulate its translation and stability by binding to the YTHDF1/eEF-2 complex and IGF2BP3 [116]. In addition, the TATA-binding protein (TBP) could enhance the expression of METTL3 in the CC cells [116]. The in-vivo and clinical analyses demonstrated that the m 6 A could regulate the glycolysis of cancer cells by regulation of PDK4 [116]. In another study, Wang et al. elucidated that METTL3 could promote the occurrence of CC invivo and in-vitro by promoting cellular glycolysis [118]. Meanwhile, the authors also showed that the m 6 A readers could stabilize mRNA, playing a carcinogenic role in the development of CC [118]. YTHDF1 could recognize HK2 m 6 A and enhance its stability, thereby regulating the HK2 mRNA [118]. METTL3 stabilized the HK2 mRNA by recruiting YTHDF1 and exerted an oncogenic effect via the YTHDF1/HK2 axis by accelerating glycolysis [118]. m 6 A reader and CC In CC, studies on the m 6 A readers have mainly focused on the YTHDF1, YTHDF2, and IGFBPs. Previous studies showed that m 6 A readers could promote the progression of tumors by regulating mRNA translation [119]. Similarly, Wang et al. reported an elevated expression level of YTHDF1, which was closely related to the poor prognosis of CC patients [120]. RANBP2 was identified as a crucial target of YTHDF1 in the CC cells; YTHDF1 affected the RANBP2 translation in an m 6 A-dependent manner without affecting its transcription level [120]. Consequently, the overexpression of RANBP2 enhanced the progression of CC [120]. The lncRNAs have several functional roles, such as the organization of nuclear architecture, regulation of translation, mRNA stability, translation, and post-translational modifications [121,122]. Currently, lncRNAs are reported to be deregulated in non-small cell lung cancer and CRC and play a tumor suppressor or oncogene function by inhibiting the miRNAs [123,124]. Studies have also demonstrated the interaction of lncRNA with the m 6 A-related factors, affecting the occurrence and development of tumors. In CC, m 6 A readers promoted the growth and proliferation of the CC cells by affecting the fate of lncRNA [125,126]. Zhang et al. reported that KCNMB2-AS1 was predominantly located in the cytoplasm and served as a ceRNA to inhibit the binding of miR-130b-5p and miR-4294 to the IGF2BP3 mRNA, resulting in its upregulation; therefore, it is a well-documented oncogene in CC [125]. Moreover, IGF2BP3 could bind to KCNMB2-AS1 through its three m 6 A modification sites acting as an m 6 A reader and stabilizing KCNMB2-AS1 [125]. The KCNMB2-AS1 and IGF2BP3 formed a positive regulatory circuit, which enhanced the tumorigenic effects of KCNMB2-AS1 in the CC [125]. Similarly, another study showed that the degradation of m 6 A-mediated GAS5 RNA relied on the YTHDF2 [126]. The circRAB11FIP1 might promote tumor autophagy in OC by regulating the m 6 A methylation of the autophagy-related proteins via FTO [57]. Similarly, Ji et al. showed that IGF2BP2 could interact with circARH-GAP12, enhancing the mRNA stability of FOXM1 and forming a circARHGAP12/IGF2BP2/FOXM1 complex to promote the proliferation and migration of CC cells [127]. 
Human Papillomavirus (HPV) exists in several types, each of which, contains a circular double-stranded DNA genome. HPV primarily infects the basal keratinocytes of the squamous epithelium, which is poorly differentiated. The most common genotypes of HPV are HPV16/18, which are the high-risk types [128]. Generally, CC results from the persistent infection of high-risk HPV [129]. The HPV infection alters the metabolism of tumor cells, causing immune suppression and immune evasion, which lead to the occurrence of CC [130]. The alteration in metabolic phenotypes after HPV infection, contributing to the progression of malignant CC is a critical factor [131,132]. Hu et al. elucidated the roles and mechanisms, underlying the biological effects of HPV E6/E7 and IGF2BP2 on the CC progression in-vitro and in-vivo [133]. They demonstrated that the E6/E7 proteins could stimulate aerobic glycolysis, proliferation, and metastasis in the CC cells by modulating the MYC mRNA via IGF2BP2 [133]. The E6/ E7 of HPV16 could enhance the expression of HK2 in glycolysis by increasing c-MYC [134]. These studies suggested complex correlations among the E6/E7, m 6 A methylation modification, promotion of aerobic glycolysis, and CC progression, the mechanisms of which require further studies. m 6 A eraser and CC In the CC, studies on the m 6 A eraser have focused on the FTO and ALKBH5 [126,[135][136][137]. The m 6 A demethylases play a critical role in the CC, including the interaction of FTO with the transcription factors E2F1 and MYC, which significantly reduces their translational efficiency [137]. The overexpression of E2F1 or MYC can compensate for the lack of FTO, which negatively impacts the proliferation and migration of cells, suggesting that these two genes might have mutual regulatory functions in CC cells [137]. Researchers elucidated that the lncRNA GAS5-AS1 could increase the expression and stability of GAS5 through YTHDF2 [126]. They reported that GAS5-AS1 could reduce the m 6 A level of GAS5 by interacting with m 6 A eraser ALKBH5, thereby increasing the stability of GAS5 [126]. Wang et al. suggested that FTO could regulate the fate of lncRNA to promote the development of CC [135]. They demonstrated that the reduction of the m 6 A level could improve the stability of HOXC13-AS in the CC cells [135]. Further investigations in the same study revealed that the HOXC13-AS promoted the epigenetically-mediated upregulation of FZD6 and activation of Wnt/β-catenin for promoting CC proliferation, invasion, and EMT [135]. Chemoradiotherapy is a major therapeutic option in CC treatment [138][139][140]. However, both the acquired and primary resistances to chemoradiotherapy cause treatment failure [140,141]. Therefore, the mechanism of resistance to chemoradiotherapy in CC is needed to be investigated. Numerous studies showed that EMT was involved in drug resistance in cancer [142][143][144][145]. FTO could positively regulate the expression levels of β-catenin by reducing the m 6 A methylation level of its mRNA in EMT [136]. In the OC, the upregulation of the Wnt/β-catenin signaling pathway in EMT involved m 6 A modification [135,146]. Zhou et al. also screened the markers of the Wnt/β-catenin pathway and demonstrated that the canonical Wnt/β-catenin pathway was not involved in the FTO-induced upregulation of β-catenin [136]. 
However, the excision repair crosscomplementation group 1 (ERCC1) was determined as a downstream regulator of the FTO-induced up-regulation of β-catenin, confirming that FTO/β-catenin/ERCC1 axis might play an important role in developing resistance to the chemotherapeutic drugs in CC [136]. m 6 A and immunoregulators in CC The immunosuppressive cells in the TME, such as regulatory cells and MDSCs, affect each other as well as the development of tumors [147,148]. Tumor-infiltrating MDSCs usually induce anti-tumor immune tolerance by inhibiting the proliferation and function of T cells, such as blocking the antigen presentation by the antigen-presenting cells [149]. Ni et al. reported that METTL3 could directly induce the differentiation of MDSCs and tumorassociated MDSCs in-vitro, suggesting that METTL3 might play an important role in the TME of CC [150]. Programmed death-1 (PD-1) is present in apoptotic T-cell hybridomas. It is predominantly present on the surface of activated T cells and B cells as a surface receptor for the activation of T cells. There are two ligands for PD-1, including PD-ligand 1 (PD-L1) and PD-ligand 2 (PD-L2) [151,152]. TME enhances the expression levels of PD-1 molecules in the infiltrating T cells as well as those of PD-L1 and PD-L2 molecules in the tumor cells, thereby leading to constant activation of the PD-1 pathway within the TME. Zhang et al. investigated the effects of m 6 A-related lncRNA modifications on the immune response of CC patients [153]. The results showed a significant increase in the expression levels of several immune checkpoints in the high-risk subgroups associated with m 6 A, including PD-1 and PD-L1, which suggested the potential responses to PD-1 [153]. Prognostic effects of m 6 A in CC The diagnosis and treatment of cancer have been greatly improved over the past decades. However, the 5-year survival rate of the patients remains low. Therefore, accurate prognostic indicators are needed to establish an individualized treatment strategy for CC patients (Table 4). Wang et al. reported that a decrease in the m 6 A methylation level was closely related to the cancer progression and low survival rate, suggesting its potential as a target for the prognosis and treatment of CC [154]. Ma et al. reported that four genes had the prognostic potential for CC, including HNRNPC, KIAA1429, and ZC3H13, and a protective gene YTHDF1 [155]. Pan et al. established a characteristic model, consisting of three genes (ZC3H13, YTHDC1, and YTHDF1), and showed good performance in predicting the survival rates of cervical squamous cell carcinoma (CESC) patients [156]. Moreover, the results were validated using bioinformatics analysis in clinical cohort of CESC [156]. The protein and mRNA expression levels of ZC3H13, YTHDC1, and YTHDF1 were detected in further experiments [156]. The experimental results were consistent with the in-silico results, confirming their prognostic potential in CESC [156]. m 6 A and EC Numerous studies have explored the participation of m 6 A regulators in many functions in the EC, such as cell cycle regulation and self-renewal of CSCs. The role and mechanism of m 6 A regulators in EC are summarized in Fig. 3 and Table 3. Role of m 6 A in the progression of EC m 6 A writer and EC The signal transduction network is a communication line among cells, and is used for perceiving signals, including those from the extracellular environment, and transmitting them to the downstream targets for the proper functioning and maintenance of cells. 
The abnormal changes in the signal transduction pathways are important biological characteristics of the tumor cells. Current studies have shown that the alteration in the signal transduction pathways affects the cellular metabolism and immune response in the tumor cells [157,158]. In EC, the mechanism of m 6 A methylation effects on tumor growth through signaling pathways has been elucidated.

Fig. 3 In EC, m 6 A regulatory proteins contribute to tumorigenesis and metastasis by interacting with various RNAs. METTL14 mutation or reduced expression of METTL3 increases the proliferation and tumorigenicity of EC by activating the AKT pathway. WTAP downregulates CAV-1 expression to activate the NF-κB signaling pathway in EC, promoting EC progression. HIF-1α and HIF-2α activate the expression of ALKBH5 under hypoxic conditions, facilitating the SOX2 expression by demethylating the SOX2 mRNA, leading to the tumorigenesis of EC. ALKBH5 demethylates the target transcript IGF1R and enhances its mRNA stability to promote tumorigenesis and metastasis of EC. FTO promotes HOXB13 protein expression, activates the WNT signaling pathway, and promotes EC invasion and metastasis. PADI2 activates the IGF2BP1 expression and helps in maintaining the mRNA stability and expression of SOX2, thereby supporting the malignancy state of EC. IGF2BP1 recruits PABPC1 to promote PEG10 protein expression, contributing to the tumorigenesis of EC. YTHDF2 inhibits the expression of IRS1 and inhibits the IRS1/AKT signaling pathway, consequently inhibiting the tumorigenicity of EC. YTHDF2-mediated lncRNA FENDRR degradation promotes cellular proliferation by elevating the SOX4 expression in EC.

Liu et al. reported higher expression levels of WTAP in the EC tissues as compared to the adjacent normal tissues [159]. Their further investigations confirmed that the expression of CAV-1 was regulated by WTAP in an m 6 A-dependent manner [159]. They also showed that, after being regulated by WTAP, the CAV-1 could activate the downstream NF-κB pathway [159]. The PI3K/AKT pathway also plays an important role in various biological processes. The dysregulation of the AKT signaling pathway has shown crucial roles in the proliferation and apoptosis of numerous tumor cells [160][161][162]. Some studies showed that the stem cells and cancer cells proliferated with the reduction in the m 6 A methylation [36,163,164]. However, other studies reported that some cancers were related to the high expression of METTL3 and increased m 6 A methylation, which might involve different mechanisms and require more in-depth and detailed studies [35,165,166]. m 6 A reader and EC Researchers have shown that the dynamic m 6 A-modification in mRNAs, particularly in the key transcripts, might alter the physiology of cells [167]. For instance, the decreased m 6 A mRNA methylation could stimulate cellular proliferation by modulating the expression of the critical enzymes, which were involved in the AKT signaling pathway [168]. Hong et al. showed that the overexpressed YTHDF2 could bind to the m 6 A-modified insulin receptor substrate 1 (IRS1), which reduced its translation, thereby blocking the IRS1/AKT pathway [169]. In the same study, the results indicated that several vital proteins could regulate the cellular activities of EC cells through the AKT pathway; these proteins were also responsible for regulating the dynamic equilibrium of EC cells [169].
On the other hand, studies showed that YTHDF2 could also regulate the proliferation of EC cells by affecting the metabolism of lncRNA. According to Shen et al., the expression levels of lncRNA FENDRR in the EC tissues were reduced; however, its m 6 A methylation levels showed a negative effect trend [170]. The subsequent in-vitro experiments in the same study demonstrated that YTHDF2 could recognize the abundance of m 6 A-modified lncRNA FENDRR in the EC cells and degrade it [170]. After expressing the YTHDF2 gene, the expression levels of lncRNA FENDRR were restored, thereby inhibiting proliferation and stimulating the apoptosis of EC cells [170]. Furthermore, they reported that overexpressing the lncRNA FENDRR could reduce SOX4 translation and result in inhibiting EC cell proliferation and promoting cellular apoptosis [170]. These results were consistent with those of the previous study by Liu et al., which reported an adverse effect of the lncRNA FENDRR on the SOX4 expression in CRC [171]. Among the molecular mechanisms of cellular proliferation, cell cycle acceleration is of great importance, which is regulated by the CDK-cyclin complexes and cyclin-dependent kinase inhibitors [172]. A recent study reported that paternally expressed gene 10 (PEG10), a critical factor, which directly modulates the key proteins, was involved in cell cycle proteins [173]. Numerous studies have demonstrated the contributions of PEG10 to cellular proliferation; however, its mechanism has rarely been studied [174,175]. The knockdown of the PEG10 increased the p21 and p27 expression levels [173]. Zhang et al. reported that IGF2BP1 could recognize the m 6 A site in the 3′-UTR of the PEG10 mRNA in EC and recruit the polyadenylate binding protein 1 (PABPC1) to stabilize PEG10 mRNA, thereby increasing its protein expression levels and accelerating the cell cycle [176]. The peptide arginine deaminases (PADIs) family contains five members, including PADI1-4 and PADI6. Except for the PADI6 which has no enzymatic activity and is expressed only in the ovary [177], other PADIs can deaminate positively charged arginine residues in substrate proteins into the neutral non-coding residues called citrulline [178,179]. The expression of PADIs was higher in various malignant tumor tissues as compared to healthy tissues [180,181]. Numerous studies showed that the PADIs-catalyzed protein citrullination could alter signal transduction, cell differentiation, and EMT in a variety of human cancer cells [180,182]. Xue et al., for the first time, reported that m 6 A reader IGF2BP1 could be used as a downstream factor of PADI2 and could regulate it to promote tumor progression in EC [183]. They further showed that PADI2 could interact with MEK1 kinase in the MAPK pathway and catalyze the citrullination, which was beneficial for the phosphorylation of ERK1/2 by MEK1, thereby activating the expression of IGF2BP1. In addition, IGF2BP1 could also bind to the m 6 A site in the 3′-UTR of SOX mRNA to prevent its degradation [183]. This study revealed that the PADI2/MEK1/ERK/ IGF2BP1 pathway could promote the characteristics of carcinogenic tumor cells in EC, and the combination of specific PADI2 and MEK1 inhibitors might provide a novel therapeutic target site for the treatment of the MEK inhibitor-resistant EC patients [183]. m 6 A eraser and EC The insulin-like growth factor (IGF) is involved in many functions in most organs [184][185][186]. IGF1 and IGF2 can affect EC, as observed in both clinical and experimental data [187]. 
Both the IGF1 and IGF2 ligands can activate insulin-like growth factor 1 receptor (IGF1R), a tyrosine kinase receptor present on the cell surface, which is coupled with several intracellular secondary messenger pathways, including Ras/Raf/MAPK and PI3K/ AKT signaling pathways, especially in regulating the normal uterine physiology [188]. Pu et al. reported that ALKBH5 enhanced the stability and translation of IGF1R mRNA by the demethylation of m 6 A and increased the protein levels of COL1A1 and MMP9, thereby promoting the tumorigenesis of EC cells [189]. However, the possibility of involving other signaling pathways cannot be excluded. Other signaling pathways may alter either directly or indirectly due to the changes in ALKBH5 and require further investigation. The role of m 6 A methylation, participating in the Wnt signaling pathway by regulating the related RNAs or proteins, has been elucidated in the CC [56,135,136]. FTO could remove the m 6 A modification of HOXB13 mRNA, abolish the degradation of HOXB13 mRNA mediated by YTHDF2, promote the expression of HOXB13 protein, activate the Wnt signaling pathway, and promote the invasion and metastasis of EC [146]. Hypoxia is an important niche feature of the CSCs, positively affecting the growth of stem cells and tumor progression [190,191]. Hypoxia-inducible factors (HIFs), including HIF-1α and HIF-2α, are the main media of hypoxia and indispensable for the activation and selfrenewal of CSCs; they are strongly associated with tumors [192]. The ability of CSCs to tolerate hypoxia can be attributed to the rearrangement of genes involved in cellular multipotency and differentiation [193]. All these studies further deepen the understanding of the correlations among hypoxia, HIFs, and SOX2 [194][195][196]. Chen et al. demonstrated that the hypoxia and high levels of ALKBH5 could restore the stemness of differentiated endometrial CSCs (ECSCs) and increase the ECSC-like phenotype [197]. In addition, a recent study showed that the changes in mRNA stability were negatively correlated with the expression of these multipotency factors [65]. The m 6 A reader IGF2BP1 could stabilize the degradation of SOX2 mRNA in EC, thereby promoting tumor progression [183]. Similarly, Chen et al. verified that ALKBH5 could stimulate SOX2 mRNA expression by reducing its m 6 A methylation level, thereby increasing the stemness and carcinogenicity of ECSCs [197]. These studies revealed that the decrease in the m 6 A mRNA methylation in the key mRNAs might be a potential mechanism of most EC. These studies also confirmed that m 6 A methylation was a regulatory factor for cell growth. m 6 A and immunoregulation in EC In EC, the correlation between m 6 A methylation modification and TME has rarely been reported. Recently, Ma et al. analyzed the EC patients' data from TCGA and identified the genetic changes in the m 6 A regulatory genes. The results showed a significant correlation between the negative changes in the m 6 A levels and adverse prognostic outcomes [198]. The study identified ZC3H13, METTL14, and YTHDC1 as independent prognostic factors for EC patients [198]. Noteworthy, the expression, mutation, and somatic copy number alterations (SCNAs) of these genes were associated with immune cell infiltration [198]. Prognostic effect of m 6 A in EC The incidence of EC has increased over the past few decades, making it one of the most prevalent gynecologic cancers [199]. In 2020, 417,367 new cases of EC were diagnosed, causing 97,370 deaths [1]. 
The global incidence rate of EC is continuously increasing, while those of several other types of cancer have decreased over the past two decades [200][201][202][203][204]. Despite the better prognosis of EC than that of CC and OC, screening for the high-risk parts of EC patients is imperative due to more likeliness of developing advanced cancer and early death. With the advancements in the studies on m 6 A methylation modification in EC, numerous research groups have reported the prognostic potential of m 6 A regulatory factors and m 6 A-related genes in EC ( [207]. These lncRNAs were verified to participate in the development of endometriosis by modulating m 6 A-related enzymes, suggesting that these RNAs might be associated with the diagnosis and treatment of EC [207]. Conclusions and perspectives In this review, the studies on the m 6 A methylation modification in OC, CC, and EC were summarized from the aspects of tumor development, immune microenvironment, and prognosis. From the existing studies, it was concluded that the abnormal expression of the m 6 A regulator in gynecological tumors might lead to an increase or decrease in the m 6 A modification level of RNA. The m 6 A modifications of RNA might affect the fate of various RNAs and lead to the proliferation, invasion, and metastasis of tumors as well as also alter the tumor immune microenvironment of the patients, thereby participating in the occurrence and development of tumors. The advancements in medical technology have greatly improved the survival rate of gynecologic cancer patients than before. However, due to the lack of specific biomarkers, early diagnosis is still challenging. At the same time, the resistance to the anti-tumor drugs in some patients also urges researchers to deepen the understanding of tumorigenesis and identify new immune targets for the development of anti-tumor drugs. This review summarized the results of numerous studies, which showed that the m 6 A regulator and related genes could be used as potential biomarkers or prognostic indicators for the early diagnosis of gynecologic cancers. Studying the mechanism of the m 6 A regulator in tumor development also supported this view. The role of m 6 A modification in the drug resistance mechanism has also been reported, which showed that the inhibitors of m 6 A regulators might have the potential of being used as anti-tumor drugs in drug-resistant patients. At present, numerous studies have reported the mechanism of m 6 A-promoting effects on the development of gynecologic cancer; however, the knowledge is still insufficient for a deeper understanding of tumorigenesis. The mechanisms, explaining the upregulation of the m 6 A regulator in gynecologic cancer and their relationship with oncogenes, are still unclear. Therefore, further studies are needed in the future to explain these mechanisms in order to develop effective therapeutic strategies.
Socioemotional and Behavioral Problems of Grandchildren Raised by Grandparents: The Role of Grandparent–Grandchild Relational Closeness and Conflict This study examined the associations of grandparent–grandchild relational closeness and conflict with grandchildren's socioemotional and behavioral problems, including emotional symptoms, conduct problems, hyperactivity, peer problems, and abnormal prosocial behaviors. We analyzed primary cross-sectional survey data collected from custodial grandparents in the United States using logistic regression models. The results indicated that grandparent–grandchild relational closeness was significantly associated with lower odds of custodial grandchildren having emotional symptoms, conduct problems, peer problems, and abnormal prosocial behaviors, whereas grandparent–grandchild relational conflict was significantly associated with higher odds of emotional symptoms, conduct problems, hyperactivity, peer problems, and abnormal prosocial behaviors. Implications for increasing grandparent–grandchild relational closeness and decreasing relational conflicts among grandparent-headed families are discussed, which might improve grandchildren's socioemotional and behavioral well-being. Introduction In the United States, nearly six million children live with their grandparents for various reasons, such as child maltreatment, parental substance abuse, domestic violence, incarceration, and military deployment [1,2]. Due to COVID-19, an increased number of children lost their parents, and grandparents stepped in to care for them [3]. In the United States, the formation of grandfamilies is influenced by macrosystems (e.g., welfare policies, cultural traditions) and microsystems (e.g., birth parents' divorce, parental substance abuse, and domestic violence; [4]). Grandparents sometimes assume caregiving responsibilities on short notice due to an emergency placement by a social service agency [5]. Grandfamilies must learn to adapt to these life changes by adjusting their role, routines, and family dynamics [6]. Grandfamilies provide caregiving either formally (i.e., grandparents receive legal custodial status, foster care, or adoption) or informally (i.e., private arrangement; [7]). Grandparent-headed households are the most likely to be in poverty and more likely to receive public assistance and are at risk of living in inadequate housing conditions [8]. Social isolation, stress, poor health, and financial strain are issues that many grandparents raising grandchildren often face [9,10]. Regarding the stability of grandfamilies, children in the care of grandparents compared to other types of out-of-home care have greater stability [11]. In 2018, the Supporting Grandparents Raising Grandchildren Act was passed by the U.S. Congress and called for establishing a federal advisory council to make recommendations that would help grandparents meet the needs of children in their care and maintain their own emotional well-being and mental and physical health. As such, grandparents raising grandchildren has become even more common; thus, further understanding of the well-being of custodial grandchildren is needed.
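The abstract above states that the survey data were analyzed with logistic regression models relating relational closeness and conflict to the odds of each socioemotional or behavioral problem. As a hedged sketch of that type of analysis only, and not the authors' actual data, variables, or code, a single such model could be set up in Python as follows; every variable name and value below is hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: one row per custodial grandchild.
# Variable names are illustrative, not the study's actual measures.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "closeness": rng.normal(3.5, 0.8, n),   # e.g., mean of relational closeness items
    "conflict": rng.normal(2.0, 0.9, n),    # e.g., mean of relational conflict items
    "child_age": rng.integers(5, 18, n),    # example covariate
})

# Simulated binary outcome (e.g., clinically elevated conduct problems).
logit_p = -1.0 - 0.6 * (df["closeness"] - 3.5) + 0.8 * (df["conflict"] - 2.0)
df["conduct_problems"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic regression: odds of the problem as a function of closeness, conflict,
# and a covariate; the actual study fits one such model per outcome.
model = smf.logit("conduct_problems ~ closeness + conflict + child_age", data=df).fit()
print(np.exp(model.params))  # odds ratios: < 1 suggests lower odds, > 1 higher odds
```

Exponentiating the fitted coefficients gives odds ratios, which is how statements like "closeness is associated with lower odds of conduct problems" are typically read off such a model; the real analysis would use the study's own outcomes and covariates rather than these placeholders.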
Custodial Grandchildren's Socioemotional and Behavioral Problems An estimated 21% to 33% of grandchildren raised by grandparents (i.e., custodial grandchildren) experience socioemotional or behavioral problems [12]. These co-occurring problems are usually associated with negative outcomes, such as poor academic performance and family functioning problems [13,14]. These socioemotional and behavioral problems may include emotional problems, conduct problems, hyperactivity, peer problems, and abnormal prosocial behaviors [15]. Given the significant impact of child socioemotional and behavioral problems on child development, it is important to unpack the risk and protective factors associated with custodial grandchildren's socioemotional and behavioral outcomes. A recent systematic review summarized various factors at both the grandchild and grandparent levels associated with custodial grandchildren's socioemotional and behavioral well-being [16]. At the custodial grandchild level, some known predictors linked with their socioemotional and behavioral outcomes include custodial grandchildren's race, gender, age, prior traumatic experiences, social support, and self-esteem [17][18][19][20][21]. More factors were found at the grandparent level that contributed to custodial grandchildren's socioemotional and behavioral outcomes. As custodial grandchildren's primary caregivers, grandparents significantly contribute to grandchildren's well-being. For instance, custodial grandparents' education, household income, marital stress, coping resources, and family function can affect grandchildren's behaviors [22,23]. Furthermore, custodial grandparents' parenting styles [22] and mental health status (e.g., depression; [13,22,23]) also can affect grandchildren's socioemotional and behavioral outcomes. Although some factors associated with grandchildren's socioemotional and behavioral well-being are known, very limited research has examined the role of the grandparent-grandchild relationship in determining custodial grandchildren's socioemotional and behavioral problems. Thus, this study seeks to examine the association between the grandparent-grandchild relationship and grandchildren's socioemotional and behavioral problems, including emotional problems, conduct problems, hyperactivity, peer problems, and abnormal prosocial behaviors. The results of this study provide important implications with a focus on improving the relationship between grandparents and grandchildren, which may help decrease custodial grandchildren's socioemotional and behavioral problems.
Attachment Theory Attachment theory provides a theoretical foundation for understanding the impact of family relationships, particularly the relationship between grandparents and grandchildren, on custodial grandchildren's socioemotional and behavioral problems.Attachment is developed in the first few years of childhood, and children may have more than one attachment figure, like parents or grandparents [24].The key principles of attachment theory include: (1) if the caregiver is nurturing, the child will achieve a secure attachment.This is demonstrated through proximity-seeking (i.e., protest at separation); (2) secure attachment extends beyond childhood into the social competence exhibited as an adult, and the child uses an attachment figure as a secure base to explore the world (i.e., strangers and objects); and (3) the child uses an attachment figure to seek comfort in times of stress [25,26].Related to attachment, the profound impact of the positive and secure relationship between children and primary caregivers on children's lives is well acknowledged.When children experience supportive and warm relationships during their early childhood, they develop a sense of security [27].This secure attachment further develops into expectations of trustworthiness, which children internalize and take with them throughout their lives.On the other hand, negative attachment experiences in close relationships between caregivers and children can lead to mistrust and hurt future relationships [28].Overall, attachment theory has been widely used to understand the importance of close and warm relationships in child development. In the context of grandparenting, grandchildren who are separated from their biological parents have their attachment bonds with their biological parents disrupted.Separating from biological parents may be related to adverse outcomes for children, including mental health problems [29].However, living with grandparents may rebuild their attachment relationships by allowing them to seek comfort from grandparents when they feel distressed.For some grandchildren, grandparents consistently serve as a source of emotional support when they have not received this support from their parents since they were born.However, for other grandchildren who may have only recently transitioned to the grandparent's care, it may be unclear whether their attachment to their grandparents is secure or insecure [30].Disruptions in attachment, whether temporary or permanent, result in cumulative negative effects on the grandchild's development [24].Thus, it is important to unpack the relationship between grandparents and grandchildren.Having a close relationship with custodial grandparents may create a secure attachment relationship that fosters grandchildren's healthy socioemotional and behavioral development. 
Grandparent-Grandchild Relationship Relationships are patterns of interactions, expectations, beliefs, and effects organized at an abstract level that captures observable behaviors [31].Relationships between grandparents and grandchildren are influenced by many factors, including sociohistorical (e.g., economic pressures, technological changes), cultural (e.g., roles and values regarding grandparenting), family structure (e.g., marital status, custodial grandparenting), and individual differences (e.g., health, gender; [32]).Characteristics of a secure attachment relationship are keeping track of a person, using that person as a secure base from which to explore, being comforted by that person, and being attuned to facial expressions and emotions [33].Pianta (1992) suggested that relationships can be assessed by relational closeness and conflict between individuals.Relational closeness includes mutual expressions of warmth and positive affect, while relational conflict can include over-control or under-control and/or having dependent or insecure relationships [33]. Grandparents' perceptions of their relationships with their grandchildren can serve as an indicator of the quality of the relationship.Due to prior traumatic stress and disrupted attachment with primary caregivers, custodial grandchildren usually have a high level of socioemotional and behavioral problems [2].Custodial grandchildren who struggle with attachment challenges, along with their existing socioemotional and behavioral problems, may have difficulty maintaining close relationships with their grandparents.Because of generational differences and complex family dynamics, grandchildren and grandparents may encounter daily conflicts with each other when they live together.Increased family conflicts may further exacerbate grandchildren's existing socioemotional and behavioral problems, which has been validated among parent-headed households [34,35]. When grandchildren are placed in grandparent-headed households, having a close grandparent-grandchild relationship is essential and may be a therapeutic means for improving and correcting grandchildren's socioemotional and behavioral problems and preventing the development of future adverse outcomes [36].Although there is increasing evidence of the association between family relationships and children's socioemotional and behavioral problems, a paucity of research has examined the effects of grandparent-grandchild relationships on grandchildren's socioemotional and behavioral problems.To the authors' best knowledge, no prior studies have examined the impact of grandparent-grandchild relational closeness and conflict on custodial grandchildren's emotional symptoms, conduct problems, hyperactivity, peer problems, and prosocial behaviors among grandparent-headed families.Only a few studies have focused on the impact of the grandparent-grandchild relationship on other aspects of well-being among grandchildren.For example, a study found that grandparents' and grandchildren's perceived relationship quality was associated with grandchildren's subjective well-being [37].Similarly, another study indicated that the grandparent-grandchild relationship predicted grandchildren's social competence [38].More specifically, closeness between grandparents and grandchildren was positively associated with grandchildren's social competence, whereas conflict between grandparents and grandchildren was negatively associated with grandchildren's social competence.Likewise, Hayslip et al. 
(2019) revealed that relationship quality between grandparents and grandchildren predicted grandchildren's relational competence in the United States [39].Poehlmann et al. (2008) also found that poor family relationships were associated with more behavioral problems among grandchildren in a sample of 79 grandparent-headed families in the United States [24].In particular, custodial grandchildren with less responsive grandparents were more likely to exhibit aggression, conduct, and hyperactivity issues.Likewise, Pittman et al. (2022) suggested that the relationship quality between grandparents and grandchildren was associated with grandchildren's psychological outcomes (e.g., depressive symptoms, self-worth, close friendships, and romantic relationships) [40]. Although there is a limited amount of research on the effect of grandparent-grandchild relationships on grandchildren's socioemotional and behavioral outcomes, existing rich literature on child outcomes as a result of the child-parent relationship (e.g., [34,35,41,42]) provides a solid foundation to understand relational closeness and relational conflict in the context of grandparenting.Generally, relational closeness is a protective factor against children's socioemotional and behavioral problems.In contrast, relational conflict is a risk factor, but results across studies were slightly different depending on which children's outcomes were included and how these outcomes were measured [34,35,41,42]. Study Purpose This study aims to answer the following research question: What are the associations of grandparent-grandchild relational closeness and conflict with grandchildren's socioemotional and behavioral problems, including emotional symptoms, conduct problems, hyperactivity, peer problems, and prosocial behaviors?The research hypotheses were: (1) Grandparent-grandchild relational closeness is associated with custodial grandchildren's lower odds of emotional symptoms, conduct problems, hyperactivity, peer problems, and abnormal prosocial behaviors; and (2) Grandparent-grandchild relational conflict is associated with custodial grandchildren's higher odds of emotional symptoms, conduct problems, hyperactivity, peer problems, and abnormal prosocial behaviors. Method 2.1. Study Design and Data Collection Procedure In the present study, we analyzed cross-sectional survey data collected from custodial grandparents raising grandchildren in the United States.The data featured two sources: a South Carolina community-based sample (n = 71) and a sample from Qualtrics Panels across the United States except for South Carolina (n = 216).We recruited grandparents raising grandchildren in South Carolina via various community partners, including state agencies (e.g., Department of Social Services, Department on Aging), local nonprofit organizations serving kinship families, the local foster parent association, schools, and churches, from May 2021 to February 2022.The survey link was shared with our partners via flyers and email to be used in listservs or online newsletters.We also provided some hard copies of the survey to partners.Most survey data in South Carolina were collected electronically, but a handful of surveys were returned in hard copies.In addition to data collected from South Carolina, we recruited other grandparents raising grandchildren across the United States except for South Carolina via Qualtrics Panels from January to February 2022.Qualtrics Panels is an online survey panel that includes millions of U.S. 
residents who regularly take online surveys [43].We used a few inclusion criteria to select custodial grandparents who raised grandchildren: (a) served as a primary caregiver for at least one grandchild younger than 18 years old and (b) the grandchildren lived apart from their biological parents in the United States.To ensure we selected eligible custodial grandparents, we further limited the grandparents' age to 40 years old or older.To eliminate the possibility of recruiting custodial grandparents in South Carolina who had already filled out our survey, we asked participants if they had filled out the South Carolina survey before.Participants were provided a USD 15 e-gift card for completing the survey. Sample Selection In the present study, we limited our sample to custodial grandparents who raised grandchildren between ages 4 and 17 years old (n = 255) because the measure of the dependent variable, the Strengths and Difficulties Questionnaire, was only applicable to children in this age range. Measures 2.3.1. Dependent Variables Dependent variables were grandchildren's emotional symptoms, conduct problems, hyperactivity, peer problems, and abnormal prosocial behaviors in the past six months.They were measured using the Strengths and Difficulties Questionnaire (SDQ; 25 items; [15]).The response options were 1 = not true, 2 = somewhat true, and 3 = certainly true.The SDQ has two versions for children between 4 and 17 years old.Most items were the same, but for items with slight differences, we adapted item languages in the survey.Each subscale had five items.Sample items are "Often complains of headaches, stomachaches, or sickness" for emotional problems; "Often fights with other children or bullies them" for conduct problems; "Easily distracted, concentration wanders" for hyperactivity; "Gets along better with adults than with other children" for peer problems; and "Considerate of other people's feelings" for prosocial behaviors.Some items were reverse coded, and summative scores of these items were calculated first, then these behavioral problems were categorized into four categories (close to average, slightly raised, high, very high) according to brief four-band cutoffs (Department of Health and Ageing, n.d.).Prior empirical research has suggested using 90% cutoffs (i.e., a combination of "close to average" and "slightly raised") of the SDQ to estimate the general magnitude of mental health problems [12,44].Therefore, we collapsed "close to average" and "slightly raised" to normal or borderline, and "high" and "very high" to abnormal [45], ending up with five dummy coded dependent variables (1 = abnormal and 0 = normal or borderline).The reliabilities of emotional symptoms, conduct problems, hyperactivity, peer problems, and prosocial behaviors subscales were 0.82, 0.81, 0.87, 0.65, and 0.79, respectively, in the current sample. 
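The scoring workflow described above (reverse coding, summative subscale scores, and four-band cutoffs collapsed to a binary indicator) can be made concrete with a short sketch. The item names, the choice of reverse-coded item, and the numeric cutoff below are placeholders only; the study applied the official SDQ brief four-band cutoffs, which are not reproduced in the text.

# Illustrative sketch of the SDQ dichotomization described above.
# Item names, the reverse-coded item, and the cutoff are hypothetical;
# the official four-band cutoffs should be used in practice.
import pandas as pd

CONDUCT_ITEMS = [f"sdq_conduct_{i}" for i in range(1, 6)]  # five items per subscale
REVERSE_CODED = ["sdq_conduct_2"]                          # hypothetical reverse-coded item
ABNORMAL_CUTOFF = 11                                       # placeholder, not the official cutoff

def conduct_abnormal(df: pd.DataFrame) -> pd.Series:
    """Return 1 = abnormal, 0 = normal or borderline for the conduct subscale."""
    items = df[CONDUCT_ITEMS].copy()
    # Responses: 1 = not true, 2 = somewhat true, 3 = certainly true,
    # so reverse coding maps a response x to 4 - x.
    for col in REVERSE_CODED:
        items[col] = 4 - items[col]
    total = items.sum(axis=1)          # summative subscale score
    # Collapse the "high" and "very high" bands into the abnormal category.
    return (total >= ABNORMAL_CUTOFF).astype(int)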
Independent Variables: Grandchild-Grandparent Relational Closeness and Conflict The independent variables, grandchild-grandparent relational closeness and conflict, were measured using an adapted version of the Child-Parent Relationship Scale (15 items) in the past six months, including relational closeness (seven items) and conflict (eight items) subscales [33].We modified "child" to "grandchild" to fit the context of this study.The response options ranged from 1 (definitely does not apply) to 5 (definitely applies).Items included "I share an affectionate, warm relationship with my grandchild," "My grandchild openly shares his or her feelings and experiences with me," and "My grandchild remains angry or is resistant after being disciplined."Summative scores of relational closeness and conflict were used in the present study, and the reliabilities of closeness and conflict were 0.83 and 0.90, respectively. Control Variables Control variables at both grandchild and grandparent levels were included in the models.These variables included the grandchild's race and ethnicity (0 = White, 1 = Black, and 2 = other, including American Indian or Alaska Native, Asian or Asian American, Hispanic or Latinx, and more than one race), age (measured using years), gender (1 = female and 0 = male), and disability status (1 = yes and 0 = no).Furthermore, we controlled for grandparents' characteristics, including race and ethnicity (0 = White, 1 = Black, and 2 = other, including American Indian or Alaska Native, Asian or Asian American, Hispanic or Latinx, and more than one race), gender (1 = female and 0 = male), age (measured using years), and marital status (1 = married and 0 = other, including divorced, separated, single, widowed, and other).We also controlled for grandparents' parenting stress, financial stress, and depression.Parenting stress was measured using one item: "I feel my parenting stress has been increased since the COVID-19 pandemic" (1 = strongly disagree to 5 = strongly agree). Grandparent financial stress was measured using the Consumer Financial Protection Bureau Financial Well-Being Scale, including three items: (a) "Because of my money situation, I feel like I will never have the things I want in life"; (b) "I am just getting by financially"; and (c) "I am concerned that the money I have or will save won't last long" (Consumer Financial Protection Bureau, n.d.).All three items were reverse coded, and the response options ranged from 1 (completely) to 5 (not at all).An average score was used in the current study, and the reliability was 0.84 in the present sample.Grandparent depression was measured using a shortened version (10 items; [46]) of the Center for Epidemiologic Studies Depression Scale.The scale had four response options (0 = rarely or none of the time (less than 1 day) to 4 = most of the time (5-7 days)).A summative score was used, and the reliability of this scale in the present sample was 0.91.Due to the differences in our data collection methods, we also controlled for the data source (1 = Data collected from Qualtrics Panels and 0 = Data collected from South Carolina). 
Strategies to Increase Data Integrity Our survey was initially attacked by survey bots (i.e., the survey was filled out by computer programs with random responses), but we used a variety of strategies to prevent survey bots and ensure data integrity.These strategies included adding open-ended questions, using inattentional checks, asking identical questions at different points of the survey, embedding reCAPTCHA, and examining survey response patterns (see more details in [47]). Data Analysis Data cleaning, descriptive analyses, and logistic regression were conducted using Stata 16.0.All variables except for grandparent race (missing: 1.57%) and financial stress (missing: 0.39%) had no missing data.Grandchildren's emotional symptoms, conduct problems, hyperactivity, peer problems, and prosocial behaviors were not normally distributed, which violated the assumption of linear regression models.Thus, we used 90% cutoffs of the SDQ to dichotomize emotional symptoms, conduct problems, hyperactivity, peer problems, and prosocial behaviors and conducted logistic regression models.Assumptions of logistic regression were examined, and no violation of these assumptions was identified in the present study.This study was approved by the affiliated university's institutional review board. Sample Characteristics The present study's sample characteristics are presented in Table 1.In terms of grandchildren's characteristics, 60% (n = 151) were White, followed by Black (24.31%; n = 62) and other (15.69%;n = 40).The average age of grandchildren was 9.49, ranging from 4 to 17 years old.Half (50.20%; n = 128) were boys, and a small proportion (10.20%; n = 26) had disabilities.Regarding custodial grandchildren's socioemotional and behavioral problems, 18.43% (n = 47) had abnormal emotional symptoms, 17.65% (n = 45) had abnormal conduct problems, 12.55% (n = 32) had abnormal hyperactivity, 24.31% (n = 62) had abnormal peer problems, and 29.41% (n = 75) had abnormal prosocial behaviors.Custodial grandparents reported a high level of relational closeness (30.84 out of 35) and a low level of conflict (16.82 out of 38) with their grandchild.Many grandparents were White (n = 168; 66.93%), followed by Black (n = 58; 23.11%) and other (n = 25; 9.96%).About 73% (n = 185) of grandparents were women, and slightly half (54.90%) were married.The average time that grandparents cared for grandchildren was 4.43 years, and 36.96% had child welfare agency involvement.Grandparents rated their financial stress as 2.88 out of 5, indicating a relatively high level.Custodial grandparents rated their parenting stress as 3.45 out of 5, indicating an even higher parenting stress level.The grandparents' depression was 8.31 on a range from 0 to 27.Of note, the cutoff for clinically significant depressive symptoms is 8 [46], suggesting that many of the custodial grandparents in the current sample had a high level of depressive symptoms. 
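The logistic regressions described in the Data Analysis subsection were estimated in Stata 16.0. The sketch below shows an equivalent specification in Python with statsmodels, purely to make the model structure concrete: one dichotomized SDQ outcome regressed on relational closeness, relational conflict, and the grandchild- and grandparent-level controls. The data file and all column names are hypothetical.

# Equivalent specification of one of the logistic models described in the
# Data Analysis subsection (the study itself used Stata 16.0).
# The data file and all column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("grandfamilies_survey.csv")

# One dichotomized SDQ outcome (1 = abnormal) on closeness, conflict,
# and the grandchild/grandparent controls listed above.
formula = (
    "emotional_abnormal ~ closeness + conflict"
    " + C(gc_race) + gc_age + gc_female + gc_disability"
    " + C(gp_race) + gp_age + gp_female + gp_married"
    " + parenting_stress + financial_stress + depression + qualtrics_sample"
)

model = smf.logit(formula, data=df).fit()
print(model.summary())
print(np.exp(model.params))  # coefficients expressed as odds ratios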
Discussion This study examined the role of relational closeness and conflict between grandparents and grandchildren in influencing custodial grandchildren's socioemotional and behavioral problems.The results of this study partially confirmed our research hypotheses that having a close relationship between grandparents and grandchildren would be associated with lower odds of custodial grandchildren's emotional symptoms, conduct problems, peer problems, and abnormal prosocial behaviors, but not hyperactivity.Regarding the second hypothesis, results suggested that grandparent-grandchild relational conflict was associated with the higher odds of having all these problems.These major findings highlight the significant need to improve grandparent-grandchild relationships, which may help decrease grandchildren's socioemotional and behavioral problems. The current study further identifies that the relational closeness between the grandparent and grandchild is associated with lower odds of having emotional symptoms, conduct problems, and peer problems.This is aligned with attachment theory, where a positive and close relationship helps build grandchildren's security.Unexpectedly, relational closeness did not predict grandchildren's hyperactivity.Hyperactivity is a symptom of attentiondeficit/hyperactivity disorder that may cause individuals to run, jump, climb, and move constantly [48].Genetics plays a role in developing hyperactivity among children, which may explain why the relationship and other socioenvironmental factors in the present study did not significantly predict custodial grandchildren's hyperactivity [49].In addition, the present study found the significant contribution of relational conflict in predicting grandchildren's emotional symptoms, conduct problems, peer problems, and hyperactivity, and a marginally significant association with abnormal prosocial behaviors.Prior literature has indicated that family members may have poor emotion regulation and conflict resolution skills in families with high conflicts, exacerbating children's behavioral problems [50].Increased relational conflict likely increases parenting and family stress, and children may internalize the stress, ending up with more socioemotional and behavioral problems [51]. In conclusion, the present study's findings on the associations of relational closeness and conflict with grandchildren's socioemotional behavioral outcomes were mostly consistent with findings among parent-headed families (e.g., [34,35,41,42]). 
Our study also found other interesting variables that contributed to custodial grandchildren's socioemotional and behavioral problems.For instance, we identified that grandchildren of other races and ethnicities had more conduct problems than White grandchildren.This finding was similar to a previous result where children of other races and ethnicities had more externalizing problems than White children in a sample of children in kinship care [20].However, we collapsed a few races and ethnicities (i.e., American Indian or Alaska Native, Asian or Asian American, Hispanic or Latinx, and more than one race) in the "other" type, making the explanation of why children of other races/ethnicity having more problems harder.Not surprisingly, female custodial grandchildren had fewer abnormal conduct problems and hyperactivity than their male counterparts, which has been validated in previous studies (e.g., [2]).Grandparents' age was positively associated with grandchildren's hyperactivity, which may indicate that older grandparents perceive grandchildren's hyperactivity more severely because of their low energy level.Also, grandmothers were more likely to report custodial grandchildren's conduct problems and hyperactivity than grandfathers, which might be because grandmothers as primary caregivers were more likely to notice their grandchildren's behavioral problems.It is important to understand whether grandchildren are more likely to exhibit emotional and behavioral problems when cared for by grandmothers due to how grandmothers are conditioned to care for grandchildren or perhaps because grandfathers are ignoring symptoms that may need to be brought to the attention of service providers to better assist the family.Another explanation is that grandmothers were more likely to be single, while grandfathers were more likely to have a partner sharing their grandparenting responsibilities. Another interesting finding was that custodial grandchildren's disability status was associated with their abnormal hyperactivity and peer problems.Because hyperactivity is a symptom of attention-deficit/hyperactivity disorder, it is not surprising to find these significant associations.In terms of custodial grandchildren's peer problems, grandchildren with disabilities are at higher risk of experiencing discrimination and social isolation [52,53], and these experiences could contribute to their problems with peers. Similar to much of the prior literature, custodial grandparents' depression was associated with the higher odds of grandchildren's emotional symptoms, peer problems, and abnormal prosocial behaviors [12,23,54].Depression hampers positive parenting and the interactions between a caregiver and child [55].Prior research also has indicated that caregivers with depressive symptoms are more likely to report children's socioemotional and behavioral problems [56].This highlights the importance of addressing grandparents' mental health to improve grandchildren's socioemotional and behavioral well-being. 
It was important to note the impact of COVID-19 on the relationships between grandparents and grandchildren and their influences on grandchildren's elevated socioemotional and behavioral problems because our survey data were collected from May 2021 to February 2022 in the context of the COVID-19 pandemic.For instance, prior empirical evidence has suggested that the increased tension in the relationship between caregivers and children exacerbated the children's socioemotional and behavioral problems during COVID-19 [57,58]. Limitations and Directions for Future Research This study has some limitations that should be noted when interpreting results.First, the current study did not control for some COVID-19-related stressors in the analyses.This is a limitation of our study because we cannot overlook the impact of COVID-19 on key variables of the study.Second, the generalizability of the study is limited due to our sampling strategy and data collection methods.Although this study involved grandparents from across the United States, combining the two samples resulted in almost a fourth of participants being from South Carolina.Additionally, the racial demographic in South Carolina was mostly Black and White grandparents, whereas the survey via Qualtrics Panels was open to all races and ethnicities; therefore, more Black grandparents were identified in the South Carolina sample.Further, the average age of South Carolina grandparents was older than the Qualtrics sample.These differences might have skewed our findings.Another limitation of our study is that grandparents were the sole source of data.Grandparents assessed grandparent-grandchild relational closeness and conflict and grandchildren's socioemotional and behavioral problems.Ideally, having more than one data source would reduce the likelihood of bias in data collection.Further, grandparents reported on their relationships with their grandchildren and their grandchildren's socioemotional and behavioral problems retrospectively (i.e., recalled these in the past six months when they filled out the survey), which might have influenced the accuracy of their assessments.Moreover, the validity of the grandparent-grandchild relational closeness and conflict scales were not examined in the present study.The reliability of the SDQ peer problems subscale was relatively low (0.65).Last, the study was cross-sectional; therefore, causal linkages between grandparent-grandchildren relationships and grandchildren's socioemotional and behavioral problems could not be determined. 
These limitations point out some directions for future research.First, researchers should use a more rigorous sampling strategy to recruit grandparents when feasible.Second, future studies could consider using multiple informants (e.g., grandchildren, teachers) to collect data when feasible, reducing the risk of reporting bias.In particular, incorporating grandchildren's perspectives to understand their relationships with grandparents would be important.Third, a longitudinal study would lead to a better understanding of the association between the grandparent-grandchild relationship and grandchildren's socioemotional and behavioral problems over time.In addition, future research can be enhanced by exploring more patterns of grandparent-grandchild relationships and their associations with grandchildren's socioemotional and behavioral outcomes.For instance, future research can examine factors that lead to negative family relationships between grandparents and grandchildren and their relationships with biological parents.It would be interesting to examine the factors moderating the relationship, such as child age and timing of custodial grandparent household formation.Lastly, future research could examine the construct validity of the grandparent-grandchild relational closeness and conflict scales. Implications for Practice Our study has several important implications for practitioners serving grandparents raising grandchildren.First, our results emphasize the importance of improving the relationship between grandparents and grandchildren, which may help decrease custodial grandchildren's socioemotional and behavioral problems.Due to the potentially severe consequences of deteriorating family relationships on both grandparents and grandchildren, social workers, family counselors, educators, and other professionals in the community who work with grandparent-headed families need to understand the complexities of these familial relationships.In addition to a thorough and shared understanding of the relationships between grandparents and grandchildren, social workers and other professionals who aim to improve grandchildren's socioemotional and behavioral problems may pay attention to the relational closeness and conflicts between grandparents and grandchildren.To improve the relational closeness between grandparents and grandchildren, providing relationshipfocused interventions could be an option.Practitioners and intervention researchers could consider implementing existing evidence-based interventions with grandparent-headed families.For example, it might be beneficial to implement child-parent relationship therapy (CPRT) among grandparent-headed families.CPRT, adapted from Guerney's filial therapy [33], is intended for parents of children aged 3 to 10 years who experience emotional or behavioral problems [33].The primary purpose of CPRT is for parents to learn therapeutic ways of responding to their children's socioemotional and behavioral problems via improving the parent-child relationship [33,59].Prior research has indicated that CPRT enhances parent-child relationships (e.g., [59]) and reduces children's problem behaviors.Adapting this intervention to meet the needs of grandparent-headed families may benefit grandchildren and grandparents. 
Social workers, school counselors, and teachers should also advocate for more schoolbased and health care-based screenings for emotional and behavioral problems among custodial grandchildren who are experiencing family conflict to serve as a preventive measure [60].School counselors and teachers may serve as good partners with custodial grandparents to monitor children's behaviors.According to our results, screenings should focus on custodial grandchildren of other races and ethnicities, male and older grandchildren, and grandchildren with disabilities.Furthermore, our study also identified that grandparents with depressive symptoms were more likely to report grandchildren's emotional symptoms, peer problems, and abnormal prosocial behaviors, which is consistent with prior research (e.g., [56]).Therefore, it is important for them to have access to mental health services and other informal emotional support resources, such as online or face-to-face support groups. Conclusions The current study examined the associations of grandparent-grandchild relational closeness and conflict with custodial grandchildren's socioemotional and behavioral problems.The findings show that relational closeness was associated with the lower odds of emotional symptoms, conduct problems, peer problems, and abnormal prosocial behaviors, whereas relational conflict was linked to the higher odds of emotional symptoms, conduct problems, peer problems, and hyperactivity among grandchildren.These findings provide implications for social workers to improve relational closeness and decrease relational conflict between grandparents and grandchildren, which may decrease custodial grandchildren's socioemotional and behavioral problems.
Energy Consumption and Quality of Pellets Made of Waste from Corn Grain Drying Process: The aim of this study was to assess the possibility of managing the waste resulting from the corn grain drying process as a biofuel characterized by low energy consumption in the compaction process and to evaluate the quality of the pellets made of this waste. The waste was agglomerated in the form of corn grain (CG), husks (CH), and cobs (CC), and their mixtures were prepared in a 4:1 volume ratio. The results of the analyses showed that CH was the most advantageous material for agglomeration due to the process's low energy consumption (47.6 Wh·kg −1 ), while among the prepared mixtures, CC-CH was the most energy-efficient (54.7 Wh·kg −1 ). Pellets made of the CH-CC mixture were characterized by good quality parameters, with a satisfactory lower heating value (13.09 MJ·kg −1 ) and low energy consumption in the agglomeration process (55.3 Wh·kg −1 ). Moreover, data analysis revealed that the obtained pellets had a density (1.24 kg·dm −3 ) and mechanical durability (89%) that are important for their transport and storage. The findings of this study suggest that the use of waste from the corn grain drying process, in the form of pellets, may allow obtaining granules of differing quality. Introduction Biomass derived from the agricultural sector can primarily serve as a source of food and fodder, as well as raw material for the production of bioenergy [1][2][3][4]. The direct use of crops for energy purposes is promoted by the increasing demands of global and local food and energy markets, which are often regulated by authorities [5,6]. However, due to the sustainable development of the agricultural and energy sectors [7], research and industrial projects are being carried out to enable the extensive use of inedible plant parts [8,9] and to minimize residues [10,11] and residual biomass [12,13]. In recent years, there has been an increasing interest in the use of waste generated during the harvest of biomass, as well as from raw plant material processing and food or feed production, to meet specific quality or technological requirements [14][15][16]. On the other hand, legal regulations have been imposed, defining the parts of plants that can be used for energy purposes and the quality standards that should be fulfilled by the fuels obtained from plant parts [11,17,18]. The thermal conversion of biomass is associated with waste combustion [19,20] and makes use of other thermochemical processes [21,22], such as pyrolysis, gasification, or torrefaction [23,24]. Among crops, the largest amount of waste during agricultural production [20,25] or postharvest processes [26] can result from corn [27,28]. Corn has a high yield potential and a variety of applications [29][30][31]. The increase in corn production observed in recent years is mainly due to the rising demand for biofuel production in the industrial processing sector [32,33]. Corn grains are of different shapes. The harvesting of these grains, usually with the use of a combine harvester, results in impurities and damaged grain [34,35]. Materials and Methods The mixtures were prepared in a volume ratio (v/v), which allowed simplifying their preparation for other processes, such as pelleting, and avoiding the need to weigh material for fuel preparation. On the other hand, the use of mass proportions would require additional work for biofuel preparation, which would reduce the profitability of waste disposal.
The characteristics of the materials used are described in detail in the paper by Maj et al. [37]. The corn cobs required grinding to 3 mm fractions before pelletizing. Prior to agglomeration, the moisture content of the material (Table 1) was assessed by thermogravimetry in accordance with EN ISO 18134. The complete research procedure is illustrated in Figure 1. The analyzed material was granulated using a granulator with a rotary flat matrix (BRICOL ZSJ25, Człuchów, Poland), and the maximum yield of granules was up to 100 kg·h −1 . The flat matrix had channels with a diameter (D) of 6 mm and a length (L) of 30 mm (L/D = 5.0). The matrix worked at 260 rpm, and the material flow was 1.6 kg·min −1 . Granulation tests began after the matrix was preheated to 85 °C. The temperature of the matrix was monitored throughout compaction and maintained constant (±5 °C). During granulation, the total energy consumption EC (Wh·kg −1 ) was quantified using a power analyzer (KYORITSU KEW6310) in order to determine the energy used for maximum granule production. The data from the analyzer were evaluated in KEW PQA MASTER software. The pellets obtained from the granulation process were assessed for their physical characteristics and quality. Laboratory samples were collected for measurements in accordance with ISO 18135 and prepared for testing in accordance with ISO 14780. The mean value of at least three replications was calculated. Geometrical parameters (diameter Di (mm) and length L (mm)) were measured in accordance with ISO 17829. Pellet samples with a minimum mass of 100 g were randomly chosen for the measurement of these parameters. All the parameters of the pellet were determined in triplicate, using a caliper with a measurement accuracy of ±0.1 mm, and the mass of the pellets was determined using a Radwag PS.6000.3Y laboratory balance, with a measurement accuracy of ±0.01 g. Pellet density (D, kg·m −3 ) was determined in accordance with ISO 18847 by the hydrostatic method, using a density determination kit for solids and liquids (KIT-85, Radwag, Radom, Poland). The kit included a Radwag X3.Y analytical balance with a measurement accuracy of ±0.0001 g. A mixture of water and a wetting agent (Triton X-100) at a concentration of 1.5 g·L −1 was used as the reference liquid. The mean value of five replications was calculated. The bulk density (DB, kg·m −3 ) was determined in accordance with ISO 17828. This parameter was measured using a 5 L vessel. The mean value of five replications was calculated. The mechanical durability (DU, %) of the pellets was measured using a Ligno pellet tester (NHP100, Tekpro, Norfolk, UK). For the Holmen test, samples of 100 g of pellets, collected after 60 s of sieving through a 2.5 mm sieve, were pneumatically conveyed around in a closed cylinder loop for 1 min. The DU was calculated as the ratio of the mass of pellets remaining in the chamber to the initial sample mass. The mean value of five replications was calculated. The energy density ED (GJ·m −3 ) of the pellet samples was determined by Equation (1): ED = LHV · BD (1), where LHV is the lower heating value (GJ·kg −1 ) and BD is the bulk density (kg·m −3 ).
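A minimal numerical sketch of Equation (1) and of the Holmen durability ratio is given below. The example inputs are values of the kind reported later in the paper (the CH-CC pellets) and are used here only to show the unit handling; this is an illustration, not the authors' computation script.

# Sketch of Equation (1) and the mechanical durability ratio.
# Example inputs correspond to the CH-CC pellets reported later in the paper.

def energy_density(lhv_mj_per_kg: float, bulk_density_kg_per_m3: float) -> float:
    """ED (GJ·m^-3) = LHV (GJ·kg^-1) * BD (kg·m^-3), per Equation (1)."""
    return (lhv_mj_per_kg / 1000.0) * bulk_density_kg_per_m3

def mechanical_durability(mass_after_g: float, mass_initial_g: float = 100.0) -> float:
    """DU (%) = mass remaining after the Holmen test / initial sample mass * 100."""
    return 100.0 * mass_after_g / mass_initial_g

print(round(energy_density(13.09, 566.9), 2))   # ~7.42 GJ·m^-3
print(mechanical_durability(89.6))              # 89.6 %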
The obtained results were subjected to statistical analysis. The normal distribution of the examined traits was verified by the Shapiro-Wilk test. The influence of the type of material on the individual examined traits was analyzed using one-way analysis of variance. The homogeneity of variance was checked by Levene's test, and significant differences between the tested groups of traits were checked by Tukey's honest significant difference (HSD) test. Statistical analysis was performed at a significance level (α) of 0.05. Results and Discussion Figure 2 shows the images of pellets obtained in the compaction process. Table 2 presents the results of the qualitative assessment of the obtained pellets. The results of the quality analysis of the pellets produced from residues resulting from the corn grain drying process (Table 2) showed that the pellet density ranged from 1.12 to 1.35 kg·dm −3 . This confirmed that the obtained pellets met ÖNORM M 7135, DIN 51731, and DIN Plus standards. The pellets made of CC, as well as of the CC-CH and CC-CG mixtures, were characterized by the lowest density, while the pellets made from CH had the highest density (17% higher than the lowest value). The results of the statistical analysis revealed significant differences in all the assessed traits between the analyzed raw materials. Regarding the density of the obtained pellets, similarities were found between CC and CC-CG, as well as among CC, CC-CH, and CG, and CG-CC, CH-CC, CG-CH, and CH-CG. Jiang et al. [55] obtained pellets of lower densities (0.863-1.217 kg·dm −3 ) from rice straw, Chinese fir, and camphor waste, sewage sludge, and their mixtures. Similarly, Zawiślak et al. [16] obtained pellets of lower densities (1.03-1.15 kg·dm −3 ) from soybean, pea, chamomile, and birch sawdust waste and their mixtures. Tan et al. [56] also obtained pellets of lower densities (1.03-1.11 kg·dm −3 ) from Camellia oleifera Abel. shell. In addition, Zhang T. et al. [11] determined lower density values (0.87-1.05 kg·dm −3 ) for other types of waste, such as apple tree residues and corn straw, mixed with frying oil in different proportions. By contrast, using various mixtures of paper, wood, and organic and plastic waste, Rezaei et al. [57] obtained pellets of higher densities, ranging from 1.15 up to 1.5 kg·dm −3 . Using garden waste, Pradhan et al. [2] produced pellets with a wide range of densities (0.5-1.4 kg·dm −3 ), depending on the moisture content of the compacted waste biomass. Yang et al. [24] obtained pellets characterized by lower values of density (0.6-1.0 kg·dm −3 ) from a mixture of corn straw and big bluestem, but the process of pelletizing was simultaneously supported by the process of torrefaction in their study. From commonly used wood waste, Garcia et al. [58] obtained pellets with comparable densities of 1.01-1.27 kg·dm −3 . On the other hand, Alam et al.
[14] obtained pellets of high densities, in the range of 1.03-1.27 kg·dm −3 , from medical waste and plastics, in admixture with rice straw after the pyrolysis process. Kirsten et al. [59] obtained pellets with comparable densities, in the range of 1.18-1.29 kg·dm −3 , from hay. In turn, by briquetting the remains of corn cobs, Orisaleye et al. [52] obtained pellets with densities in the range of 1-1.4 kg·dm −3 by varying the process parameters. These findings suggest that the densities of the pellets tested in this study did not differ from the values determined for pellets made from the biomass of plant origin, and thus, the tested material can be used in the agglomeration process. The bulk densities of the studied pellets varied widely, in the range of 444.6-610.9 kg·dm −3 . The lowest bulk density value was observed for pellets made from the CC-CH mixture, while the highest value was obtained for pellets made from the CH mixture (27% higher than the lowest value). In terms of bulk density, the pellets made of CG, CH, and the CG-CH mixture met only the SS 18 71 20 standard. The results of the statistical analysis showed similarities in bulk density between the CG and CG-CH pellets, as well as between the CH and CG-CH pellets. The values of bulk density recorded in this study were slightly lower than those determined for wood biomass in the studies by Garcia et al. [58] (470-671 kg·dm −3 ) and Stasiak et al. [60] (732 ± 10 kg·dm −3 ). Furthermore, the study by Alam et al. [14] reported higher density values for the mixtures of biomass with various types of waste (540-737 kg·dm −3 ). Moreover, Djatkov et al. [51] determined the bulk density of corn waste to be in the range of 547-719 kg·dm −3 , which was similar to the values determined by Miranda et al. [40] for corn cob waste (525-685 kg·dm −3 ). On the other hand, Niedziółka et al. [61] obtained comparable bulk density values for maize straw pellets (561-572 kg m −3 ). Higher values of bulk density were reported by Keppel et al. [62] for various cereal and wood straw waste materials (520-718 kg·dm −3 ). Kirsten The analysis of the mechanical strength of pellets made of CC, CG, and CH, as well as their mixtures, indicated high diversity (57.38-96.19%). The highest mechanical strength was recorded for the pellets made of CH, which was over 1.5-fold higher than that recorded for the pellets made of CG-CC. The results of the statistical analysis showed significant differences in mechanical strength between the studied materials. It was found that the durability of CG grain waste, and its mixtures with cobs and husks, was much lower compared to other materials and mixtures. The addition of CG reduced the durability of the pellets, which may be due to the chemical composition of this material and its physical properties, as well as mineral impurities, as indicated by other authors [51,54,62]. For further comparison of the obtained results with those of other works, mixtures with an undamaged grain base were considered. Comparable values of durability in the process of pelletizing, aided by torrefaction, were reported by Yang et al. [24] for corn straw and big bluestem (71-90.5%). Similarly, Ghasemi et al. [63] observed a wide range of durability (70-95%) for pellets made from different types of biomass. In turn, Ríos-Badrán et al. 
[18] obtained pellets with durability at a level of 95% and higher for rice husks, and in the range of 89.5-92% for a wheat husk and straw mixture, which were similar to the values determined by [16] for soybean, pea, chamomile, and birch sawdust waste and their mixtures (90.2-97.1%). In the study by Alam et al. [14], pellets obtained from a mixture of rice straw and medical waste were also characterized by a durability of 91.1-98.3%. Comparable values of durability were obtained by Ishii and Furuichi [64] for pellets made of rice straw. Niedziółka et al. [61] found that pellets obtained from rape, wheat, or maize straw had high durability at a level of 95.4-98.9%. Kirsten et al. [59] also reported that the durability of hay pellets was 95-97.5%. Similar high durability values, at a level of over 99%, were obtained by Zhang T. et al. [11] for a mixture of wood waste from apple trees, corn straw, and waste cooking oil, while Garcia et al. [58] reported durability values above 97% for various types of wood waste. For mixtures with different proportions of wood sawdust from leaf residue and ground nut shells, Rajput et al. [65] obtained widely varying durability values, ranging from 82.6 to 97.3%. However, for waste materials intended for reuse, such as cardboard, wood, and various types of plastics mixed with organic waste as a binder, Rezaei et al. [57] obtained higher durability values of 93.1-99.9%. Thus, it can be concluded that pellets produced from mixtures have low durability, which was confirmed by the results presented in Table 2 and by the studies of Zawiślak et al. [16], Yang et al. [24], and Alam et al. [14]. The evaluation of the length of the obtained pellets showed that the pellets had lengths of 18.5-37 mm. The shortest pellet was obtained from CG, and the longest from the CC-CG mixture. The lengths of the pellets obtained from CG, CG-CC, CG-CH, CH-CG, and CC-CH were similar. The pellets obtained in this study differed significantly in length relative to each other, while in the study by Ríos-Badrán et al., the pellets made from rice husk and its mixture with wheat straw had similar length values of 24-26 mm [18]. However, the introduction of mixtures of different types of biomass as material for pelletizing increases the differences in obtained test results, as observed in the study by Zawiślak et al. [16], in which lengths ranging from 24.4 to 31.3 mm were obtained. On the other hand, Garcia et al. [58] found that pelletizing wood waste in combination with food industry waste allowed achieving a smaller length distribution of pellets, in the range of 21.7-27.1 mm. By contrast, much shorter lengths of pellets, with slight differences, were obtained by Tan et al. [56] from waste in the form of C. oleifera Abel. shell (16.1-16.8 mm). The pellets made of homogeneous materials were characterized by similar diameters, ranging from 6.20 (for CG) to 6.25 mm (for CC). The pellets obtained from the mixtures of the tested materials had larger diameters than those made from base materials. Pellets with the highest diameters were obtained from the mixtures containing husks with other materials, while slightly lower values were obtained for the pellets made from mixtures based on CC. The increase in the diameters of the pellets can be attributed to the content of the fibers in agglomerated particles, which cause elasticity and expansion after the agglomeration process. 
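The pairwise "similarities" and "significant differences" between materials reported in this section for density, bulk density, durability, length, and diameter follow the procedure given in the statistical analysis description (Shapiro-Wilk, Levene's test, one-way ANOVA, and Tukey's HSD at α = 0.05). A minimal sketch of that workflow is shown below; the file name and column layout (one row per replicate, with columns "material" and "value") are assumptions, not the authors' actual script.

# Sketch of the trait comparison procedure described in the statistical
# analysis section. The data layout (columns "material", "value") is assumed.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("pellet_trait_replicates.csv")      # hypothetical file

groups = [g["value"].to_numpy() for _, g in df.groupby("material")]
print(stats.shapiro(df["value"]))     # normality (Shapiro-Wilk)
print(stats.levene(*groups))          # homogeneity of variance (Levene)
print(stats.f_oneway(*groups))        # one-way ANOVA

# Pairwise comparisons at alpha = 0.05; materials not separated by Tukey's
# HSD are the ones reported as "similar" in the text.
print(pairwise_tukeyhsd(df["value"], df["material"], alpha=0.05))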
An analogous conclusion regarding the influence of the applied biomass on the diameter of pellets was presented by Ríos-Badrán et al. [18], who obtained a diameter of 6.01 mm for pellets from rice husks, which was lower compared to that of the pellets made from corn husks. However, the diameters of pellets obtained from a mixture of rice husks with wheat straw were similar (6.30-6.38 mm) to those of the pellets made from corn waste. Due to their elasticity, both corn husks and wheat straw form pellets with larger diameters. After the agglomeration process, these materials expand, which increases the diameter of the pellets. Similar values of diameter were obtained for pellets of wood and food waste in the study by Garcia et al. [58] (6.18-6.47 mm). Zawiślak et al. [16] obtained pellets with higher diameters from waste materials pelletized in different proportions (8.30-8.50 mm) using an 8 mm matrix. However, when using a homogeneous material containing cellulose fibers, the content of lignin and moisture in the material is important. In the study by Tan et al. [56], when C. oleifera Abel. shell was pelleted using a 7 mm matrix, pellets with diameters of 7.07-7.18 mm were obtained, while pellets of higher diameter values were obtained from non-humidified materials. The results of the statistical analysis showed similarities in the diameter of the obtained pellets, whereas the pellets from CG and CH-CC showed significant differences in this parameter. Table 3 shows the results of the evaluation of energy consumption for pellet production from corn grain drying residues. Table 3. Results of the evaluation of energy consumption for the process of pellet production from corn grain drying residues. The results of the analysis of energy consumption during pelletizing showed significant differences in the energy input required to obtain pellets, which ranged from 47.6 to 78.6 Wh·kg −1 . The lowest energy demand was found for pellet production from CH, while CG was the most energy-consuming material in the process of pressure agglomeration, for which the increase in energy input exceeded 65%. It should be noted that the addition of corn husks positively influenced the energy consumption of the pelletizing process. In the case of the pelletizing of the CC and CH mixture, the energy consumption was lower by 12%, and for the CH and CG mixture, it was lower by 16%. In turn, the addition of damaged grains to cobs increased the energy demand by 11%, and to husks, by over 40%. This increase in energy consumption was primarily due to the properties of the biomass used. When subject to pressure, flexible husks adopt the shape of other molecules (CC and CG), and the friction between agglomerated particles is also lower due to the husks' smooth surface. In the case of grains, individual particles are uneven and sharp [37], often contaminated with mineral substances (dust and soil remnants), and clearly visible, which results in increased friction between biomass particles, as well as between agglomerated particles and matrix walls. The results of the statistical analysis showed significant differences between the tested materials in energy consumption during the pelletizing process. A comparison of the results of the analysis of the energy consumption of the pelletizing process indicated that the values were lower than those reported by Garcia et al. [58]. 
In the cited study, the authors performed densification on a mixture of pine sawdust and waste obtained from the food and forestry industry in the form of coffee residues, cocoa shells, grape marc, and pinecone fragments, and the energy consumed for their pelletizing was in the range of 80-250 Wh·kg −1 . In addition, Kirsten et al. [59] observed a higher energy demand (157-290 Wh·kg −1 ) for the hay compaction process, taking into account the grinding of waste into pieces of various sizes (2-6 mm). In the study on waste resulting from the pelletizing of corn cobs [40], the energy consumption was at the level of 80-400 Wh·kg −1 . However, the authors emphasized that the process was not optimized, and the high values refer to the lower efficiency of the pelletizing process, which is related to the experiment they conducted. Zawiślak et al. [16] determined similar levels of energy consumption for the compaction of biomass in the form of chamomile and birch sawdust waste (108 and 100 Wh·kg −1 , respectively). The reason for this similarity could be the low humidity of these materials. However, for mixtures of these materials with pea and soya waste in various proportions, the energy consumption was lower by 38.4-44.7 Wh·kg −1 , which is due to the content of fats in the materials added. A relatively low level of energy consumption (10 Wh·kg −1 ) was reported by Tan et al. [56] for the compaction of waste in the form of C. oleifera Abel. shell. In this case, the reduction of the compaction energy was due to the high content of lignocellulose in the raw material, as well as to the corresponding improvement of the plastic properties of lignocellulose by the proper hydration of the surface and the interior of the particles of the agglomerated material. The energy density of the pellets obtained in this study ranged from 5.81 (CC-CH) to 8.19 GJ·m −3 (CC). The analysis of this parameter showed that the largest amount of energy was accumulated in the volume of pellets from CC, with average amounts in the pellets from CC-CG and CH-CC, and the lowest in the pellets from CC-CH. The differences in energy density between the pellets from CG, CG-CC, and CG-CH, as well as from CH-CG and CH-CC, were not statistically significant. Moreover, it was noted that, in the case of mixtures, the content of CC had an apparent effect on the energy density of the tested pellets. With regard to the results of other authors, it should be emphasized that different methods were adopted for determining energy density. The calculation carried out by Garcia et al. [59] corresponded to the methodology adopted in the present study. The authors obtained higher energy density values, in the range of 7.7-12.0 GJ·m −3 , which was due to the use of wood waste, in combination with waste from the agri-food industry, as raw material for pellet production. However, considering biomass combustion heat and pellet density for the calculation of lower heating values, as was performed by Jiang et al. [55] and Zhang T. et al. [11], for pellets obtained from corn drying process waste, the energy density was found to range from 14.8 (for CH) to 21.3 GJ·m −3 (for CG), and the values were intermediate for CC and all the mixtures. All pellets from the mixtures containing CG had a higher energy density, and the addition of CH contributed to the decrease in this parameter. A similar energy density in the range of 15.5-20.4 GJ·m −3 was obtained by Jiang et al. [55] for pellets made from a mixture of waste biomass and sewage sludge, and by Zhang T. 
et al. [11] for a mixture of wood waste from apple trees, corn straw, and frying oil. Figure 3 shows the energy parameters of the obtained pellets in relation to the energy needed for their production from the tested raw materials. The high density of pellets generally leads to, among other things, a reduction in the demand for storage volume, as well as a lower level of combustion chamber filling at a constant fuel weight. In this study, the pellets made from the CG-CH mixture, at the average lower heating value (12.90 MJ·kg −1 ), showed a higher density (1.29 kg·dm −3 ) and
Considering the above, pellets made from the CH-CC mixture produced from the tested raw materials can be considered as a fuel with good quality parameters, with a satisfactory lower heating value (13.09 MJ·kg−1) and low energy consumption in the manufacturing process (55.3 Wh·kg−1), as well as a high density (1.24 kg·dm−3) and high mechanical strength (89%). These features allow a positive energy balance for pellets made from the CH-CC mixture due to the positive ratio of lower heating value to energy consumption. Among the tested pellets, those made from CH showed the worst properties as a fuel due to their low lower heating value (9.69 MJ·kg−1), despite the lowest energy input required for their production (47.6 Wh·kg−1). The use of CC pellets as a biofuel is also advantageous due to their favorable lower heating value (14.94 MJ·kg−1), as well as high mechanical strength (88.65%) and low energy consumption for pelletizing (62.4 Wh·kg−1). Despite the low density of these pellets (1.14 kg·dm−3), they can be a potentially attractive biofuel due to their other favorable characteristics. Figure 4 shows the quality parameters of the obtained pellets in terms of the energy properties of the examined biomass. Analyzing the results of the study presented in Figure 5a, it was found that the pellets made of the CH-CC mixture were the most beneficial in terms of energy use. Their high mechanical strength (89.64%) and density (1.24 kg·dm−3), together with the average lower heating value (13.09 MJ·kg−1), indicated that they can be an optimal biofuel. Additionally, the high bulk density of these pellets (566.9 kg·m−3) and high lower heating value testify to their efficiency as a biofuel.
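The "positive ratio of lower heating value to energy consumption" mentioned above can be made explicit with a short calculation; the only subtlety is the unit conversion from Wh to MJ. This is a sketch using only the values quoted in this section.

```python
# Minimal sketch: ratio of energy delivered (lower heating value) to the electric
# energy spent on pelletizing, for the materials with both values quoted in the text.

WH_TO_MJ = 3600.0 / 1e6  # 1 Wh = 3600 J = 0.0036 MJ

materials = {
    # lower heating value [MJ/kg], pelletizing energy [Wh/kg]
    "CH-CC": (13.09, 55.3),
    "CH":    (9.69,  47.6),
    "CC":    (14.94, 62.4),
}

for name, (lhv, e_pellet_wh) in materials.items():
    e_pellet_mj = e_pellet_wh * WH_TO_MJ
    ratio = lhv / e_pellet_mj
    print(f"{name}: fuel contains about {ratio:.0f}x the energy spent on pelletizing")
# All ratios come out between roughly 56 and 67, i.e. the pelletizing step consumes
# only a small percentage of the energy contained in the resulting fuel.
```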
Although favorable physical properties and a relatively high lower heating value were found for the pellets produced from CC-CH, CC, and CC-CG, these had the lowest density (<1.15 kg·dm−3), which indicated their low susceptibility to compaction during pressure agglomeration. On the other hand, corn husk pellets showed very good physical properties but a low lower heating value (9.69 MJ·kg−1), and hence were deemed unattractive. Similarly, due to their low mechanical strength, the pellets made from CG-CC, CG-CH, and CG, despite having a satisfactory lower heating value, were regarded as unattractive fuel, as these materials did not meet the quality criteria of the energy market, and they can only be used for the pellet producers' own needs. Figure 5 shows the quality parameters of the obtained pellets in terms of their properties related to storage and distribution processes. The least advantageous, from the perspective of the logistics of pellet distribution to the recipient, were the pellets made from the CC-CH mixture. Despite the low energy input for their production (54.7 Wh·kg−1) and the high mechanical strength of the resulting granules (93.52%), these pellets had a low bulk density (444.6 kg·m−3). Therefore, the transportation of a certain mass of these pellets would require a larger loading and storage volume, which may be reflected in the profitability of distribution. However, the tested mixture could be used for local needs, such as in grain-drying plants. The most advantageous, from the perspective of the distribution logistics of biofuel on the energy market, were the pellets made from CH. The lowest energy input for the production of these pellets (47.6 Wh·kg−1) would be reflected in profits from their sale. In addition, the high mechanical strength of these pellets (96.19%) would allow minimizing their crushing during transport, as well as during handling and storage. It is also worth noting that these pellets had the highest bulk density (610.9 kg·m−3) among the ones tested. Such high bulk density would allow a large amount of pellets to be stored and transported in limited cargo space. In turn, the optimal quality parameters for the distribution process were found for the pellets made from CC and CH-CC.
The low energy input required for the production of these pellets (62.4 and 55.3 Wh·kg−1, respectively), as well as their satisfactory mechanical strength (88.65 and 89.64%, respectively) and average bulk density (547.9 and 566.9 kg·m−3, respectively), pointed out that these materials are preferred for distribution and have good physical properties.

Conclusions

The findings of this study suggest that the use of waste resulting from the corn grain drying process in the form of pellets may allow obtaining granules with different quality characteristics. Taking into account the density (1.35 kg·dm−3), mechanical strength (96.19%), and bulk density (610.9 kg·m−3), the pellets made from CH were the most advantageous, while those made from CH-CG and CC-CG were beneficial in terms of pellet diameter (6.46 mm) and length (37 mm), respectively. The study proved that pellets produced from mixtures have low durability.
During the production of pellets, it is important to ensure that the energy consumed for the granulation process is low, so that the process itself is energetically justified. The assessment of the tested pellets showed that the most energy-consuming material during pelletizing was CG (78.6 Wh·kg −1 ), while CH was the least energy-consuming (47.6 Wh·kg −1 ). The study also proved that the fuel characterized by good quality parameters was the pellet made of the CH-CC mixture, with a satisfactory lower heating value (13.09 MJ·kg −1 ) and low energy consumption in the manufacturing process (55.3 Wh·kg −1 ). Moreover, data analysis revealed that the obtained pellets had a high density (1.24 kg·dm −3 ) and high mechanical strength (89%), which are important for their distribution and handling. Regarding the quality parameters for the distribution process, the pellets made of CC and CH-CC were found to be the most favorable, with a relatively low energy input required for their production (62.4 and 55.3 Wh·kg −1 , respectively) and high mechanical strength (88.65 and 89.64%, respectively). In addition, the bulk density of these pellets (547.9 and 566.9 kg·m −3 , respectively) confirmed their good physical properties, which are critical for efficient fuel distribution. However, the CC-CH mixture was identified as the least beneficial in terms of pellet distribution logistics. Although this material was characterized by a low energy input for pellet production (54.7 Wh·kg −1 ), and the obtained pellets had a high mechanical strength (93.52%), the assessment showed that the pellets had a low bulk density (444.6 kg·m −3 ). The results presented in this paper may form the basis of further works undertaken to optimize the compaction process for selected raw materials, in order to obtain pellets of the highest quality. Funding: This research was performed within the project "Developing of innovative air purification methods for grain and seed drying, along with pollutant emissions reduction-ECO-Dryer", cofunded by the National Center for Research and Development (NCBiR), within the framework of the Strategic Research and Development Program "Environment, Agriculture and Forestry"-BIOSTRATEG3/344490/13/NCBR/2018. Publication was co-financed with the project entitled 'Excellent science' program of the Ministry of Education and Science as a part of the contract No. DNK/513265/2021 Role of agriculture in implementing concept of sustainable food system "from field to table". Informed Consent Statement: Not applicable. Data Availability Statement: Not applicable. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Black holes and fundamental fields: hair, kicks and a gravitational "Magnus" effect. Scalar fields pervade theoretical physics and are a fundamental ingredient to solve the dark matter problem, to realize the Peccei-Quinn mechanism in QCD or the string-axiverse scenario. They are also a useful proxy for more complex matter interactions, such as accretion disks or matter in extreme conditions. Here, we study the collision between scalar "clouds" and rotating black holes. For the first time we are able to compare analytic estimates and strong-field, nonlinear numerical calculations for this problem. As the black hole pierces through the cloud it accretes according to the Bondi-Hoyle prediction, but is deflected through a purely kinematic gravitational "anti-Magnus" effect, which we predict to be present also during the interaction of black holes with accretion disks. After the interaction is over, we find large recoil velocities in the transverse direction. The end-state of the process belongs to the vacuum Kerr family if the scalar is massless, but can be a hairy black hole when the fundamental scalar is massive. I. Introduction. Black holes (BHs) are known to be abundant objects in our universe, with a major role in the evolution of galaxies and star formation. As truly relativistic objects, they are powerful sources of gravitational waves and key players in the nascent field of gravitational wave astronomy. Astrophysical observations of BHs will give us unprecedented information about our universe, by mapping the BH mass and spin with exquisite precision [1][2][3][4]; by testing General Relativity in the strong-field regime [4][5][6]; by constraining the dark-energy equation of state [7]; or by providing information on the dark matter distribution around BHs [8,9]. Supermassive BHs also have the unexpected ability to provide information on fundamental ultra-light bosonic degrees of freedom, generic predictions of beyond the standard model physics and of modified gravity theories [10][11][12]. For boson masses in the range 10^−21 eV ≲ µS ≲ 10^−8 eV, the Compton wavelength of these fields is of the order of the BH size, the gravitational coupling of these two objects is strongest, and long-lived quasibound states arise [13][14][15]. Depending on the efficiency with which the bosonic cloud is accreted, one might observe gravitational wave "light houses" or find gaps in the BH-Regge plane [11,[15][16][17][18][19]. The end-state of the superradiant instability is not known, but the prospect of finding long-lived (or even truly stationary) "hairy" BH solutions deserves all the attention possible [20][21][22]. Fundamental fields are also a useful proxy for more complex interactions and matter. In this context, the interaction between BHs and bosonic fields can teach us about BH formation from gravitational collapse, interaction with accretion disks, magnetic fields, etc. The rich phenomenology of such natural theories prompted a flurry of activity in the field, mostly confined to the linearized regime where the spacetime is a fixed Kerr BH background. The purpose of this Letter is to take the first step towards understanding the nonlinear development of the interaction between BHs and fundamental fields. Unless stated otherwise, we use geometrical units G = c = 1. II. Numerical Setup and Analysis Tools. We consider a minimally coupled, gravitating, complex scalar field Φ of mass µS described by the action, where (4)R is the four-dimensional Ricci scalar.
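The action itself does not survive in this extraction. A standard form of the Einstein-Klein-Gordon action for a minimally coupled complex scalar, consistent with the description above, is sketched here; the normalization of the kinetic and mass terms is a convention that varies between references, so this should be read as an assumed form rather than the authors' exact expression.

```latex
% Assumed form (conventions vary): gravitating, minimally coupled complex scalar of mass mu_S
S = \int \mathrm{d}^4x \, \sqrt{-g}
    \left[ \frac{{}^{(4)}R}{16\pi}
         - g^{\mu\nu}\,\partial_\mu\Phi^{*}\,\partial_\nu\Phi
         - \mu_S^{2}\,\Phi^{*}\Phi \right],
\qquad G = c = 1 .
```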
We employ standard Numerical Relativity techniques based on the 3 + 1 splitting to solve the fully nonlinear problem [23,24]. In this approach the time evolution of the 3-metric γ ij and scalar field Φ are governed by where the extrinsic curvature K ij and Π are their conjugated momenta, and d t = ∂ t − L β . The 3 + 1 decomposition of the equations of motion yields time evolution equations for the extrinsic curvature and scalar field momentum where R ij and R are associated to the 3-dimensional metric and ρ, j i , S ij are, respectively, the energy density, energy-momentum flux and spatial components of the energy momentum tensor. In practice, we employ the Baumgarte-Shapiro-Shibata-Nakamura formulation [25,26] of the evolution equations (2) and (3), together with the moving puncture gauge [27,28]. We solve the Cauchy problem for Einstein's equations using the COSMOS code [20]. Time evolution is realized by a 4th order Runge-Kutta method, spatial derivatives are computed through a 4th order finite differencing method in Cartesian grids. The Adaptive Mesh Refinement algorithm of moving boxes is employed in order to keep a good resolution near the BH [29][30][31]. Apparent horizons are tracked using the methods outlined in Refs. [32,33]. Parallelization is implemented with OpenMP. We measure the scalar field amplitude Φ and the Newman-Penrose scalar Ψ 4 encoding the gravitational radiation, at coordinate spheres of fixed radius r ex , where we project them with spherical or s = −2 spin-weighted spherical harmonics. We estimate the numerical discretization error to be of order 6% in both the scalar and gravitational waveforms. In addition, we monitor the apparent horizon (AH) area A AH , the irreducible mass and the ratio of equatorial to polar circumferences to estimate the BH mass and spin [34,35]. III. Initial data construction. In general, one needs appropriate initial data to perform reliable and realistic simulations within numerical relativity. Following the initial data construction in Refs. [20,36], we simplify the constraint equations by the conformal transformation where η ij is the flat metric. Assuming conformal and asymptotic flatness, the maximal slicing condition and setting scalar field Φ(t = 0) = 0, the constraints (4) become Let us consider first a single, non-rotating BH (Ã ij = 0) and a nonzero scalar field. The momentum constraints (6b) are trivially satisfied and the Hamiltonian constraint (6a) yields Using the same ansatz for the Gaussian-type spherical scalar wave packet described in Ref. [20], where we take the radial coordinate r = x 2 + y 2 + z 2 , the location of BH is described by r BH ≡ (x − x BH ) 2 + y 2 + z 2 and M 0 denotes the BH bare mass parameter. A regular, analytical, solution to the Hamiltonian constraint is then where we have imposed that u 0 → 0 at r = 0. Thus, eqs.(8)-(9) describe a spherically symmetric scalar cloud and a BH a distance x BH apart. Addition of linear and angular momenta complicates the procedure, but can be done as follows. The momentum constraints (6b) can be also solved analytically and we obtain the so-called Bowen-York extrinsic curvaturẽ where P i , S i and n i are the momentum, the spin and the unit normal vector n i ≡ x i /r, respectively. The remaining Hamiltonian constraint is then given by To solve the Hamiltonian constraint, we use the ansatz where M 0 is the BH bare mass parameter and u(x i ) is a regular function. 
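Several expressions referenced in this section (the conformal transformation, the puncture-type ansatz for the conformal factor, and the Bowen-York extrinsic curvature) are not reproduced in the extracted text. The standard forms consistent with the surrounding description are collected below; the exponent of the conformal factor and the index conventions are the usual textbook choices and are assumptions here, not a verbatim copy of the paper.

```latex
% Conformal decomposition with a flat background metric eta_ij (assumed standard form),
% and the puncture-type ansatz for a single BH of bare mass M_0:
\gamma_{ij} = \psi^{4}\,\eta_{ij}, \qquad
\psi = 1 + \frac{M_0}{2\,r_{\rm BH}} + u(x^{i}), \quad u \ \text{regular everywhere}.

% Bowen--York solution of the momentum constraints for a puncture with linear momentum P^i
% and spin S^i, with n^i = x^i/r the unit radial vector (index conventions vary):
\tilde{A}^{ij} = \frac{3}{2 r^{2}}
    \left[ P^{i} n^{j} + P^{j} n^{i} - \left(\eta^{ij} - n^{i} n^{j}\right) P_{k} n^{k} \right]
  + \frac{3}{r^{3}}
    \left[ \epsilon^{kil} S_{l}\, n_{k}\, n^{j} + \epsilon^{kjl} S_{l}\, n_{k}\, n^{i} \right].
```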
We set a boosted, rotating BH initially at the origin and a scalar pulse located at x = x0, a distance x0 from the BH. The scalar cloud is again described by the Gaussian profile, where AP and w denote the amplitude and width of the scalar cloud and r0 = [(x − x0)^2 + y^2 + z^2]^{1/2}. Eq. (11) becomes an elliptic partial differential equation for u which is regular everywhere and can be solved with a common elliptic-equation solver [37]. III. Collision of BHs with scalar "clouds". We have evolved a variety of different initial configurations, varying the BH mass, momentum, and spin, and varying the scalar-field width, location and mass µS. The collision process is gravity-dominated, and we find that timescales are well approximated by Newtonian free-fall estimates. The evolution proceeds in different stages, depending on the scalar-field mass. Consider the massless or small µS M regime first. For low scalar-field amplitudes, a fraction of the initial scalar cloud is unbounded and scatters to infinity. As we increase the Gaussian amplitude, we find that the scalar field starts self-gravitating and a larger fraction is accreted by the BH. These features are summarized in Fig. 1, specialized to a width w = 20 M0, which we take as representative for the rest of this work. For this setup, typically 99% of the scalar-field mass escapes to infinity: the "cloud" is initially located far away from the BH and is dispersing away from it. Our results are in quantitative agreement with Bondi-Hoyle accretion rates; they depend only weakly on BH spin, but scale like the square of the scalar field amplitude, as expected. There are two pronounced accretion phases, related to the scalar cloud evolution and its density profile, as shown in the lower panel of Fig. 1. The final state is a Kerr BH in vacuum. The accretion and evolution of configurations where the fundamental scalar is massive show qualitatively different behavior. Massive fields are harder to disperse and tend to remain bound. Accordingly, we find a substantially larger amount of scalar field being accreted onto the BH in the massive-scalar case. An intriguing alternative to this scenario is that, if the BH is rotating, superradiance prevents absorption of the scalar at the horizon and instead forms a hairy BH, with a non-trivial external scalar field configuration and quadrupole moment [21,22]. Even in the absence of rotation, extremely long-lived modes have been shown to be possible [13][14][15][20]. Our results point to a possible formation scenario: a cloud of scalar field scattering off a non-rotating BH leaves behind, at late times, a BH surrounded by an external long-lived scalar condensate. Snapshots of the evolution for a scalar with µS M0 = 0.4 are shown in Fig. 2 (FIG. 2 caption: snapshots of the accretion flow onto a non-rotating BH in the x-y plane; the field is initially described by a Gaussian with A0 M0 = 0.2, xBH = −8 M0, w = 6 M0 and has a mass term µS M0 = 0.4; colors depict the intensity of the scalar field; at late times the configuration settles to a very long-lived dipole condensate outside the horizon, extending to distances of order ∼ 20 M0 [15]). The pattern oscillates with a frequency compatible with linearized calculations, which also describe well the spatial extent of the scalar condensate. In other words, we have strong evidence of a possible mechanism for the formation of what for practical purposes is a "hairy" BH in asymptotically flat spacetime. IV. The (anti-)"Magnus" effect in BH physics. As the BH pierces through the cloud, accretion of matter ensues.
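The Bondi-Hoyle scaling invoked in the previous section can be illustrated with a short numerical sketch. The classic Bondi-Hoyle-Lyttleton rate is used here as a stand-in for the accretion prescription the authors compare against; the geometric prefactor (here 4π) and the treatment of the effective sound speed differ between references, so the sketch is indicative only.

```python
import math

# Minimal sketch of the Bondi-Hoyle-Lyttleton accretion rate in geometrical units (G = c = 1):
#   dM/dt ~ 4 * pi * rho * M^2 / (v^2 + cs^2)^(3/2)
# rho : ambient energy density of the scalar "cloud" (scales as the amplitude squared,
#       consistent with the quadratic amplitude scaling reported in the text)
# v   : relative velocity between BH and cloud; cs : effective sound/propagation speed.

def bondi_hoyle_rate(mass: float, rho: float, v: float, cs: float) -> float:
    return 4.0 * math.pi * rho * mass**2 / (v**2 + cs**2) ** 1.5

M = 1.0
for amplitude in (0.1, 0.2, 0.4):       # illustrative cloud amplitudes
    rho = amplitude**2                  # density ~ amplitude^2 (up to a constant)
    print(amplitude, bondi_hoyle_rate(M, rho, v=0.2, cs=0.3))
# Doubling the amplitude quadruples the accretion rate, which is the scaling noted above.
```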
It is well known that the absorption cross-section for co- and counter-rotating particles and waves is different for spinning BHs [38,39], causing a kinematic drift of general-relativistic origin in the direction perpendicular to the flow. Consider the BH initially at the center of the reference frame, spinning with angular momentum J aligned with the z-axis and moving in the x-direction through the scalar field cloud. The BH accretes mass as it moves, in a spin-dependent manner. For low-velocity collisions, accretion is governed by the marginally bound circular orbits, of radii R± in "Brill-Lindquist" coordinates [38]; the upper (lower) sign applies to co- (counter-) rotating orbits. Modeled in this way, as the BH moves through the medium with relative velocity v, it sweeps up a distorted, non-symmetric "tube" composed of two half-cylinders with radii R−, R+. Finding the shape of this distorted tube is an interesting geometrical problem, which in its simplest version amounts to equating the centroid of the projected figure to the BH location, a problem similar in many respects to that found in two-dimensional rocket motion [40]. A simple estimate can be obtained by noting that when the BH moves a distance δx, the y-position of the center of mass of this distorted cylinder is located at ∼ 2ρ(R_2^3 − R_1^3)δx/(3M), with ρ the energy density of the scalar configuration. Thus, after accretion of the material the BH has to sit in the CM, at δy ≃ 2ρv(R_−^3 − R_+^3)δx/(3M) ∼ 100 M a ρ δx. This new effect in BH physics, triggered by asymmetric accretion, is responsible for the motion in the direction orthogonal to the initial BH velocity. In this respect, it is similar to the "Magnus effect" in fluid dynamics, a well-known corollary of hydrodynamics with important consequences in sports, aeronautics, etc. [41]. However, (i) the original Magnus effect is a consequence of delicate boundary-layer effects close to the body's surface, whereas the BH drift we described is a pure consequence of spacetime drag and kinematics, and (ii) the Magnus effect results, generically but not always, in a motion in the y-direction but in the opposite sense to the BH drift that we predict via spacetime drag and kinematics. Asymmetric accretion is potentially concurrent with three other effects present in our simulations. The first is an overall momentum in the transverse direction triggered by scalar or gravitational waves, potentially displacing the entire BH+cloud system. The second competing effect is the frame-dragging of the scalar cloud, which again by momentum conservation would rigidly rotate the system. Both effects could mask the asymmetric accretion deflection. However, we find that whereas the asymmetric accretion is expected to scale with the scalar cloud density (for a fixed total mass, say), this is no longer the case for the other two competing effects. Finally, non-homogeneous media would give rise to asymmetric accretion and a consequent transverse motion which would be rotation independent. In our setup the BH lies along the symmetry axis and such an effect is absent. Our numerical results are summarized in Fig. 3. The upper panel shows the puncture position along the y-axis as a function of time. These results are gauge-dependent, and a simple overall coordinate shift in the negative y-direction would mask it.
We have therefore also measured the proper distance in the y-direction from the BH to the outer boundaries of the cloud, defined as the points for which the density decreases to 1% of its central value. If the BH really moves downwards with respect to the cloud, then the distance to the upper part of the cloud should be larger than the distance to the lower part of the cloud. This is in fact the overall pattern seen in the middle panel of Fig. 3. Thus, an overall shift of the system does not explain our numerical results. The lower panel of Fig. 3 shows the time evolution of the total angular momentum J of the BH. In line with the predicted preferred absorption of counter-rotating particles, J decreases. Finally, our results are proportional to density, spin and velocity, as expected for the asymmetric accretion scenario we propose. This effect is not a particularity of scalar fields, but a rather general feature of BH interaction with matter. An order-of-magnitude estimate can be made for astrophysically realistic sources, where we take as reference value the density of thin accretion disks close to supermassive BHs; here f_Edd is the Eddington luminosity, α is the disk's viscosity parameter, and r̃ is the distance of the BH from the center of the disk in units of GM/c^2 [46,47]. These numbers are encouraging, and open up the possibility to actually observe the gravitational anti-Magnus effect. This deflection is all the more interesting as it can in addition provide evidence for the existence of horizons: compact stars or any other object with a surface will presumably be subjected to an ordinary Magnus effect. We also observe large "kicks" in the transverse direction after the BH has ceased interacting with the scalar cloud. These kicks, presumably imparted by gravitational waves, are already apparent in the puncture position. Their magnitude depends on the scalar cloud amplitude and width. Further exploration of this effect is necessary to understand whether it is a viable recoil mechanism in realistic astrophysical scenarios. V. Conclusions. We reported on the first steps towards understanding the interaction between fundamental fields and BHs. Much remains to be understood, but we think our setup will be useful in exploring fundamental issues such as fully nonlinear investigations of gravitational drag, turbulent wakes, spin alignment and spin precession during the interaction of BHs with matter.
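The transverse-drift estimate quoted in Section IV, δy ≃ 2ρv(R_−^3 − R_+^3)δx/(3M) ∼ 100 M a ρ δx, can be evaluated numerically. The sketch below uses the Kerr marginally bound circular-orbit radii in Boyer-Lindquist form as a stand-in for the "Brill-Lindquist" radii the authors refer to, so the prefactor is only indicative; everything is in geometrical units with G = c = 1.

```python
import math

# Sketch of the transverse "anti-Magnus" drift per unit distance travelled,
#   dy/dx ~ 2 * rho * v * (R_minus^3 - R_plus^3) / (3 * M),
# using the Boyer-Lindquist marginally bound circular-orbit radii of a Kerr BH,
#   R_mb = 2M -/+ a + 2*sqrt(M*(M -/+ a))   (co-/counter-rotating),
# as an assumed proxy for the radii R_+/R_- quoted in the text.

def r_marginally_bound(M: float, a: float, corotating: bool) -> float:
    sign = -1.0 if corotating else 1.0   # co-rotating orbits sit closer to the BH
    return 2.0 * M + sign * a + 2.0 * math.sqrt(M * (M + sign * a))

def drift_per_unit_distance(M: float, a: float, rho: float, v: float) -> float:
    r_plus = r_marginally_bound(M, a, corotating=True)     # smaller, co-rotating radius
    r_minus = r_marginally_bound(M, a, corotating=False)   # larger, counter-rotating radius
    return 2.0 * rho * v * (r_minus**3 - r_plus**3) / (3.0 * M)

M, rho, v = 1.0, 1e-4, 1.0
for a in (0.1, 0.3, 0.5):
    print(a, drift_per_unit_distance(M, a, rho, v))
# The result is of order 1e2 * M * a * rho per unit distance travelled, consistent
# with the order-of-magnitude estimate quoted in the text.
```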
Rapid and sustained environmental responses to global warming: the Paleocene–Eocene Thermal Maximum in the eastern North Sea . The Paleocene–Eocene Thermal Maximum (PETM; ∼ 55.9 Ma) was a period of rapid and sustained global warming associated with significant carbon emissions. It coincided with the North Atlantic opening and emplacement of the North Atlantic Igneous Province (NAIP), suggesting a possible causal relationship. Only a very limited number of PETM studies exist from the North Sea, despite its ideal position for tracking the impact of both changing climate and NAIP activity. Here we present sedimentological, mineralogical, and geochemical proxy data from Denmark in the eastern North Sea, exploring the environmental response to the PETM. An increase in the chemical index of alteration and a kaolinite content up to 50 % of the clay fraction indicate an influx of terrestrial input shortly after the PETM onset and during the recovery, likely due to an intensified hydrological cycle. The volcanically derived zeolite and smectite minerals comprise up to 36 % and 90 % of the bulk and clay mineralogy respectively, highlighting the NAIP’s importance as a sediment source for the North Sea and in increasing the rate of silicate weathering during the PETM. X-Ray fluorescence element core scans also reveal possible hitherto unknown NAIP ash deposition both prior to and during the PETM. Geochemical proxies show that an anoxic to sulfidic environment persisted during the PETM, particularly in the upper half of the PETM body with high concentrations of molybdenum (Mo EF > 30), uranium (U EF up to 5), sulfur ( ∼ 4 wt %), and pyrite ( ∼ 7 % of bulk). At the same time, export productivity and organic-matter burial reached its maximum intensity. These new records reveal that negative feedback mechanisms including silicate weathering and organic carbon sequestration rapidly began to counteract the carbon cycle perturbations and temperature increase and remained active throughout the PETM. This study highlights the importance of shelf sections in tracking the environmental response to the PETM climatic changes and as carbon sinks driving the PETM recovery. Introduction The early Cenozoic was a period characterized by longterm warming, punctuated by transient periods of rapid global hyperthermal events (Zachos et al., 2008;Hollis et al., 2012;Cramwinckel et al., 2018). The most pronounced of these periods was the Paleocene-Eocene Thermal Maximum (PETM; ∼ 55.9 Ma;Kennett and Stott, 1991;Thomas and Shackleton, 1996;Westerhold et al., 2018), during which global surface temperatures rose rapidly by 4-5 • C (Thomas et al., 2002;Dunkley Jones et al., 2013;Frieling et al., 2017). The PETM is associated with a large input of 12 C-rich carbon to the ocean-atmosphere system resulting in a 2.5 ‰-7 ‰ negative carbon isotope excursion (CIE) in the terrestrial and marine sedimentary record (McInerney and Wing, 2011). The PETM CIE lasted up to 200 kyr and is characterized by a rapid onset (∼ 1-5 kyr; Kirtland-Turner et al., 2017), followed by a stable body (∼ 100 kyr; van der Meulen et al., 2020) and a gradual recovery towards background conditions (McInerney and Wing, 2011). There were a number of smaller-magnitude hyperthermals in the early Eocene, but the PETM differs from these events with both its greater magnitude and longer duration Bowen, 2013). However, there is still no consensus on the ultimate PETM cause or whether several mechanisms contributed to prolong the PETM duration (e.g. 
Zeebe et al., 2009;Bowen et al., 2015). Several 12 C-enriched carbon sources may have contributed to the PETM CIE: the dissociation of methane clathrates (Dickens et al., 1995), a bolide impact activating terrestrial carbon reservoirs (Kent et al., 2003;Schaller et al., 2016), and volcanic and thermogenic degassing from the North Atlantic Igneous Province (NAIP; Fig. 1; Eldholm and Thomas, 1993;Svensen et al., 2004;Storey et al., 2007a). The hydrological cycle also changed substantially during the PETM (e.g. Carmichael et al., 2017), with modelling studies suggesting an overall increase in extreme weather events (Carmichael et al., 2018). Proxy evidence indicates a more humid climate, particularly at higher latitudes and in marginal marine areas such as Antarctica (Robert and Kennett, 1994), the northeast US coast (Gibson et al., 2000;John et al., 2012), the Tethys (Bolle et al., 2000;Egger et al., 2003;Khozyem et al., 2013), the North Atlantic (Bornemann et al., 2014), the North Sea (Kender et al., 2012;Kemp et al., 2016), and the Arctic (Dypvik et al., 2011;Harding et al., 2011). In contrast, areas such as the Pyrenees (Schmitz and Pujalte, 2003) and the US interior (Kraus and Riggins, 2007) show evidence of more arid climates. There seems to be con-siderable regional and temporal variation in the hydrological changes, with an increased meridional transport of water vapour from low to high latitudes leading to an overall dry-dryer, wet-wetter climate response to the global warming (Carmichael et al., 2017). The 4-5 • C PETM temperature increase (Thomas et al., 2002;Dunkley Jones et al., 2013;Frieling et al., 2017) is comparable to that predicted in response to the current anthropogenic carbon emissions (Riahi et al., 2017). The PETM is therefore an important natural analogue for future greenhouse conditions, as the environmental and ecological response may hold clues for the consequences of present-day global warming Alley, 2016;Penman and Zachos, 2018;Svensen et al., 2019). Model predictions suggest that the current global warming will lead to an enhanced hydrological cycle, akin to that indicated by PETM proxy records (Held and Soden, 2006;Seager et al., 2010;Trenberth, 2011). The intensification of both droughts and extreme weather events is already occurring in parts of the world, with substantial consequences for human settlements (e.g. Riahi et al., 2017). Similarly, a decrease in ocean oxygenation has been observed for the last 50 years, most likely resulting from the current global warming (Bograd et al., 2008;Stramma et al., 2012). The spread of marine anoxia is a well-known consequence of global warming, negatively affecting marine ecosystems as a whole (Stramma et al., 2008;Gilly et al., 2013). Understanding the timing and regional distribution of the environmental response to global warming in the past is therefore vital to meet the challenges of the future. The Stolleklint section on the island of Fur in northwest Denmark offers an excellent opportunity to study the environmental response to temperature changes during the PETM in detail (Fig. 1). Denmark is placed in the eastern part of the epicontinental North Sea, which during the latest Paleocene became a highly restricted basin due to NAIP thermal uplift . 
During the PETM the North Sea was characterized by bottom-water deoxygenation (Schoon et al., 2015) and a high sedimentary input, significant surface water freshening, and the development of halocline stratification reflecting an intensified hydrological cycle (Zacke et al., 2009; Kender et al., 2012; Kemp et al., 2016). At Stolleklint, the PETM is recognized by a negative 4.5 ‰ CIE and the appearance of the diagnostic dinoflagellate Apectodinium augustum at the base of the earliest Eocene Stolleklint Clay (Fig. 2; Heilmann-Clausen, 1994; Schmitz et al., 2004; Schoon et al., 2013; Jones et al., 2019). The Stolleklint Clay, which covers the PETM interval in Denmark, is a thermally immature and expanded clay-dominated unit, making this a unique and particularly well-suited section for detailed geochemical analyses. Located in a likely downwind direction and within proximity to the NAIP, Denmark was also ideally placed to record the contemporary volcanic activity from the NAIP (Fig. 1) that lasted between ∼ 63-54 Ma (Jones et al., 2019; Stokke et al., 2020b; Wilkinson et al., 2017). (Figure 1 caption, in part: ... Abdelmalak et al., 2016, Horni et al., 2017, and Jones et al., 2019. The orange dot notes the position of core 22/10a-4 described by Kender et al. (2012) and Kemp et al. (2016). Blue lines: plate boundaries. Black lines: present-day coastlines. Light and dark blue areas: shelf and deep marine areas, respectively. Light red areas: known extent of subaerial and submarine extrusive volcanism from the NAIP. Dark red: individual volcanic centres. Black areas: extent of known NAIP sill intrusions in sedimentary basins. The total extent of intrusions beneath the extrusive volcanism is not known.) This is evidenced by the hundreds of NAIP tephra layers interbedded in the Danish and North Sea stratigraphy, mainly deposited during the most voluminous phase of the NAIP between ∼ 56-54 Ma (Fig. 2; Bøggild, 1918; Knox and Morton, 1988; Larsen et al., 2003). The NAIP's importance in the PETM initiation and termination is a topic of much discussion (Svensen et al., 2004; Jolley and Widdowson, 2005; Storey et al., 2007a; Frieling et al., 2016; Saunders, 2016; Gutjahr et al., 2017; Jones et al., 2019). To refine this relationship, better constraints on the relative timings of volcanic activity and climatic and environmental changes are needed. A high-resolution sea surface temperature (SST) reconstruction from Stolleklint based on the organic palaeothermometer TEX86 found that SSTs increased by about 10 °C across the CIE onset and then gradually decreased during the CIE body and recovery (Stokke et al., 2020a). In the present study, we combine mineralogical, sedimentological, and geochemical proxies to investigate the relationship between changes in temperature and variations in both basin oxygenation and sediment input; the latter is typically inferred to indicate changes in terrestrial erosion and runoff. Both increased weathering of siliciclastic rocks and enhanced sequestration of organic carbon have been proposed as important negative feedback mechanisms, potentially driving the PETM recovery (Speijer and Wagner, 2002; Bowen and Zachos, 2010; Ma et al., 2014; Penman, 2016; Dunkley Jones et al., 2018). Better constraints on the timing and global extent of increased silicate weathering and organic-matter sequestration are therefore vital for understanding the PETM termination.

Field area and stratigraphy

Stolleklint is located on the northern shore of the island of Fur in northwest Denmark (Fig. 1).
In the Palaeogene, Fur was part of the Norwegian-Danish Basin, a marginal basin in the eastern semi-enclosed epicontinental North Sea (Rasmussen et al., 2008; Knox et al., 2010). The Norwegian-Danish Basin forms a NW- to SE-striking depression, bounded by the Fennoscandian Shield and the Sorgenfrei-Tornquist Zone to the NE and basement blocks in the Ringkøbing-Fyn High to the SW (Schiøler et al., 2007). Salt diapirs of Zechstein salt create additional restrictions. (Figure 2 caption, in part: ... King (2016) and Schiøler et al. (2007). The δ13C and δ18O curves indicate the stratigraphic position of two periods of carbon perturbation: the Paleocene-Eocene Thermal Maximum (PETM) and the Eocene Thermal Maximum 2 (ETM2). Carbon and oxygen isotope data from Cramer et al. (2009) and Littler et al. (2014) are plotted on the GTS2012 timescale (Ogg, 2012).) The base of the section at Stolleklint likely comprises the Holmehus Formation, which corresponds to the Lista Formation offshore in the North Sea (Figs. 2, 3, 4). This is a hemipelagic, bioturbated fine-grained mudstone, representing the culmination of a long period of transgression in latest Paleocene Denmark (Heilmann-Clausen, 1995). In the latest Paleocene, a combination of thermal uplift around the NAIP (Knox, 1996) and tectonic uplift along the Sorgenfrei-Tornquist Zone led to a relative sea-level fall and almost complete isolation of the North Sea Basin. In Denmark, this resulted in either erosion of the latest Paleocene strata, a hiatus in deposition, or deposition of the informal Østerrende clay unit above the Holmehus Formation. However, the Østerrende clay unit has a very limited regional distribution, and it is uncertain how much is present at Stolleklint despite its presence further south in Denmark (Fig. 2; Schmitz et al., 2004; King, 2016). Schoon et al. (2015) correlated the uppermost Paleocene stratigraphy at Fur with the Østerrende clay, similar to that seen at Store Baelt (Fig. 1a). However, the Østerrende clay is absent in cores drilled at Mors ∼ 20 km to the west and at Ølst ∼ 80 km to the SE (Fig. 1a; Heilmann-Clausen, 1995), suggesting that a hiatus of uncertain duration followed the Holmehus Formation at Stolleklint. Still, due to the uncertainty of this boundary, we will henceforth refer to the lowermost unit as the Holmehus/Østerrende Formation. The Paleocene-Eocene transition is seen as a lithological shift from the Holmehus/Østerrende Formation bioturbated clays to the dark, laminated clays of the Stolleklint Clay (Figs. 3, 4; Heilmann-Clausen et al., 1985; Heilmann-Clausen, 1995; King, 2016). The lithological change is accompanied by the almost complete absence of benthic fauna and preferential dissolution of remaining calcareous organisms within the Stolleklint Clay (Heilmann-Clausen, 1995; Mitlehner, 1996). The Stolleklint Clay is an informal unit, representing the lower Ølst Formation in northern Denmark and correlating with the offshore Sele Formation (Fig. 2; Heilmann-Clausen, 1995). A condensed, glauconite-rich silty horizon marks the Stolleklint Clay base (Heilmann-Clausen, 1995; Schmitz et al., 2004; Schoon et al., 2015). This glauconite-rich silt contains mainly authigenic and biogenic grains and was likely deposited in an upper bathyal to outer neritic environment with low sedimentation rates (Nielsen et al., 1986; Schoon et al., 2015). A relative sea-level rise is recorded in PETM sections in the Atlantic, Pacific, Tethyan, and Arctic oceans (Sluijs et al., 2008; Harding et al., 2011; Pujalte et al., 2014; Sluijs et al., 2014).
It was likely caused by thermal expansion of seawater due to global warming (Sluijs et al., 2008) and may pre-date the PETM by up to 20-200 kyr (John et al., 2012). Although this transgression was overprinted by regional tectonically forced regression in the latest Paleocene, the earliest Eocene Stolleklint Clay is deposited in an outer neritic environment (between 100-200 m; Knox et al., 2010; Schoon et al., 2015) during a gradual transgression (Heilmann-Clausen, 1995). The Stolleklint Clay is overlain by the ∼ 60 m thick clay-rich Fur Formation diatomites (Figs. 3, 4), correlating to the offshore Sele and Balder formations (Fig. 2). At Stolleklint, the PETM is defined by a negative CIE of −4.5 ‰ based on stable carbon isotopes of bulk samples (δ13C_TOC). The CIE is characterized by a sharp (2 cm) onset above Ash SK2 at the base of the Stolleklint Clay, a thick stable body phase (∼ 24 m), and a gradual recovery (∼ 4.5 m) from about Ash −33 to around Ash −21a (Figs. 3, 4; Jones et al., 2019). Recent glaciotectonic activity has resulted in a relatively steep bedding with internal small-scale folding and thrusting (Pedersen, 2008), complicating stratigraphic thickness estimates. Jones et al. (2019) used trigonometry to estimate a local true thickness of 24.4 ± 2 m (24.2 m excluding ash layers) for the PETM onset and body at the Stolleklint beach: from the top of Ash SK2 to the base of Ash −33. An overall sedimentation rate was then calculated for the PETM body based on the estimated true thickness and an assumed 100 kyr duration for the PETM body (van der Meulen et al., 2020). The PETM at Stolleklint is consequently associated with a substantially increased sedimentation rate from the condensed glauconitic base to a maximum sediment accumulation rate in the Stolleklint Clay of about 24 cm/kyr (Stokke et al., 2020a). More than 180 tephras up to 20 cm thick have been identified in the stratigraphy exposed at Fur, with the majority (∼ 140) within the post-PETM Fur Formation (Fig. 2; Bøggild, 1918; Pedersen and Surlyk, 1983). Tephra is a general term for all airborne volcanic fragmented material, but the grain sizes of all the Fur tephras are < 2 mm and therefore within the ash fraction. Heavily altered ashes are called bentonites, and while this applies to some of the lowermost ashes, we will for simplicity use the term ash for all. The volcanic ashes are grouped in a negative and positive ash series based on variations in outcrop appearance and geochemistry (Bøggild, 1918), with additional ash layers SK1-4 identified later at the base of the Stolleklint Clay (Schmitz et al., 2004; Jones et al., 2019). The SK ashes and the negative series are a heterogeneous mix of ash compositions, whereas the positive series are largely comprised of tholeiitic basalts (Morton and Evans, 1988; Larsen et al., 2003). All the ashes are believed to be sourced from NAIP explosive volcanism during the northeast Atlantic opening (Larsen et al., 2003; Storey et al., 2007a; Stokke et al., 2020b). These ash layers are found throughout the North Sea and North Atlantic (Knox and Morton, 1988; Haaland et al., 2000), with some of the major layers traced all the way to Austria (Egger et al., 2000). (Figure 4 caption, in part: ... (Stokke et al., 2020a). Sedimentological log from Stolleklint with legend below. Red squares labelled A-D indicate the stratigraphic position of XRF box-core scans shown in Figs. 5 and 6. Bulk rock mineralogy and clay fraction mineralogy (< 2 µm) is presented as percentages. Note that the clay fraction represents separate analyses with a much lower detection limit than the bulk analyses, and therefore does not directly reflect the results of the bulk mineralogy. Ages indicated on the right are based on the following sources, indicated by superscript numbers: 1, estimated age for the PETM onset from Westerhold et al. (2018); 2, age of a bentonite from Charles et al. (2011) marking the middle of the PETM recovery, assuming the Svalbard and Fur CIE timings are coeval; 3, corrected Ar-Ar age of Ash −17 from Storey et al. (2007a), recalibrated to the Fish Canyon Tuff (FCT) calibration of Renne et al. (2010, 2011) to 55.66 ± 0.12 Ma.)
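The sedimentation-rate figure quoted above follows directly from the thickness and duration estimates given in this section. A minimal check, using the 24.2 m ash-free thickness and the assumed 100 kyr duration of the CIE body:

```python
# Average sediment accumulation rate for the PETM body at Stolleklint.
true_thickness_m = 24.2       # ash-free true thickness, top of Ash SK2 to base of Ash -33
duration_kyr = 100.0          # assumed duration of the stable CIE body

rate_cm_per_kyr = true_thickness_m * 100.0 / duration_kyr
print(f"{rate_cm_per_kyr:.1f} cm/kyr")   # ~24 cm/kyr, matching the value quoted in the text
```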
Sampling

Samples were mostly collected from the Stolleklint beach (56°50′29″ N, 8°59′33″ E; Figs. 1b, 3), with some additional samples from a quarry near Fur Camping (Quarry FQ16 at 56°49′51″ N, 8°58′45″ E; Fig. 1b). At Stolleklint, the Stolleklint Clay and the Fur Formation are exposed in the cliff side (Fig. 3). However, the base of the Stolleklint Clay and the Paleocene-Eocene transition were not exposed at the time of fieldwork due to coastal erosion (Fig. 3c). We therefore excavated a 43 m long and 0.5 m deep trench along the beach (Fig. 3b). The estimated true thickness of 24.4 m from the top of Ash SK2 to the base of Ash −33 is used as the depth scale for stratigraphic presentation (e.g. Fig. 4). The scale is measured as positive and negative depth relative to the base of the main marker bed Ash −33. As the PETM was the main target, samples were collected at the highest resolution across the PETM onset and then at lower resolution within the PETM interval and in the post-PETM section: discrete 1 cm thick samples were collected at 1 cm intervals (i.e. 100 % sampling) from ∼ 0.25 m below to ∼ 0.9 m above Ash SK2 and then at 0.5 m intervals (0.2-0.3 m when converted to the estimated true thickness) up to Ash −33. Samples above Ash −33 were collected from the cliff face at Stolleklint at 10-20 cm intervals. Additional samples from −5.6 to +1.9 m relative to the base of Ash −33 were included from the quarry FQ16, sampled at ∼ 30 cm intervals. All samples were oven-dried at ≤ 50 °C and powdered in an agate hand mortar or an agate disc mill before further analysis. The sediments' unconsolidated character enabled the collection of four box cores up-section. The box cores were collected in 50 cm long and 5 cm wide and deep aluminium boxes. These were pushed into the sediments before the surrounding material was removed and the box cut away with its content intact using a steel wire. Box cores were collected in order to get complete recovery of selected intervals for XRF core scanning (Fig. 4). Two box cores were collected across the PETM onset (−24.90 to −24.40 m and −24.63 to −24.20 m stratigraphic depth intervals), and two from the PETM body, with one from the lower laminated part (−14.47 to −14.17 m) and one from the more homogeneous upper part (−10.81 to −10.48 m).

XRD bulk and clay mineralogy

Bulk rock mineralogy was determined on eight samples from −24.81 to +5.35 m depth, while 13 samples were analysed for clay minerals.
The mineralogy of both bulk rock and clay fraction of Fur sediment samples were determined by Xray diffraction (XRD) analyses on a Bruker D8 ADVANCE diffractometer with a Lynxeye one-dimensional positionsensitive detector (PSD) and CuKα radiation (λ = 0.154 nm; 40 mA and 40 kV) at the Department of Geosciences, University of Oslo. The bulk rock fraction was wet milled in a McCrone micronizing mill, prepared as randomly oriented samples, and analysed with a step size of 0.01 • from 2 to 65 • (2θ ) at a count time of 0.3 s (2θ ). The detection limit for bulk rock analyses is about 2 %. The software DIFFRAC-EVA (v. 2.0) was used for phase determination, and phase quantities were determined by Rietveld refinement (Rietveld, 1969) using PROFEX (v. 3.13.0;Doebelin and Kleeberg, 2015). The clay fraction (< 2 µm) was separated from the crushed whole-rock sample (before wet milling) by gravity settling and then prepared as oriented aggregate mounts using the Millipore filter transfer method (Moore and Reynolds, 1997). As the dried samples had to be powdered prior to separation, they contain some minor contribution from the coarser fraction. XRD clay data were recorded with a step size of 0.01 • from 2 to 65 • (2θ) at a count time of 0.3 s (2θ ) in airdried samples and a step size of 0.01 • from 2 to 34 • (2θ ) at a count time of 0.3 s (2θ) on treated samples. Three rounds of treatments were applied: 24 h of ethylene glycol saturation, 1 h heating at 350 • C, and 1 h heating at 550 • C. The software NewMod II (Reynolds and Reynolds, 2012) was used for semi-quantification of the XRD patterns of inter-stratified clay minerals. XRF elemental core scanning Non-destructive geochemical measurements and radiographic images were obtained from the box cores with an ITRAX X-ray fluorescence (XRF) core scanner (Croudace et al., 2006) from Cox Analytical Systems at the EARTH-LAB facilities, Department of Earth Science, University of Bergen. The core scanner was fitted with a molybdenum Xray tube run with power settings at 30 kV and 30 mA. The box cores were scanned with 10 s exposure time at 0.5 mm sampling intervals. Rock-Eval pyrolysis A total of 39 samples were analysed between −24.81 and +0.01 m depth. Analyses were conducted at the University of Oxford on a Rock-Eval 6 (Vinci Technologies SA, Nanterre, France; Behar et al., 2001) with pyrolysis and oxidation ovens, a flame ionization detector, and an infra-red cell. Powder aliquots of 50 mg were weighed into crucibles and heated first at a temperature profile of 300-650 • C in a pyrolysis furnace and then at 300-850 • C in an oxidation oven. For a detailed methodology on the Rock-Eval 6 application, see Lafargue et al. (1998). The bulk organic-carbon characteristics including the hydrogen index (HI), oxygen index (OI), and T max were investigated using Rock-Eval data. The HI corresponds to the quantity of hydrocarbons per gram TOC (expressed as mgHC/gTOC), and the OI corresponds to the quantity of oxygen released as CO and CO 2 per gram TOC (expressed as mgCO 2 /gTOC). Sediment records of HI and OI provide information on both organic-matter sources and processing. The HI typically reflects the relative distribution of terrestrially and marine derived organic matter, while the OI index indicates the degree of oxidation of the organic matter. 
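The two Rock-Eval indices described above are simple normalized quantities. The sketch below shows how they are conventionally derived from the raw pyrolysis measurements (S2, S3, and TOC); the variable names and example values are illustrative and are not taken from the study's own data tables.

```python
# Conventional Rock-Eval indices (a sketch; S2 in mg HC/g rock, S3 in mg CO2/g rock,
# TOC in wt %). HI and OI are normalized to total organic carbon.

def hydrogen_index(s2_mg_per_g: float, toc_wt_percent: float) -> float:
    """HI in mg HC / g TOC."""
    return s2_mg_per_g * 100.0 / toc_wt_percent

def oxygen_index(s3_mg_per_g: float, toc_wt_percent: float) -> float:
    """OI in mg CO2 / g TOC."""
    return s3_mg_per_g * 100.0 / toc_wt_percent

# Hypothetical sample: S2 = 6.0 mg HC/g, S3 = 2.0 mg CO2/g, TOC = 3.0 wt %
hi = hydrogen_index(6.0, 3.0)      # -> 200 mg HC/g TOC, i.e. a largely marine signature
oi = oxygen_index(2.0, 3.0)        # -> ~67 mg CO2/g TOC
print(hi, oi, "HI/OI =", hi / oi)  # HI/OI = 3, consistent with relatively fresh organic matter
```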
Although the values are tentative, an HI < 100 may indicate a predominantly terrigenous source, while HI > 100 indicates the presence of a significant amount of aquatic algae (marine and/or freshwater) and/or microbial biomass (e.g. Stein et al., 2006). An upper range of HI values of 200-300 mgHC/gTOC typically indicates primarily marine-derived organic matter (Hare et al., 2014). The ratio of HI/OI can be used to indicate the degree of organic-matter alteration. An HI/OI > 2 in sediments typically indicates fresh organic matter, while a high degree of alteration commonly results in an HI/OI < 1, as oxic degradation preferentially removes hydrogen-rich compounds (Hare et al., 2014). Tmax is the temperature at which the maximum amount of S2 hydrocarbons is generated. It can be used to indicate the degree of organic-matter maturation, with Tmax temperatures < 435 °C typically indicating immature organic matter (Peters, 1986).

ICP-MS and element analyser

Analyses were conducted on 24 samples between −24.82 and +5.50 m depth. Dried and crushed marine sediment samples were digested in hydrochloric, hydrofluoric, and nitric acids to give a total dilution of ∼ 4 × 10⁶-fold by volume. Major and trace element analyses of digested bulk sediment samples were performed on a PerkinElmer NexION 350D inductively coupled plasma mass spectrometer (ICP-MS). Total sulfur concentrations were analysed on a Coulomat 702 coulometric analyser. Sample digestion and analyses were all conducted at the Department of Earth Sciences, University of Oxford. The method detection limit, accuracy, and precision of the analyses are given in the Supplement (Table 6, Supplement 1). Major elements analysed on ICP-MS were converted from elemental mass units to weight percent oxide equivalents. The trace metals Cu, Ni, Mo, U, and V were calculated as enrichment factors (EFs, Eq. 1) to account for possible dilution, using standard element values of the average upper crust from McLennan (2001): X_EF = (X/Al)_sample / (X/Al)_average upper crust (Eq. 1). ICP-MS analyses of major elements were used to calculate the chemical index of alteration (CIA; Nesbitt and Young, 1982). The CIA is based on the distribution of mobile cations relative to aluminium oxide and indicates the extent of the conversion of feldspars (which dominate the upper crust) to clays such as kaolinite (Nesbitt and Young, 1982). While the CIA may directly represent the rate and intensity of weathering when measured in situ, when measured in marine sediments it becomes more complex as it also reflects changes in the type of sediment sources and the transport sorting processes (Eq. 2; Nesbitt and Young, 1982). The CIA is expressed as CIA = [Al2O3 / (Al2O3 + CaO* + Na2O + K2O)] × 100 (Eq. 2), where Al2O3, CaO*, K2O, and Na2O are given as whole-rock molecular proportions and CaO* is the total silicate fraction of CaO, corrected for the presence of carbonates and phosphates following the approach of McLennan (1993). First, apatite was accounted for using the molecular proportions, by subtracting 10/3 × P2O5 from CaO.

Sedimentology

The base of the beach section comprises the Holmehus/Østerrende Formation (Figs. 3, 4), which is composed of dark, bluish clay with pervasive bioturbation. It is overlain by a greenish silty layer indicative of glauconite (marked as G in Fig. 5), with up to coarse sand-sized aggregates of glauconite scattered within the silt. The silt's lower boundary is unclear, but it appears conformable and possibly gradational. The 5 cm thick ash layer SK1 is deposited above the glauconitic silt, with a sharp undulating lower boundary (Fig. 5).
About 4.5 cm of structureless, grey clay conformably overlies SK1 and is followed by the ∼ 8 cm thick ash layer SK2 (Fig. 5). Both ashes are light grey and heavily altered. They are normally graded from medium sand to clay-sized particles. Both ashes are partially reworked and become gradually more clay-rich toward the top, with the highest bioturbation intensity at the top of Ash SK2 (Fig. 5). About 2 cm of strongly bioturbated and ash-rich clay overlying the ash is abruptly ended by the initiation of dark laminations (Fig. 5 section B). The exact level of the Stolleklint Clay base is uncertain as the boundary is blurred by ashes SK1 and SK2, but the start of the laminations is included in the Stolleklint Clay, placing the boundary no higher than at the base of the laminations (Fig. 5 section B). Laminated dark clay continues for ∼ 10 cm before the deposition of two ash layers SK3 and SK4, ∼ 1 cm and ∼ 0.4 cm thick respectively (Fig. 5 section B). They are separated by 2 cm of clay with slightly undulating laminations. Above the ash, laminated clay contin-ues about half-way up the beach (Fig. 4), with increasingly folded and disturbed layering ( Fig. 6 section C). The PETM body is dominated by hemipelagic clay. Above the lower laminated part, it appears to have an upper part (from about −10 m depth) comprising very dark grey clay with no visible laminations in field exposures (Fig. 4). However, the XRF radiographic image reveals that there are intermittent diffuse laminations and patchy structures/colour differences within the clay (Fig. 6 section D). The cause of these colour patches is uncertain but could be a result of depositional variations and/or post-depositional deformation. Between about −6 and −2 m depth there are some highly pyritized concretions or likely broken up concreted layers (Fig. 4). Ash layers reappear from about −5 m depth with deposition of the thin (∼ 2 cm), black Ash −39 (Fig. 4). Ashes −34, −33, −32, and −31 are deposited relatively closely spaced between −0.85 to +0.05 m depth, with thicknesses of 2, 20, 2, and 3 cm respectively. The thickest layer, Ash −33, is repeated at the Stolleklint Beach, due to a small glaciotectonic thrust fault (Fig. 4). The boundary between the Stolleklint Clay and the Fur Formation is formally placed at Ash −33 (Heilmann- Clausen, 1995), although there is no sharp lithological boundary (Figs. 3, 4). Dark clays continue upward with a gradually increasing diatomite content. Laminations reappear at about +6 m depth, as the lithology becomes dominated by clay-rich diatomite (Fig. 4). Mineralogy The bulk mineralogy comprises six main phases: zeolites, mica, clay (including smectite, chlorite, kaolin minerals, and glauconite), feldspars, quartz, and pyrite ( Fig. 4; Table 1, Supplement 1). Phyllosilicates dominate the bulk mineralogy in the lower laminated part of the stratigraphy. While the mica phase remains relatively stable throughout, the clay phase reaches its maximum of 50.6 % at ∼ 13 cm above Ash SK2 and the CIE onset (−24.24 m depth), before decreasing substantially upward from about −22 m depth to nearly 0 % at −10 m depth (Fig. 4). Zeolites (of the heulanditeclinoptilolite type) dominate the CIE body, comprising up to 36 % of the bulk mineralogy at −10.48 m depth (Fig. 4). The feldspar phase is largest within the Holmehus/Østerende Formation (37 % at −24.81 m depth) and during the CIE recovery (35 % at +5.35 m depth), while quartz increases in the upper part of the stratigraphy up to 26 % at −0.28 m depth (Fig. 4). 
Pyrite makes up the smallest fraction of the bulk mineralogy (Fig. 4). It increases from 1.9 % in the lower Holmehus/Østerrende Formation (−24.81 m depth) to 5.3 % ∼13 cm above the CIE onset (−24.24 m depth). The highest fraction of pyrite (6.1 %-7.5 %) is reached during the CIE body, before values decrease during the PETM recovery to a minimum of 0.11 % at +5.35 m depth. The clay fraction comprises grain sizes < 2 µm and reflects the relative distribution of the different clay mineral phases, not the total amount of clay. XRD analyses identified four major clay mineral phases: kaolinite, chlorite, mixed-layer illite-smectite with only minor illite layers (indicating almost pure smectite), and illite + fine-grained mica (Fig. 4; Table 2, Supplement 1). Illite-smectite is the dominant clay mineral within the studied section. It comprises 84 %-90 % of the total clay from the base up to 13 cm above the CIE onset (−24.24 m depth), before decreasing in the lower PETM body to a minimum of 32 % about 1.5 m above the CIE onset (−22.86 m depth). The illite-smectite content increases throughout the upper CIE body and recovery, with values between 50 % and 77 % (Fig. 4). Illite + fine-grained mica comprises a smaller part of the total clay fraction, with 10 % at the CIE onset and a maximum of 33 % at −5.93 m depth (Fig. 4). Kaolinite increases substantially from 5 % about 13 cm above the CIE onset (−24.24 m depth) to 37 % at 62 cm above the CIE onset (−23.75 m depth). Kaolinite dominates the clay fraction in the lower laminated PETM body with a maximum of 52 % at −20.60 m depth, before disappearing in the upper PETM body and re-emerging with 11 % during the recovery phase (+5.35 m depth; Fig. 4). Chlorite appears in only 4 of 13 samples and makes up the smallest part of the clay fraction, with a maximum of 7 % at −10.48 m depth (Fig. 4).
Box-core major element variations
Two box cores cross the PETM onset, covering the transition from the Holmehus/Østerrende Formation into the Stolleklint Clay and the ash layers SK1-SK4 (Figs. 4, 5, 6; Supplement 2). Low K/Ti and Fe/Ti ratios suggest that the ashes are highly Ti-rich basalts. The gradual decrease in both K/Ti and Fe/Ti below Ash SK1 may therefore suggest a gradual increase in volcanically derived material before the first ash layers appear in the Danish Basin (Fig. 5). Sulfur counts show a slight overall increase from below to above the ashes. Above Ash SK3, there are several peaks in S, Fe/Ti, and Fe/K (although the latter signal is swamped by the iron-rich ashes in Fig. 5 section B) that correlate with each other and with the dark laminations. Principal component analysis reveals a distinct difference in chemistry between the Holmehus/Østerrende Formation and the Stolleklint Clay (Fig. 7). It also indicates that both the clay between ashes SK1 and SK2 and parts of the glauconitic silt likely include a large ash component. While the glauconitic silt is chemically closer to the underlying Holmehus/Østerrende Formation than to the Stolleklint Clay, the less ash-rich inter-ash clay has a composition closest to the Stolleklint Clay. This suggests that it is indeed part of the Stolleklint Clay base, and we therefore propose that Ash SK1 marks the lower Stolleklint Clay boundary. The correlation circle indicates that differences in Ca and Ti on the one hand and K on the other are the main controlling factors, reflecting the variation between volcanic and clay-dominated fractions respectively (Fig. 7).
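The principal component analysis of the core-scan chemistry can be reproduced in outline with standard tools. The sketch below is a generic illustration rather than the authors' workflow: it assumes a table of XRF element counts (e.g. K, Ca, Ti, Fe, S) per scan position, applies a centred log-ratio style normalisation to reduce closure effects in count data, and extracts the two components that would feed a biplot and correlation circle like Fig. 7.

```python
# Sketch: PCA of XRF core-scan element counts (hypothetical data, generic workflow).
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical input: one row per scan position, counts per second for selected elements.
df = pd.DataFrame({
    "K":  [1200, 1100,  900,   400,   350],
    "Ca": [ 800,  850, 1500,  2600,  2400],
    "Ti": [ 300,  320,  900,  1600,  1500],
    "Fe": [5000, 5200, 9000, 12000, 11500],
    "S":  [ 200,  220,  400,   650,   700],
})

log_counts = np.log(df.to_numpy(dtype=float))
clr = log_counts - log_counts.mean(axis=1, keepdims=True)   # centred log-ratio transform

X = StandardScaler().fit_transform(clr)
pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)            # sample scores (points of the biplot)
loadings = pca.components_.T         # element loadings (arrows of the correlation circle)
print(pca.explained_variance_ratio_)
print(dict(zip(df.columns, loadings.round(2).tolist())))
```

In the published analysis the separation is dominated by Ca and Ti on one side and K on the other, consistent with the volcanic versus clay-dominated end-members described above.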
[Figure 5 caption: XRF element core scans and radiographic images of two box cores crossing the PETM onset at the Stolleklint beach. Despite some cracks in the surface below Ash SK1, the sample preservation is overall good, and XRF core scanning was conducted on a smoothed surface along the centre, avoiding any substantial irregularities. Interpretive logs next to the images indicate the lithological changes. G: glauconite; B: burrow; I-A cl: inter-ash clay. The corrected stratigraphic depth relative to Ash −33 of each section is indicated on the left; the XRF scanning length on the right indicates the actual box-core length. XRF data are given as counts per second (cps) or as dimensionless ratios.]
The box core in Fig. 6 section C, covering the lower Stolleklint Clay, shows that the sediments are strongly laminated and slightly folded. Elevated S concentrations and high Fe/Ti and Fe/K ratios correlate with the dark laminations (Fig. 6 section C), suggesting that there was some regular variability in basin oxygenation. The K/Ti ratio remains relatively stable, suggesting no dramatic lithological changes. Figure 6 section D shows the non-laminated upper Stolleklint Clay, which displays relatively minor elemental variations. However, drops in the K/Ti ratio could indicate intervals of increased volcanically derived material, potentially present as cryptotephras (Fig. 6 section D). The biplot of S and Fe (Fig. 8) shows an overall increase in S upward from the pre-PETM Holmehus/Østerrende Formation and throughout the Stolleklint Clay. It also indicates that the upper PETM body has a more homogeneous, high sulfur content, while there is significant variation in the sulfur counts between dark and light laminations in the lower laminated PETM body, as also observed in the XRF core scans (Fig. 8 section C).
[Figure 6 caption: XRF element core scans and radiographic images of two box cores within the PETM body from the Stolleklint beach. The sample preservation of these box cores was excellent, with no substantial irregularities. Interpretive logs next to the images indicate the lithological changes. The corrected stratigraphic depth relative to Ash −33 for each section is indicated on the left, while the XRF scanning length on the right indicates the actual box-core length. Grey bands in section D indicate potentially tephra-rich horizons. XRF data are given as counts per second (cps) or as dimensionless ratios.]
Organic geochemistry
The thermal immaturity of the Stolleklint Clay has previously been suggested based on the dominant odd-over-even preference in long-chain n-alkanes (Stokke et al., 2020a). This immaturity is now also supported by the low Rock-Eval Tmax values of < 422 °C (Table 3, Supplement 1). The low degree of organic-matter alteration is also shown by HI/OI ratios that are consistently > 2, with the exception of a short interval covering the glauconitic silt and the PETM onset (−24.58 to −24.32 m depth), where values vary between 0.74 and 1.33. The HI peaks at up to 150 mgHC/gTOC in the glauconitic silt between −24.61 and −24.59 m depth but is otherwise < 100 mgHC/gTOC pre-PETM (Fig. 9). The HI increases to > 100 mgHC/gTOC about 13 cm above the CIE onset, at −24.05 m depth. A second major increase in HI occurs above −14 m depth, after which values remain high and reach a maximum of 303 mgHC/gTOC at −0.78 m depth (Fig. 10). The OI values are relatively even and low, between 25 and 42 mgCO2/gTOC, for most of the section.
The main exception is an elevated interval with OI values between 51-94 mgCO 2 /gTOC starting at the base of the glauconitic silt (OI rise from 37 to 78 mgCO 2 /gTOC at −24.58 m depth) and lasting during the PETM onset up to about −24.32 m depth. Both the terrigenous aquatic ratio (TAR) and TOC data are taken from Stokke et al. (2020a). TAR is defined as the ratio of the primarily land-plant-derived long-chain n-alkanes to the short-chain n-alkanes mainly derived from marine algae (Peters et al., 2005). There is a considerable peak in TAR ∼ 5 cm below Ash SK1, within the glauconitic silt. A second increase in TAR values shortly after the PETM onset (about −24.2 m depth) is followed by gradually decreasing TAR values during the PETM body and recovery. The TOC data show a pronounced increase from ∼ 0.45 to ∼ 1.3 wt % TOC about 2 cm above the PETM CIE onset (Fig. 9). TOC concentrations remain relatively stable for the lower CIE body, before increasing again in the upper CIE body (from about −13 m depth) up to a maximum of 3.9 wt % at −0.78 m depth (Fig. 10). At the start of the CIE recovery, TOC drops down again to around 1 wt %. Major and trace elements of single samples The CIA in the pre-PETM sediments is generally at around 67 but has one peak of 80 just before the pre-PETM cooling event (−24.64 m depth; Fig. 9; Table 5, Supplement 1). Following the onset, the CIA rises to a maximum of about 79 at −20.60 m depth, before returning to pre-PETM values in the upper PETM body (Fig. 10). The recovery phase shows increasing CIA values again towards Ash −19, with 84.5 at +5.50 m depth. Both S and pyrite concentrations start to rise before Ash SK1 and the CIE onset, with S increasing from about 1 wt % in the Holmehus/Østerrende Formation to about 3 wt % in the glauconitic silt ( Fig. 9; Table 4, Supplement 1). Sulfur concentrations remain high throughout the CIE body, with maximum values of 4.6 wt % reached at −8.6 m depth (Fig. 10). While Cu EF and V EF have values consistently > 1 indicating a constant relative enrichment, Ni EF shows overall lower EF values and some depletion with five samples < 1. Before the CIE onset Cu EF and V EF values rise from the base of the glauconitic silt (at −24.58 m depth), while Ni EF remains relatively stable ( Fig. 9; Table 5, Supplement 1). Vanadium, Cu, and Ni are elements typically associated with volcanic ash, and they all show a relative enrichment within the lower ashrich interval, with particularly Cu and Ni peaking around ash layers SK1 and SK3. All three elements decrease to lower values in the lower PETM body, before increasing in the upper half (Fig. 10). However, while Cu EF and V EF continue to increase until the recovery, Ni EF decreases slightly about 5 m below Ash −33. Uranium appears to be depleted below the PETM onset with U EF values consistently < 1 and particularly low values above ash layers SK1 and SK3 ( Fig. 9; Table 5, Supplement 1). Immediately above the CIE onset U EF rises > 1 (at −24.36 m depth), although it does not become consistently enriched until about −23.75 m depth. Molybdenum is also depleted in the lowermost part of the section, with Mo EF rising > 1 at the base of the glauconitic silt (−24.8 m depth; Fig. 9). However, Mo EF still does not increase substantially until after the CIE onset (at −24.36 m depth) similar to U EF . 
Both U EF and Mo EF remain relatively stable around 1.3 and 10 respectively in the lower part of the PETM body but increase dramatically in the upper, non-laminated part before decreasing again during the recovery (Fig. 10). While U EF reaches maximum values of about 5.2 (at −1.34 m depth), Mo EF increases substantially with values of about 30-38 between −8.56 to −0.11 m depth (Fig. 10). Kaolinite and changes in weathering across the PETM onset At Fur, there is a substantial influx of kaolinite in the lowermost 10 m of the PETM CIE (Fig. 4). The pulse of kaolinite initiates shortly after the CIE onset and again in the CIE recovery, in both instances concordant with a rise in the CIA and in the bulk mineralogy clay fraction (Figs. 4, 10). As the clay fraction in the strata above and below the kaolinite pulse is dominated by smectite, it suggests some major change in climate and/or sediment supply occurs within the lower part of the PETM stratigraphy. Clay mineral assemblages have been used as indicators of palaeoclimate, most commonly using kaolinite as a proxy for humid tropical climates and smectite for warm climates with seasonal humidity and longer dry spells (Bolle et al., 2000;Thiry, 2000). However, soil formation is a slow process, and the subsequent long duration between formation and deposition in a marine basin suggests that clay mineralogy is an unreliable palaeoclimate proxy at resolutions shorter than 1 Myr (Thiry, 2000). Changes in the clay mineral assemblage in the marine sediments is therefore unlikely to reflect an increase in continental soil production induced by changing temperatures and humidity (Carmichael et al., 2017). An increase in kaolinite content during the PETM is observed globally (Robert and Kennett, 1994;Dypvik et al., 2011;John et al., 2012;Khozyem et al., 2013;Bornemann et al., 2014;Kemp et al., 2016), yet the timing and magnitude vary considerably even within the North Sea (Kender et al., 2012;Kemp et al., 2016). In the western North Sea, the kaolinite content increases earlier before and during the CIE onset and again during the CIE recovery but is relatively low in the CIE body (Kender et al., 2012;Kemp et al., 2016). However, at Fur a rise in the kaolinite content is not observed until after the CIE onset (Fig. 4), and southward in the Bay of Biscay in the North Atlantic the kaolinite content does not significantly change until the PETM recovery (Bornemann et al., 2014). It would be expected that changes in the climate and the hydrological cycle would be broadly similar within such a narrow region. It is therefore reasonable to assume that the timing and extent of kaolinite deposition depends just as much on the availability and proximity to potential source areas as the climatic conditions. Kender et al. (2012) suggested that the initiation of the kaolinite pulse before the CIE in the central North Sea was due to the thermal uplift and shortlived regression in the latest Paleocene. A drop in sea level exposes larger areas to erosion and brings river mouths closer to the marginal marine areas, which could subsequently trigger an influx of terrestrially derived material. A peak in the HI, TAR, and CIA just below the glauconitic silt suggest that a similar short-lived regression is recorded also in Denmark (Fig. 9). However, as the kaolinite pulse at Fur occurs after the CIE and major temperature increase (Fig. 10), it substantially post-dates sea-level fall and major tectonic uplift in the latest Paleocene. 
This suggest that some other trigger than sea-level fall activated the shift to kaolinite deposition in Denmark. Kaolinite particles are relatively large and heavy and typically deposited closer to the source than finer clays like smectite (Gibbs, 1977;Nielsen et al., 2015). Nielsen et al. (2015) found that deposition of kaolinite in the Paleocene-Eocene North Sea thickens substantially towards the Fennoscandian shield and suggest that this is the main source area for the Danish sediments. The Fennoscandian shield was characterized by deeply weathered bedrocks in the Paleogene, reflecting the warm tropical Mesozoic climate (Nielsen et al., 2015), and would therefore be enriched in kaolinite. Considering the typically shorter transport of kaolinite (Gibbs, 1977) and the Danish areas distal position in relation to the NAIP (Figs. 1, 11) it seems likely that the main source of kaolinite was from the Fennoscandian Shield to the north and northeast, despite the main sediment source for the North Sea as a whole during this period being from the west and northwest ( Fig. 11; Jordt et al., 2000;Anell et al., 2012). A North Sea surface water freshening is suggested from palynology and shark-tooth apatite δ 18 O values in the central North Sea (Zacke et al., 2009;Kender et al., 2012). The global influx in kaolinite has generally been attributed to an intensified hydrological cycle leading to enhanced erosion and sediment transport of older deeply weathered bedrock and soils (Schmitz and Pujalte, 2003;John et al., 2012;Bornemann et al., 2014;Carmichael et al., 2017). It therefore seems likely that the observed influx of kaolinite, increased CIA, and rapid intensification of sedimentation rates after the CIE onset at Fur is the result of increased erosion and runoff from the deeply weathered Fennoscandian bedrock. Considering the potential time lag between increased runoff and final marine deposition, this indicates a rapid response in the hydrological cycle to changes in temperature and carbon emissions across the PETM onset. Illite-smectite and zeolites -importance and origin Smectite is the dominant clay mineral within the pre-PETM and most of the earliest Eocene strata at both Fur (Fig. 4) and generally in the North Sea (Nielsen et al., 2015). Smectite is a common weathering product of mafic volcanic material (Stefánsson and Gíslason, 2001), and previous studies have suggested that smectites in the Danish stratigraphic record are of predominantly volcanic origin (Nielsen and Heilmann-Clausen, 1988;Pedersen et al., 2004). Although smectite may precipitate in situ from hydrothermal fluids, this has largely been discounted in the North Sea due to the wide geographic extent of smectite and the overall lack of indices of hydrothermal influence (Huggett and Knox, 2006;Kemp et al., 2016). In situ post-depositional alteration of volcanic ash also probably contributed only minor amounts of smectite, as the ashes are mostly well-preserved (Nilsen et al., 2015). While clay minerals only make up a trace fraction of the bulk mineralogy in the upper PETM body (4 %-8 %), zeolites comprise up to 36 % (Fig. 4). Zeolites are another typical weathering product of volcanic materials (Stefánsson and Gíslason, 2001;Nielsen et al., 2015), supporting a volcanic provenance. Major flood basalts were emplaced in East Greenland and the Faroe Islands between 56.0 and 55.6 Ma, producing large uplifted areas several kilometres high of easily eroded material (Larsen and Tegner, 2006;Storey et al., 2007b;Wilkinson et al., 2017). 
This is reflected in Os isotopes and CIA records in the Arctic Ocean, which record an influx of weathered volcanic material both prior to and during the PETM (Wieczorek et al., 2013;Dickson et al., 2015). Moreover, the first phase of ash deposition was identified within Late Paleocene strata in the North Sea, well before the PETM onset (Knox and Morton, 1988;Haaland et al., 2000). Erosion and redeposition of altered tephra deposited around the North Sea likely constituted a highly important source for the volcanic material in the North Sea basins (Pedersen et al., 2004;Nielsen et al., 2015;Kemp et al., 2016). Smectite is found in abundance throughout the North Sea stratigraphic record, and decreases as ash deposition ceases upward in the Eocene (Nielsen et al., 2015;Kemp et al., 2016). It seems likely that the dominance of smectite and abundance of zeolites reflect this extensive extrusive volcanism around the NAIP (Nielsen et al., 2015;Kemp et al., 2016), highlighting the importance of the NAIP in augmenting silicate weathering during the PETM. Volcanic indices Although the principal component analysis indicates that the glauconitic silt is most like the Holmehus/Østerrende Formation (Fig. 7), the gradual increase in Ti relative to Fe and K shown by the XRF element core scans suggest a gradual change in lithology towards Ash SK1 (Fig. 5). Titanium is generally considered a stable element directly reflecting the coarse-grained terrigenous fraction (Rothwell and Croudace, 2015), but the highly Ti-rich nature of the ashes SK1-4 renders Ti an unreliable proxy for terrigenous input in this particular section. Titanium can also be used to indicate volcanic provenance, where K and Ti reflect felsic and mafic sources respectively (Rothwell and Croudace, 2015). In fact, the K/Ti ratio has been applied as a useful proxy for felsic/mafic provenance in the North Atlantic (Richter et al., 2006) and could indicate a gradual rise in mafic volcanic derived material prior to the main ash deposition (Fig. 5). Estimates of the timing and duration of the East Greenland lava eruptions suggests that a 5-6 km thick lava pile was emplaced between 56.0 and 55.6 Ma (Larsen and Tegner, 2006), indicating that there was voluminous mafic NAIP activity during this period. The trace metals Cu, Ni, and V are found to increase in a similar manner within the glauconitic silt (Fig. 9), all of which are typically associated with volcanic material and maintain high concentrations within the SK1-SK2 interval. An amplified influx of weathered basaltic material such as smectite could cause the gradually increased Ti flux. However, smectite is already the dominant clay phase in the Holmehus/Østerrende Formation (Fig. 4) and does not show a significant rise in the glauconitic silt. It therefore seems that the augmented Ti concentrations within the glauconitic silt were caused by an increased ash component rather than from basaltic weathering. Such volcanic ash deposits that are not visible to the naked eye are called cryptotephras and typically include glass shards and crystals together with non-volcanic deposits (Cassidy et al., 2014). It is possible that the glauconitic silt includes an increasing portion of cryptotephras prior to the large eruptions producing ashes SK1 and SK2. A previous study found that SSTs cooled prior to the PETM onset in Denmark (Stokke et al., 2020a). Although the cooling started within the glauconitic silt below Ash SK1 (Fig. 
9), they proposed that it could be a result of volcanic cooling induced by SO 2 aerosols from NAIP eruptions (Stokke et al., 2020a). The observation of a potentially increasing ash component already within this glauconitic unit would be supportive of a volcanically induced cooling. The identified tephra layers in Denmark represent explosive eruptions with an unusually large magnitude in order to be transported such a long distance (Stokke et al., 2020b). The absence of visible tephra layers therefore does not automatically mean that there was no tephra producing eruptions during the PETM. Besides two thin ash layers about 10 and 12 cm above Ash SK2, there are no visible ash layers during the PETM body until the deposition of Ash −39 at about −5 m depth (Fig. 4). However, the box cores allow for a detailed and high-resolution overview that might reveal the presence of ash-rich intervals earlier. Figure 6 section D shows correlating changes in S, Ca, Fe/Ti, Fe/K, and K/Ti between about −10.81 to −10.48 m depth. The presence of cryptotephra layers is particularly likely when low K/Ti ratios correlate with increases in Fe/Ti and Ca, and to some extent S. Relative changes in Fe/Ti -and to some extent Ca -depend strongly on the source of the volcanic material. These results indicate four possible ash-rich horizons within the dark clays, which could be cryptotephras of slightly variable chemistry (Fig. 6 section D). This suggests that explosive volcanic eruptions at a scale substantial enough for some material to reach Denmark may also have occurred during the PETM body. However, much more detailed work is needed in order to confirm the presence or absence of tephra fall deposits during this interval. Changes in basin oxygenation The lithological shift from the bioturbated Holmehus/Østerrende Formation into the laminated Stolleklint Clay reflects a change in the oxygen content in the bottomwater environment. An increase in S, pyrite, and V EF within the glauconitic silt could indicate reducing conditions had already initiated below the laminations, prior to the CIE onset ( Fig. 9). However, this is contrary to the low organic content, abundant bioturbation, and high content of glauconite, suggesting that an oxygenated environment prevailed pre-PETM at Fur. An oxic environment has also been indicated by the relatively high values of the organic biomarker pristane/phytane indicating oxidation of phytol to pristane (Stokke et al., 2020a). Sluijs et al. (2014) explained a similar co-occurrence of oxic and euxinic proxies within a section in the Gulf of Mexico as the result of seasonal to decadal variations in basin oxygenation. Alternatively, it could be attributed to post-depositional authigenic enrichment due to deposition of ash layer SK1, as volcanic tephra deposits can reduce the sediment pore water oxygen levels just below an ash layer (Hembury et al., 2012). However, the highly redox-sensitive elements Mo and U do not show a similar increase below the ashes and the CIE. On the contrary, U EF < 1 suggests that the sediments are rather depleted in U (Fig. 9). Both the high-resolution XRF element core scans and ICP-MS analyses of Ni and Cu indicate an increased ash component within the sediments just below Ash SK1. These contradictory observations could be explained by a large component of volcanic ash with the sediments, which is expected to have relatively high concentrations of both V and S. 
Laminations occur only 2 cm above the CIE onset, together with an increase in S and Fe/Ti in XRF element core scans (Fig. 5 section B). Iron and Ti in marine sediments commonly co-vary, and elevated Fe/Ti ratios therefore indicate excess Fe over basaltic lithogenic values (Marsh et al., 2007). Fe is redox-sensitive and may reflect changes in basin oxygenation post-deposition (Rothwell and Croudace, 2015). An increase in Fe relative to Ti or K may therefore indicate suboxic conditions, particularly in concert with increased S content (e.g. Sluijs et al., 2009). TOC, Mo EF , and U EF (Fig. 10) also rise substantially at the base of the laminations, with Mo EF above 6. The burial rate of Mo in-creases by 3 orders of magnitude in sulfidic environments relative to oxic, as Mo becomes highly reactive in the presence of hydrogen sulfide (Tribovillard et al., 2004;Scott and Lyons, 2012). And, although the values are somewhat tentative, Mo EF and U EF values between about 3-10 have been related to suboxic conditions (Algeo and Tribovillard, 2009;Tribovillard et al., 2012). A previous study from the exact same section at Stolleklint also found that photic zone euxinia may have occurred just after the CIE onset, indicated by the presence of sulfur bound isorenieratane; a diagenetic product of green sulfur bacteria (Schoon et al., 2015). We therefore conclude that while there may be some uncertainty as to when oxic conditions started to deteriorate due to the high content of ash, the start of laminations about 2 cm above Ash SK2 and the CIE onset indicate the initiation of anoxic to sulfidic conditions at Stolleklint. In addition, the XRF element core scans document a direct correlation between elevated S and Fe/Ti, and the dark laminations ( Fig. 6 section C), suggesting regular fluctuations in basin deoxygenation in the lower part of the PETM body. In the upper part of the PETM body the Stolleklint Clay becomes apparently structureless and almost black. Here XRF S counts show continuous high values (Fig. 8), suggesting that reducing conditions become more or less continuous. The TOC content and the trace metals Cu EF , V EF , and particularly Mo EF and U EF increase similarly in the dark upper half, with Mo EF up to ∼ 37 and U EF up to ∼ 5 (Fig. 10). Pyrite, S (wt %), and Ni EF values show a similar increase initially but decrease above Ash −39 (Fig. 10). Mo EF > 10 may indicate euxinic conditions (Algeo and Tribovillard, 2009;Tribovillard et al., 2012), although the comparatively lower enrichment of U may also suggest that some other factor is enhancing Mo enrichment in the sediments such as the "particle shuttle" effect (Algeo and Tribovillard, 2009) or enhanced Mo-trapping by sulfurized marine organic matter (Tribovillard et al., 2004;Algeo and Tribovillard, 2020). Schoon et al. (2015) argued that photic zone euxinia prevailed during the entire PETM interval both at Stolleklint and Store Baelt (Fig. 1) based on the presence of green sulfur bacteria. Unfortunately, their data from Stolleklint only cover the lowermost 2.5 m and uppermost 0.5 m of the Stolleklint Clay and therefore exclude most of the PETM body. It therefore seems there is little independent evidence in support of a euxinic environment prevailing throughout the PETM. We therefore conclude that the PETM in Denmark was characterized by an anoxic to sulfidic environment that become increasingly prevalent during the PETM body. 
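The Mo and U enrichment factors can be condensed into a rough redox classification using the tentative thresholds quoted above (Algeo and Tribovillard, 2009; Tribovillard et al., 2012). The helper below is purely illustrative and not part of the published workflow; the cut-offs are the approximate values discussed in the text and should not be read as firm boundaries.

```python
# Sketch: rough bottom-water redox classification from Mo and U enrichment factors.
# Threshold values follow the approximate ranges discussed in the text; they are tentative.

def classify_redox(mo_ef: float, u_ef: float) -> str:
    if mo_ef < 1 and u_ef < 1:
        return "oxic (no authigenic Mo or U enrichment)"
    if mo_ef > 10:
        return "euxinic, or Mo boosted by particle shuttling / sulfurized organic matter"
    if mo_ef >= 3 or u_ef >= 3:
        return "suboxic to anoxic"
    return "weakly enriched / indeterminate"

# Values in the range reported for the Stolleklint Clay:
print(classify_redox(mo_ef=10, u_ef=1.3))   # lower PETM body  -> suboxic to anoxic
print(classify_redox(mo_ef=37, u_ef=5.2))   # upper PETM body  -> euxinic or shuttle-enhanced
```

Applied to this section, such a scheme reproduces the interpretation above: intermittently suboxic to anoxic conditions in the laminated lower PETM body and more persistent, possibly euxinic conditions in the structureless upper part.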
NAIP uplift in the latest Paleocene led to closing of ocean seaways and North Sea Basin restriction prior to the deposition of Ash SK1, resulting in poor circulation and halocline stratification that could explain an early deoxygenation Kender et al., 2012). While this could explain a decrease in basin oxygenation below the SK ashes and CIE onset, there is no further evidence supporting regional uplift and North Sea Basin restriction follow-ing the PETM onset. On the contrary, high HI (Fig. 10) and low input of branched glycerol dialkyl glycerol tetraethers and long-chained n-alkanes (Stokke et al., 2020a) suggest that marine-derived organic matter increases up-stratigraphy, as the Stolleklint Clay was likely deposited during a relative sea-level rise (Heilmann-Clausen, 1995). Kender et al. (2012) found evidence of low surface water salinity and extensive stratification in the North Sea and suggested this as the main cause of anoxia during the PETM. The dark massive clays in the upper part of the PETM body are strongly enriched in organic matter (TOC up to 2.9 wt %) and Cu (Cu EF up to 5.1), which both may indicate an increase in productivity (Tribovillard et al., 2006). A combination of ocean stratification and increased productivity would efficiently contribute to the increase in basin anoxia in the upper PETM body. Carbon drawdown -the PETM recovery At Stolleklint, temperatures rose at least 10 • C immediately after the onset, followed by a shift to gradually decreasing SSTs throughout the PETM body and recovery ( Fig. 9; Stokke et al., 2020a). Silicate weathering is highly sensitive to runoff, temperature, and topography (Gislason and Oelkers, 2011). The warming combined with the increased runoff, indicated in the North Sea by enhanced surface water freshening (Zacke et al., 2009;Kender et al., 2012), would result in a climate ideal for increased silicate weathering and/or denudation. At Stolleklint, we see a rapid response in continental weathering and runoff to changes in carbon cycle and temperature. This is indicated by the large increase in sedimentation rate and the influx of weathered material from the Fennoscandian shield suggested by the rise in kaolinite and the CIA shortly after the PETM onset (Figs. 4,10). While both the kaolinite content and the CIA decrease in the upper PETM body, the sedimentation rate likely remained high suggesting a relatively rapid influx of other minerals such as the volcanically derived zeolites and smectite. Fresh basaltic volcanic terrains are particularly prone to weathering and constitute one of the main sources of weathered suspended material in the world's oceans (Gislason and Oelkers, 2011). The extensive NAIP flood basalt volcanism before and during the PETM (e.g. Larsen and Tegner, 2006) may therefore have played an important role in enhancing silicate weathering during the PETM, as reflected in the dominance of smectite within the North Sea as a whole (Nielsen et al., 2015). The augmented organic-matter burial (increased TOC) in concert with high Cu EF (Fig. 10) suggest a possible rise in primary productivity in the upper PETM body. This could have been prompted by an influx of nutrients to the basin, which could have been caused by an enhanced terrestrial sediment influx. However, the low TAR values and the dominance of marine organic matter (high HI; Fig. 10) suggest that sea-level rise and a decrease in the terrigenous influx dominate upwards in the PETM body. 
Alternatively, the de-position of volcanic ash can work as a fertilizer, supplying key nutrients to the marine environment resulting in augmented productivity (Jones and Gislason, 2008). The post-PETM section at Fur is dominated by diatomite deposition, which could be a result of periodic rise in nutrient supply due to the voluminous ash deposition from the PETM recovery and onwards (Stokke et al., 2020a). While there is limited evidence of ash deposition during the PETM body at Stolleklint, additional ash deposition below Ash −39 has now been revealed by the possible cryptotephras in XRF element core scans (Fig. 6 section D). It could be that volcanic ash had an added fertilizing effect promoting a rise in primary productivity and organic carbon sequestration during the later stages of the PETM body and recovery. A key PETM feature is the rapidity of the CIE recovery (e.g. Bowen and Zachos, 2010). Carbon cycle recovery occurs through a combination of natural carbon sequestration and negative feedback mechanisms reducing the atmospheric CO 2 content (McInerney and Wing, 2011). Silicate weathering and denudation are perhaps the most important negative feedback mechanisms driving CO 2 drawdown (Gislason and Oelkers, 2011) and have been proposed as one of the most important drivers during the PETM recovery Torfstein et al., 2010;Penman, 2016). However, Penman and Zachos (2018) found that the δ 11 B and B/Ca records of ocean acidification recover within a similar time frame as the δ 13 C record and far more rapidly than suggested by carbon cycle models that rely on silicate weathering alone (e.g. Zeebe et al., 2009). Similarly, Bowen and Zachos (2010) found that the rate of recovery is an order of magnitude faster than expected for carbon drawdown by silicate weathering alone and suggested that terrestrial carbon sequestration may have played an important part. While our data can neither support not contradict this theory, we have documented a rise in nutrient supply and enhanced export productivity that would also contribute significantly to the increased organic carbon sequestration attributed to the accelerated PETM recovery (Bowen and Zachos, 2010;Komar and Zeebe, 2017;Bridgestock et al., 2019). Enhanced export productivity has also been observed in PETM sites globally (Bains et al., 2000;Egger et al., 2003;Stein et al., 2006;Soliman et al., 2011;Ma et al., 2014;Bridgestock et al., 2019), and average Ba burial rates approximately tripled during the PETM . Our results show that negative feedback mechanisms responded rapidly to changes in carbon cycle and SSTs and remained highly active from PETM onset to recovery. While the δ 13 C values remained low until the PETM recovery, SSTs decreased gradually throughout the PETM CIE. This gradual decline may reflect a temperature response to the continued carbon drawdown by the alternating increases in both silicate weathering and export productivity during the PETM. Conclusions We present new mineralogical and geochemical data from Stolleklint, an expanded marine section at Fur in northwest Denmark covering the PETM onset, body, and recovery. Here, the PETM is defined by a negative 4.5 ‰ CIE and at least 10 • C temperature rise across the PETM onset. The CIE onset is followed by an increase in kaolinite and the overall clay content, the chemical index of alteration, and substantially enhanced sedimentation rates. 
This reflects a rapid response of silicate weathering and transport patterns to changes in the carbon cycle and elevated temperatures, likely due to an intensified hydrological cycle leading to enhanced erosion and sediment transport from the deeply weathered Fennoscandian shield. Large volumes of easily weathered NAIP flood basalts and widespread tephra deposits likely helped accelerate silicate weathering and carbon drawdown. This is reflected in the dominance of volcanogenic minerals such as smectite and zeolite in large parts of the stratigraphy. Basin deoxygenation also became widespread across the PETM onset, indicated by a shift from bioturbated to laminated sediments and extensive geochemical proxy evidence. Although the exact onset of deoxygenation is somewhat blurred pre-PETM by ash deposition, our data show that anoxic to sulfidic bottom-water conditions were prevalent from the CIE onset and became increasingly pervasive throughout the PETM body. Proxy evidence also indicates augmented export productivity towards the upper PETM body, coinciding with the reappearance of volcanic ash in the XRF element core scans and in field exposures. Such a correlation highlights the fertilizing effect of volcanic nutrients and its potential importance in increasing primary productivity. The continued deoxygenation throughout the PETM was likely caused by a combination of the basin's already restricted nature, increased halocline stratification, and intensified export productivity. The results presented in this study show the potentially rapid environmental response to changes in the carbon cycle and temperature. Our data also show that negative feedback mechanisms were active throughout the PETM and illustrate the important role of enhanced silicate weathering and organic-matter burial in driving the carbon drawdown leading to the PETM recovery. This highlights the importance of such marginal marine areas in carbon sequestration and in the recovery from carbon cycle perturbations.
Data availability. All of the research data presented in this paper are publicly available in the Supplement.
Author contributions. EWS, MTJ, and HHS conceptualized and laid out the methodology of the project. EWS, MTJ, LR, HH, IM, BPS, and HHS contributed to data collection and interpretations. MTJ, HH, and HHS contributed with funding acquisition. The original draft was prepared by EWS and MTJ. All authors contributed to the writing in the review and editing stage.
Competing interests. The authors declare that they have no conflict of interest.
Bis(pyridinium) naphthalene-1,5-disulfonate dihydrate
The asymmetric unit of the title organic salt, 2C5H6N+·C10H6O6S2(2−)·2H2O, consists of a pyridinium cation, half a naphthalene-1,5-disulfonate dianion and a water molecule. The dianion has a crystallographically imposed centre of symmetry. In the crystal, N—H⋯O and O—H⋯O hydrogen bonds link cations, anions and water molecules into a three-dimensional network.
Unfortunately, the dielectric constant of the title compound as a function of temperature indicates that the permittivity is essentially temperature-independent below the melting point (411-412 K) of the compound; we found no dielectric anomaly between 80 and 405 K. Herein we describe the crystal structure of this compound. The asymmetric unit of the title compound (Fig. 1) consists of a pyridinium cation, half of a naphthalene-1,5-disulfonate anion and a free water molecule, the anion having a crystallographically imposed centre of symmetry. The pyridinium and naphthalene rings are oriented to form a dihedral angle of 13.89 (6)°. In the crystal, cations, anions and water molecules are connected by N—H⋯O and O—H⋯O intermolecular hydrogen bonds into a three-dimensional structure (Fig. 2; Table 1).
Experimental
The title compound was obtained by the addition of naphthalene-1,5-disulfonic acid (3.62 g, 0.01 mol) to a solution of pyridine (1.6 g, 0.02 mol) in methanol, in the stoichiometric ratio 1:2. Good-quality single crystals were obtained by slow evaporation of the solvent after six days (yield 52%).
Refinement
The water H atoms were located in a difference Fourier map and refined as riding, with the O—H distances restrained to 0.82 Å and with Uiso(H) = 1.5Ueq(O). All other H atoms were placed in geometrically idealized positions and constrained to ride on their parent atoms, with C—H = 0.93 Å and N—H = 0.86 Å, and with Uiso(H) = 1.2Ueq(C, N).
Computing details
Data collection: CrystalClear (Rigaku, 2005); cell refinement: CrystalClear (Rigaku, 2005); data reduction: CrystalClear (Rigaku, 2005); program(s) used to solve structure: SHELXTL (Sheldrick, 2008); program(s) used to refine structure: SHELXTL (Sheldrick, 2008).
[Figure 1 caption: The molecular structure of the title compound, with displacement ellipsoids drawn at the 30% probability level. Atoms labelled with suffix A are generated by the symmetry operator (2 − x, −y, 1 − z).]
Refinement. Refinement of F2 against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F2; conventional R-factors R are based on F, with F set to zero for negative F2. The threshold expression of F2 > σ(F2) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F2 are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
[Table: Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å2).]
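The reported dihedral angle of 13.89 (6)° is the angle between the least-squares planes through the pyridinium and naphthalene ring atoms. As a rough illustration of that calculation, using made-up coordinates rather than the refined ones from the CIF, each plane can be fitted by singular value decomposition and the angle taken between the plane normals:

```python
# Sketch: dihedral angle between two least-squares ring planes (hypothetical coordinates).
import numpy as np

def plane_normal(xyz):
    """Unit normal of the best-fit plane through a set of Cartesian points (N x 3)."""
    centred = xyz - xyz.mean(axis=0)
    _, _, vt = np.linalg.svd(centred)
    return vt[-1]                      # singular vector with the smallest singular value

def dihedral_angle(ring1_xyz, ring2_xyz):
    cos_angle = abs(np.dot(plane_normal(ring1_xyz), plane_normal(ring2_xyz)))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Toy geometry: a planar hexagon and a copy tilted by 14 degrees about the x axis.
angles = np.linspace(0, 2 * np.pi, 6, endpoint=False)
ring1 = np.column_stack([np.cos(angles), np.sin(angles), np.zeros(6)])
theta = np.radians(14.0)
rot_x = np.array([[1, 0, 0],
                  [0, np.cos(theta), -np.sin(theta)],
                  [0, np.sin(theta),  np.cos(theta)]])
ring2 = ring1 @ rot_x.T
print(f"dihedral angle = {dihedral_angle(ring1, ring2):.2f} deg")   # ~14.00
```

With the actual refined atomic coordinates, the same procedure would reproduce the 13.89° value quoted above.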
From frozen cell bank to product assay: high-throughput strain characterisation for autonomous Design-Build-Test-Learn cycles
Background
Modern genome editing enables the rapid construction of genetic variants, which are further developed in Design-Build-Test-Learn cycles. To operate such cycles in high throughput, fully automated screening, including cultivation and analytics, is crucial in the Test phase. Here, we present the steps required to meet these demands, resulting in an automated microbioreactor platform that facilitates autonomous phenotyping from cryo culture to product assay.
Results
First, an automated deep freezer was integrated into the robotic platform to provide working cell banks at all times. A mobile cart allows flexible docking of the freezer to multiple platforms. Next, precultures were integrated within the microtiter plate used for cultivation, resulting in highly reproducible main cultures, as demonstrated for Corynebacterium glutamicum. To avoid manual exchange of microtiter plates after cultivation, two clean-in-place strategies were established and validated, restoring sterile conditions within two hours. Combined with the previous steps, these changes enable a flexible start of experiments and greatly increase the walk-away time.
Conclusions
Overall, this work demonstrates the capability of our microbioreactor platform to perform autonomous, consecutive cultivation and phenotyping experiments. As highlighted in a case study of cutinase-secreting strains of C. glutamicum, the new procedure allows for flexible experimentation without human interaction while maintaining high reproducibility in early-stage screening processes.
Supplementary Information
The online version contains supplementary material available at 10.1186/s12934-023-02140-z.
Table S1: Batch times of cultures inoculated from cryo cultures stored at −80 °C and −20 °C for Escherichia coli. Experiments were conducted as described for Corynebacterium glutamicum in Section "Cell viability studies with C. glutamicum" of the main manuscript. Instead of CGXII medium, cultivation was performed in M9 medium modified from Sambrook et al., 1989, containing 10 g L−1 glucose, 0.001 g L−1 biotin and 20.93 g L−1 3-(morpholin-4-yl)propane-1-sulfonic acid (MOPS) buffer. Batch times were investigated weekly over the course of six weeks by cultivation and subsequent spline analysis of the growth curves. For each storage condition and week, three different cryo cultures were used, each of them for inoculation of four wells to an optical density (OD) of 0.1, giving 12 replicates per storage condition and week. A minimum error in time of 4 min was assumed, since this is the cycle time used for measurements in the BioLector.
Table S2: Batch times for the methanol evaporation test (cf. Figure S1).
Condition | Batch time [h] (inoculation OD 0.1) | Batch time [h] (inoculation OD 0.2)
10-hour evaporation | 13.63 ± 0.10 | 11.94 ± 0.07
10-hour reference | 13.62 ± 0.08 | 11.83 ± 0.07
9-hour evaporation | 13.81 ± 0.15 | 12.09 ± 0.08
9-hour reference | 13.73 ± 0.10 | 12.09 ± 0.08
5-hour evaporation | 12.60 ± 0.09 | 11.16 ± 0.07
5-hour reference | 12.44 ± 0.07 | 11.05 ± 0.08
3-hour evaporation | 13.77 ± 0.16 | 12.11 ± 0.16
3-hour reference | 12.91 ± 0.17 | 11.37 ± 0.10
Figure S1: Comparison of evaporation times. Cultivation was performed at 30 °C, 1 400 rpm and 85% relative humidity in a BioLector Pro, using CGXII medium containing 10 g L−1 glucose. A FlowerPlate without optodes was used, sealed with a gas-permeable sealing foil with a perforated silicone layer for automation.
24 out of 48 wells were filled with 800 µL methanol, which was subsequently removed in two steps as described in the main manuscript. Evaporation took place under the above-mentioned cultivation conditions. After the respective evaporation time, fresh CGXII medium was added to all 48 wells and the wells were inoculated with C. glutamicum wild-type to OD 0.1 or 0.2. Wells without methanol served as references. Evaporation times of 10 h, 9 h and 5 h all led to growth times that did not deviate from reference cultures not treated with methanol before inoculation. In contrast, 3 h is not sufficient to evaporate the methanol, which can be seen in the slower growth. To guarantee a buffer even for sub-optimal liquid handling in a cultivation process, 10 h was used for the clean-in-place (CIP) procedure. Shorter times were not tested further in the complete process, since CIP with medium led to even shorter process times. Batch times were calculated for comparison as shown in Table S2.
Figure S2: Influence of residual methanol on growth behaviour. Cultivation of C. glutamicum wild-type was performed at 30 °C, 1 400 rpm and 85% relative humidity in a BioLector I, using CGXII medium containing 10 g L−1 glucose and various amounts of methanol. Per concentration, six replicates were cultivated in separate wells of a FlowerPlate without optodes. The initial OD was 0.1 for all replicates. Even small amounts of 0.125% methanol in CGXII medium, corresponding to 1 µL in 800 µL cultivation medium, lead to prolonged batch times and a change in signal. This also means that insufficient evaporation times can easily be detected in the backscatter signal, as shown in Fig. S1.
Figure S3: Comparison of CIP with CGXII medium and untreated wells. Cultivation was performed at 30 °C, 1 400 rpm and 85% relative humidity in a BioLector Pro, using CGXII medium containing 10 g L−1 glucose. FlowerPlates sealed with a gas-permeable sealing foil with a perforated silicone layer for automation were used. Reference wells were filled with 800 µL CGXII medium inoculated to OD 0.1. For the medium wash, medium was filled and removed repeatedly as described in the main manuscript. Untreated wells (orange) and wells with several steps of medium washing (green, purple) show highly comparable growth behaviour. Since the few wells that showed slightly delayed growth were untreated ones, this effect is more likely caused by pipetting errors. A higher amount of residual medium due to accumulation during washing would lead to dilution of the cells at inoculation and thus slower growth, which was not observed here. Due to shorter process times, two instead of three washing steps were therefore used in the final CIP procedure.
Table S3: Batch times of three consecutive batches with CIP using CGXII medium. The calculated batch times refer to the data in Figure 5 of the main manuscript. Batch times were analysed by spline approximation, using the maximum of the first derivative for each replicate as the beginning of the stationary phase. For precultures, 12 biological replicates were analysed. Each of these was used to inoculate three main cultures, resulting in 36 main-culture replicates. The second batch is the same data shown in Figure 3 of the main manuscript.
Batch | Batch time (preculture) [h] | Batch time (main culture) [h]
Batch 1 | 10.40 ± 0.08 | 8.48 ± 0.12
Batch 2 | 10.12 ± 0.14 | 8.48 ± 0.10
Batch 3 | 10.14 ± 0.10 | 8.54 ± 0.41
Mean | 10.23 ± 0.17 | 8.50 ± 0.26
Figure S4: Exemplary spline analysis for batch time calculation.
C. glutamicum wild-type cultures in 800 µL CGXII medium were inoculated to the stated OD. In one case, the CIP procedure with methanol was applied to the wells before cultivation; in the other case, the wells were not treated. Using this example, which shows that the CIP does not affect the batch times, it can be seen how the spline methodology is suited to batch time analysis. Univariate cubic smoothing splines (UCSS) were calculated with the bletl python package. The first derivative of the splines shows a clear peak at the entry into the stationary phase for C. glutamicum wild-type. This maximum was also used to calculate batch times for comparing different cultivation and CIP strategies throughout the paper.
Figure S5: Front window cut-out. The resealable cut-out in the front window is needed to dock the transfer station of the freezer to the robotic platform. The picture shows the smaller door for automation purposes, located at the back of the freezer.
The backscatter data in this figure correspond to strategy 2 shown in Figure 6 of the main manuscript, meaning the cultures were spread over three different batch cultivations. Although cutinase activities were significantly higher for replicate 3 across several signal peptides, e.g. Mpr, analysis of the backscatter did not reveal these effects. In addition, LipA showed a lower activity in the assay for replicate 3, but no evidence for this can be seen in the backscatter of main culture 3. The batch effects might thus be caused either by experimental error in the activity assay or by variations in WCBs that cannot be detected in growth but only influence the amount or activity of cutinase.
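The batch-time criterion described above (a smoothing-spline fit of the backscatter signal, then the maximum of its first derivative taken as the entry into the stationary phase) can be sketched with generic tools. The example below uses SciPy rather than the bletl package mentioned in the text, and the backscatter curve is synthetic; it illustrates only the derivative-maximum criterion, not the published analysis code.

```python
# Sketch: estimating a batch time from a backscatter curve with a smoothing spline.
# Generic SciPy illustration of the derivative-maximum criterion; the data are synthetic.
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.arange(0, 14, 4 / 60)                 # one reading every 4 min, in hours
mu, t_batch = 0.35, 10.0                     # assumed growth rate [1/h] and true batch time [h]
signal = np.where(t < t_batch, 5 * np.exp(mu * t), 5 * np.exp(mu * t_batch))
signal = signal + np.random.default_rng(1).normal(0, 0.4, t.size)   # measurement noise

spline = UnivariateSpline(t, signal, k=4, s=t.size * 0.4)           # smoothing spline fit
growth_rate = spline.derivative(1)(t)                               # first derivative

estimated_batch_time = t[np.argmax(growth_rate)]
print(f"estimated batch time ≈ {estimated_batch_time:.1f} h")       # ≈ 10 h for this curve
```

With real BioLector data, the same procedure would be applied per well and the batch times reported as mean ± SD across replicates, as in Tables S2 and S3.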
The morphology and histology study on rabbit degenerated medial meniscus after posterior cruciate ligament rupture
The morphological and histological changes in the medial meniscus after posterior cruciate ligament (PCL) rupture are poorly understood. Forty-eight rabbits were studied as matched pairs of knees; each rabbit had an experimental side, in which the PCL was transected, and a control side. At 4, 8, 16 and 24 weeks after PCL transection, 12 rabbits were killed at each time point. Histological analysis was performed to detect the expression of tissue inhibitor of metalloproteinases-1 (TIMP-1), matrix metalloproteinase (MMP)-1 and MMP-13 in the medial meniscus. We found that the medial meniscus displayed clear morphological signs of degeneration. Histological evaluation showed that the expression levels of TIMP-1, MMP-1 and MMP-13 in the medial meniscus were higher on the experimental side than on the control side (P<0.05). The expression of both TIMP-1 and MMP-13 was initially elevated and then decreased, whereas MMP-1 expression reached its peak swiftly and then remained at a relatively high level. There were clear time-dependent degenerative changes in the histology of the medial meniscus after PCL rupture. The high expression of TIMP-1, MMP-1 and MMP-13 in the cartilage may be responsible for the degeneration, and PCL rupture may trigger meniscus degradation and ultimately osteoarthritis.
Introduction
The posterior cruciate ligament (PCL) is widely accepted to be the strongest ligament in the knee joint; it stabilizes the knee by restricting posterior tibial displacement [1]. The incidence of PCL damage reported in epidemiologic studies ranges from 3% to 44% of acute knee injuries [2-4], and almost 17% of these are isolated PCL injuries [5]. Joint pain, instability and functional degradation of the knee are the most common symptoms of PCL damage. Once the PCL is totally ruptured, the meniscus and other structures have to compensate to maintain the normal function of the knee joint, which may result in meniscus damage and degradation and finally osteoarthritis (OA) of the knee [6,7]. The most important biochemical change in OA is the loss of collagen type II and aggrecan, a large aggregating proteoglycan [8]. Two main enzyme families are believed to be involved in the intrinsic mechanism of degenerative changes in OA: matrix metalloproteinases (MMPs), which mediate the degradation of collagen type II and a broad range of other matrix components, and the tissue inhibitors of metalloproteinases (TIMPs), which regulate the activity of these enzymes [9]. The balance between TIMP and MMP levels is vital for the pathogenic processes of OA [10]. TIMP- and MMP-related tissue damage and degradation of the cartilage have been demonstrated in previous studies [11-13]. Examining the expression levels of TIMPs and MMPs in the meniscus in a PCL rupture model may help us to understand how meniscus degeneration is induced by PCL injury, and the pathogenesis of OA [14]. Our previous study found that either partial or complete rupture of the PCL can increase the radial displacement of the medial meniscus and cause degenerative changes of the medial meniscus [15].
As part of our PCL and meniscus research series, the present study investigates the morphological and histological changes and the expression levels of TIMP-1, MMP-1 and MMP-13 in the medial meniscus after a PCL rupture using a rabbit knee joint model; specifically, it examines the correlation of these expression levels with medial meniscus degeneration and may explain the mechanism of medial meniscus degeneration after PCL rupture.

Animal model of PCL rupture
The animal experiment was carried out in accordance with relevant guidelines and regulations, and was approved by the Medical Ethics Committee of Xiangya Hospital, Central South University (Grant number: 201212067). The present study included 48 mature male rabbits (2.6 ± 0.4 kg, 6 months), housed in separate cages at 25 °C and 50-60% humidity under a 12-h light-dark cycle. The animals had free access to a normal diet and fresh tap water. Surgical transection of the PCL was performed on one randomly selected knee, and the PCL of the contralateral side was exposed but not transected [16,17]. Specifically, the rabbits were anesthetized via the intraperitoneal administration of 3% sodium pentobarbital (0.03 mg/kg) and fixed on the operating table in a supine position. The drawer test was used to examine the stability of both knees. A medial patellar incision was used to dissect the joint capsule. The patella was then placed in a laterally dislocated position, and the PCL was exposed and transected with the knee in flexion. The articular cavity was flushed with 3% hydrogen peroxide and then normal saline. The incision was closed without fixation of the knee joint. The same surgery was conducted on the contralateral side without the PCL transection. Postoperative anti-infection treatment consisted of intramuscular injections of penicillin (800,000 units) once per day for 7 consecutive days. Any animal with a wound infection or suspected infection was excluded. At 4, 8, 16 and 24 weeks after PCL rupture, 12 rabbits were killed at each time point. The medial menisci of both knees were harvested and their morphological characteristics were observed, including surface flatness, color, flexibility and intactness.

Histology
Each medial meniscus was fixed in 4% paraformaldehyde, decalcified in diethylpyrocarbonate-treated 0.2 M ethylenediaminetetraacetic acid (EDTA), dehydrated in graded ethanol and xylene and embedded in paraffin. Serial sections of 3 μm were collected for hematoxylin and eosin (H&E) and immunohistochemical staining. After dewaxing, dehydration and rinsing, the specimens were stained with hematoxylin for 5 min, soaked in water for 1 min, differentiated with 1% hydrochloric acid ethanol for 30 s, soaked in water for 15 min, then stained with 0.5% eosin for 3 min, soaked in distilled water and finally sealed for observation after dehydration. Light microscopy was used to evaluate the histological changes of the medial meniscus sections, which were quantified with a scoring system [18,19] (Table 1). For the immunohistochemistry, the sections were dewaxed according to the previous steps. Next, they were treated with pancreatin at 37 °C for 30 min for antigen retrieval and incubated overnight. Then they were rinsed with phosphate-buffered saline (PBS) and incubated with 1:300 rabbit polyclonal antibody against TIMP-1, MMP-1 or MMP-13 at 4 °C overnight. After another rinse, they were incubated with rabbit IgG at 37 °C for 15 min.
Finally, diaminobenzidine tetrachloride (DAB) was applied for color development, and the coverslips were counterstained with hematoxylin. A Motic Images System was used to evaluate the expression intensity of TIMP-1, MMP-1 and MMP-13 in the specimens. Areas of each specimen were examined for cell counting using light microscopy (six non-overlapping meniscus sections and at least 10 non-overlapping fields per side of each rabbit). The results were expressed as a positive cell rate (PCR, PCR = positive staining cell number/total cell number × 100%).

Statistical analysis
SPSS (version 16.0 for Windows; SPSS Inc., Chicago, IL, U.S.A.) was used for data management and statistical analysis. The data were expressed as the mean ± standard deviation (SD). Paired t-tests were used to evaluate the paired data. The SNK-q test (Student-Newman-Keuls test) was used for pairwise comparisons in cases where the data met the homogeneity of variance assumption, whereas Dunnett's T3 test was used where the data did not meet the homogeneity of variance assumption. The Nemenyi rank-sum test and Wilcoxon rank-sum test were used for the nonparametric values. Differences with P<0.05 were considered statistically significant.

Morphological changes
Compared with the control sides, the medial meniscus on the PCL rupture sides presented obvious degenerative characteristics (Table 2), indicating that PCL rupture may act as a progressive degenerative factor for the medial meniscus.

Histological changes
The H&E staining of the medial meniscus on the PCL rupture sides revealed time-dependent abnormalities and deterioration, whereas the collagen fibers and chondrocytes (meniscal cells) were morphologically normal on the control sides. On the control sides, we observed oval or fusiform chondrocytes with large, round nuclei and orderly arranged, thick collagen fibers (Figure 1A). On the PCL rupture side, we found an intact surface structure and thick, loose, orderly arranged collagen fibers at 4 weeks after PCL rupture (Figure 1B); uneven staining, a rough surface, loose tissue, and chondrocytes that were disorderly arranged and decreased in number at 8 weeks after PCL rupture (Figure 1C); a rough, dented surface, unevenly thick and disorderly arranged collagen fibers, and cartilage cells significantly reduced in number and disorderly arranged at 16 weeks after PCL rupture (Figure 1D); and a fractured surface, loose tissue, unevenly thick and disorderly arranged collagen fibers, and rarely seen meniscus cells at 24 weeks after PCL rupture (Figure 1E). For the control sides, the histological scores (HS) of the medial meniscus were in the normal range, whereas the HS were much higher on the PCL rupture sides, with statistically significant differences at each time point. These results indicate that PCL rupture initiates progressive degradation of the medial meniscus (Figure 1F).

Increased expression of TIMP-1 in the medial meniscus after PCL rupture
After PCL rupture, more TIMP-1 positive-staining cells were observed in the medial meniscus at each time point. On the control side, TIMP-1 positive staining was found in the cytoplasm of only a few cells (Figure 2A).
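The positive cell rate defined above and the side-matched comparison can be illustrated with a short sketch. The per-rabbit values below are hypothetical placeholders rather than data from this study; the paired t-test simply mirrors the analysis described for paired (experimental vs. control side) data.

```python
# Minimal sketch: positive cell rate (PCR) and a paired comparison between
# the PCL-rupture side and the control side. Values are illustrative only.
from scipy import stats

def positive_cell_rate(positive_cells: int, total_cells: int) -> float:
    """PCR = positive staining cell number / total cell number x 100%."""
    return 100.0 * positive_cells / total_cells

# Hypothetical per-rabbit PCR values (%) for one marker at one time point.
pcl_side     = [42.1, 38.7, 45.3, 40.9, 44.0, 39.5]
control_side = [12.4, 10.8, 14.1, 11.6, 13.2, 12.0]

# Paired t-test, as used in the study for side-matched data.
t_stat, p_value = stats.ttest_rel(pcl_side, control_side)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
if p_value < 0.05:
    print("Difference between sides is statistically significant (P < 0.05).")
```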
We observed a limited number of positive-staining cells at 4 weeks after PCL rupture (Figure 2B); a higher proportion of positive-staining cells at 8 weeks after PCL rupture (Figure 2C); weak staining in the surface, matrix and cytoplasm at 16 weeks after PCL rupture (Figure 2D); and matrix degradation, partly fractured collagen fibers, weaker staining and fewer positive-staining cells at 24 weeks after PCL rupture (Figure 2E). The PCR of TIMP-1 on the PCL rupture side was significantly higher than that on the control side at every time point (P<0.05, Figure 2F). There was a very low level of TIMP-1 on the control side. On the rupture side, it first increased, reaching its peak at 8 weeks after PCL rupture, and then decreased in the later stages after PCL rupture.

Increased expression of MMP-1 in the medial meniscus after PCL rupture
After PCL rupture, more MMP-1 positive-staining cells were observed in the medial meniscus at each time point. Only a few MMP-1 positive-staining cells were found on the control side (Figure 3A). We observed several oval positive cells at 4 weeks after PCL rupture (Figure 3B); a higher proportion of positive-staining cells, with strong positive staining in the cytoplasm, at 8 weeks after PCL rupture (Figure 3C); and loose collagen fiber structure, some deformed cells and strong positive staining at both 16 and 24 weeks after PCL rupture (Figure 3D,E). The PCR of MMP-1 on the PCL rupture side was significantly higher than that on the control side at every time point (P<0.05, Figure 3F). MMP-1 was rarely detected on the control sides, whereas it showed a time-dependent elevation in the cytoplasm and matrix on the PCL rupture sides of the medial meniscus. MMP-1 expression increased quickly in the early stage and then maintained a relatively high level from 16 weeks after PCL rupture onward.

Increased expression of MMP-13 in the medial meniscus after PCL rupture
After PCL rupture, more MMP-13 positive-staining cells were observed in the medial meniscus at each time point. Only a few MMP-13 positive-staining cells were found on the control side (Figure 4A). We observed a small number of positive-staining cells at 4 weeks after PCL rupture (Figure 4B); more positive-staining cells, with strong staining, at 8 weeks after PCL rupture (Figure 4C); a high proportion of strongly staining cells at 16 weeks after PCL rupture (Figure 4D); and a decreased number of positive-staining cells, although still higher than on the control side, at 24 weeks after PCL rupture (Figure 4E). The PCR of MMP-13 on the PCL rupture side was significantly higher than that on the control side at every time point (P<0.05, Figure 4F). There was only a small amount of MMP-13 expression on the control sides; on the rupture sides, expression initially increased in the cytoplasm and matrix and then fell to a low level in the later stages after PCL rupture.

Discussion
Our study design built on previous research on articular cartilage degeneration secondary to PCL rupture in rabbit knees. These studies also exposed and transected the ligament using a medial patellar incision [20]. Wang and Ao [20] found mild, moderate and severe articular cartilage damage at 6, 12 and 24 weeks after PCL rupture. Our study examined 4, 8, 16 and 24 weeks after PCL rupture as time points for medial meniscus observation; consistent with Wang's results, our study found time-dependent, progressive meniscus tissue degeneration after PCL rupture. No obvious signs of degradation were observed at 4 weeks after PCL rupture, but progressive degradation was found at each subsequent time point.
These morphological changes suggest that meniscus fibrous cartilage degradation concurrently after PCL rupture. Chondrocyte is the exclusive cell type in cartilage and is approximately 5% of the total cartilage volume; the remainder is primarily extracellular matrix (ECM), which provides tension and strength to cartilage [21]. The main components of ECM are collageneous materials and aggrecans, the expression of which is mediated by MMPs [8]. Different subtypes of MMPs play different roles in various stages of cartilage degradation. As interstitial collagenases, both MMP-1 and MMP-13 can specifically decompose collagen types I, II and III, and these two types of enzymes are involved in the metabolic changes in the collagen that makes up the cartilage matrix. MMP-1, also called collagenase-1, is secreted by the cells in the lining layer of the synoviocytes and was the first MMP to be identified and purified from human fibroblasts. During the development of OA, the expression of MMP-1 is profoundly elevated in response to interleukin (IL)-1 and tumor necrosis factor-α (TNF-α) stimulation; it decomposes collagen in the ECM and causes cartilage damage, playing an important role in the disease process [22]. Collagenase-3, which is MMP-13, has the unique ability to cleave the triple helix of collagen, and participates in ECM reconstruction during the early stage of cartilage damage; it mainly targets type II collagen degradation in cartilage and is able to attend the catabolic activities induced by other members of the MMPs family. MMP-13 is widely accepted as a biochemical marker of collagenase for cartilage degeneration [23]. MMP-13 presented a high expression level associated with the expression of cartilage matrix in the early stages of OA [24]. The mechanism may be related to the dysfunction of MMP-13 receptors [25]. TIPMs are the key regulators of the inhibition of MMPs activity. TIMP-1 is the most studied TIPM; it not only inhibits the activity of activated MMPs, but also prevents and delays enzyme prototype MMPs from turning into active types [26]. Articular cartilage damage in OA is the combined effect of MMPs, cytokines and other factors. In healthy physical subjects, there is a dynamic balance between MMPs and TIMPs in the cartilage, which maintains the integrity of the cartilage structure [9]. In the early stages of OA, the human body secretes equal amounts of TIMPs and MMPs in the cartilage as part of the self-healing mechanism. Unless the abnormal load is removed or an even greater burden is added, chondrocyte degeneration becomes more apparent, resulting in the release of large amounts of catabolic cytokines and a high expression of MMPs. In the late stages of OA, a decreasing volume of catabolic cytokines and lower MMPs expression levels have been observed in articular cartilage, due to extensive damage and apoptosis of chondrocyte, but the regression continues. The imbalance between the decomposition and synthesis of the ECM reduces the cartilage matrix and destroys the structure of the cartilage; there is no positive correlation between the decrease in the matrix and cartilage degradation. The former induces apoptosis of the chondrocytes, and the dead chondrocyte cannot reproduce the cartilage matrix. As this vicious circle proceeds, the degeneration of cartilage becomes increasingly profound [27]. Previous studies of MMPs and TIMPs have focused on the cartilage, synovium and synovial fluid; little is known about their role in the meniscus. 
Our study examined the association between the expression of TIMP-1 and MMPs and medial meniscus degeneration in a PCL rupture model. We found significantly higher MMP-1 and MMP-13 expression in the chondrocytes of the medial meniscus on the PCL-transected side than in the control group at 8 weeks after PCL rupture, and the MMP-1 expression remained high thereafter. In contrast, MMP-13 expression subsequently fell to a markedly lower level, which is consistent with other studies [28,29]. These results suggest that MMP-13 mainly functions in the early stages of OA, whereas MMP-1 plays a role throughout the entire course of the disease. No consensus has been reached about TIMP-1 expression in the cartilage and synovium during OA. The majority of scholars agree that there is mild elevation or no change in expression, but Tanaka et al. [30] have argued for reduced expression. Our study found that TIMP-1 expression initially increases after surgery but decreases during later stages, indicating that TIMP-1 has a strong role in repairing damaged cartilage at the beginning of OA but becomes less important at later stages, perhaps due to the regulation of other mediators. The exact mechanism needs further exploration. In conclusion, there are obvious time-dependent histological degenerative changes in the medial meniscus after PCL rupture. The high expression of MMP-1, MMP-13 and TIMP-1 in the cartilage may be responsible for the degeneration of the medial meniscus, and PCL rupture may trigger meniscus degradation and ultimately OA.
A Rare Case of Bilateral Agenesis of Central Lower Incisors Associated With Upper Impacted Canine- A Case Report This case of a female patient, 14 yr old with association of the two anomalies, which we came across with in 2014, is rarely met in the specialty practice. The impacted canines are part of the group of dental anomalies of position, while the agenesis is part of the group of dental number anomalies. The orthodontic treatment in the two arches has to be differentiated, the therapeutic objectives being, also different in the two arches. Introduction The development of the dental-maxillary device represents a long and complex process, in which can appear different abnormalities from normal, variable regarding the way they are produced, the manifestations, the moment of appearance and the consequences (1). The dental anomalies represent a special group of the dental-maxillary anomalies, in this category being included affection with a common characteristic: the dominant modification is that of the dental system, while the modification of the bone is discreet, sometimes hardly perceivable, and sometimes secondary to the dental disturbances (2,3). The dental anomalies can appear as freestanding, namely isolated dental anomalies, and in the orthodontic syndromes (4). Agenesis of bilateral (both right and left) mandibular central incisors is not well documented and literature shows paucity of data pertaining to this anomaly. The first report of congenitally missing two mandibular incisors was earlier (5). The prevalence of agenesia in European populations is estimated at 0.08% (6). Females have shown higher predilection then males (7). Certain discrete malpositions of the human canine tooth and agenesis of at least one tooth are abnormalities known to occur together, one of the situations being the association between agenesia with palatal displaced canine (8). Depending on research, it is estimated that on average there is a 1.6% incidence of maxillary impacted canines (9). Impactions are twice as common in females (1.17%) as in males (0.51%) (9). In patients who present with impacted maxillary canines, it is estimated that 8% of these are bilateral (9). Reasons for impactions can be varied and are categorized as both localized and generalized. The most common reasons for canine impaction are usually localized and are the result of any one or combination of the following factors: tooth size/arch length discrepancies, prolonged retention, or early loss of the deciduous canine, abnormal position of the tooth bud, the presence of an alveolar cleft, ankylosis, cystic or neoplastic formation, dilacerations of the root, iatrogenic origin, and idiopathic condition. Irradiation, febrile diseases, and endocrine deficiencies are some of the general causes (9). The aim of the present article is to report a rarely case of bilateral agenesis of central lower incisors, associated with upper impacted canine. The documentation of such case reports is necessary due to its rarity, to provide a review to minimize the clinicians challenge in diagnosing such cases and thus helpful in providing a multidisciplinary approach in treating the patient. Possibilities of treatment in this type of dental anomalies are multiple, from orthodontic, through prosthetic, till implants, depending on many factors, age of patient being the most important in our opinion. 
This kind of anomalies does not have typical treatment, the choice of choosing orthodontic, prosthetic or implant treatment relies only on clinician`s decision in order to obtain the best results possible. Case presentation The female patient, 14 yr old in 2014, comes for an orthodontic consultation, being brought by the mother, displeased with the physiognomic aspect of her teenage daughter. At the clinical examination are observed the following ( Fig. 1): Fig. 1: Initial clinical aspect -on the upper arch is found the persistence of both temporary canines and of the second temporary molars, over the physiological limit of replacing. -on the inferior arch is found the presence on the arch of both temporary inferior central incisors, more over the physiological limit of replacing. Orthopantomography (Fig. 2) and CBCT (Fig. 3) underlines: -intra-maxillary presence of germs 1.7, 1.5, 1.3, 2.3, 2.5, 2.7 -intra-maxillary presence of germs 3.8 and 4.8, in process of mineralization -intra-maxillary absence of germs 3.1 and 4.1 Following the clinical examination, carefully studied, the analysis of the study models, the beginning photos, orthopantomography (OPT) and CBCT, we gave the diagnosis of bilateral agenesis of inferior central incisors and the diagnosis of impacted right upper canine. The diagnosis of bilateral agenesis of inferior central incisors was given based on the radiologic examination: the intra-maxillary absence of the inferior central incisors germs, more over the physiological limit of the replacing period (10)(11)(12). The diagnosis of impacted right upper canine was given based on the radiologic examination: the intra-maxillary presence of the germ of 1.3, with the root completely formed and the apex closed, 2 years over the maximum age of the physiological replacing (13)(14)(15). The treatment of the two aches was differentiated, the therapeutic objectives being different in the two arches, thus: -In the inferior arch, after the extraction of the two temporary inferior central incisors, will be chosen the closing of the distance by physiological mesializations (16)(17)(18)(19). -In the upper arch, after the extraction of all the temporary teeth persistent on the arch over the physiological limit of replacing, we will wait a period of a few months for the spontaneous eruption of the definitive teeth on the arch and in case of 1.3 we will choose its surgical exposing and bringing it on the arch by the slow tractioning, to a fixed poly-aggregate orthodontic appliance (20,21). Thus were performed dental extractions of 5.5, 5.3, 6.3 and 6.5 in the upper arch and of 7.1 and 8.1 in the inferior arch. It followed then the applying of a fixed poly-aggregate bimaxillary metallic appliance (Fig. 4-6). Discussions The exact etiology of congenital agenesis of both central incisors is unknown, several factors like trauma, radiations, infection, metabolic disorders and idiopathic are the possible etiologic factors (22). Newman has given four main theories mainly for the cause of agenesis of incisors (23). Heredity or familial distribution is the primary cause. Second, anomalies in the development of the mandibular symphisys may affect the dental tissue forming the tooth buds of the lower incisors (24). Third, a reduction of the dentition regarded as nature's attempt to fit the shortened dental arches (an expression of the evolutionary trend) (25) and finally, localized inflammation or infec-tions in the jaw and disturbance of the endocrine system destroying the tooth buds (5,7). 
Genes MSX1, TGFA and PAX9 interaction sometimes play a role in human tooth agenesia (26).Mandibular incisor agenesis has a large effect on mandibular symphysis growth and morphology. Buschang demonstrated that, vertical and horizontal growth changes during childhood and puberty were most pronounced in the upper half of the mandibular symphysis, resulting in an increase in the height of the mandibular body (27). Hence patients with absence of mandibular both central incisors, exhibit significantly smaller mandibular symphysis area than the normal patients. They have also reported that, the growth of alveolar bone is also associated with continuous eruption of the dentition (27). Endo M. have concluded from their study that, before planning/implementing orthodontic treatment on a patient with congenital missing incisors, some factors like retroclination of alveolar bone and reduced mandibular alveolar bone area should be taken into consideration, as these may affect the treatment outcome (28). Some orthodontists say that congenital absence of both mandibular central incisor is advantageous, as the extraction of mandibular central incisors is sometimes considered as the treatment of choice in crowded class I malocclusion, especially when a preexisting tooth-size discrepancy (severe mandibular excess) prevents the achievement of an acceptable occlusion (29,30). The other consequence of agenesis of both mandibular incisors is disturbance in tongue-lip pressure balance and lack of lingual support. Severe malocclusion usually class II Div I malocclusion is also seen with severe anterior deep bite and absence of dental midline or sometimes wide spacing in the anterior region exists resulting in unaesthetic appearance for a child. The other problem encountered with congenital absence of incisors is the difficulty in identification of teeth. Because of the existing space resulting from missing teeth, the adjacent teeth move to this space, leading to difficulty in identification of incisors. Thus, for correct diagnosis of teeth, radiographic examination is mandatory in order to see the exact position of the root. Conclusion The association of the two anomalies is rarely met in specialty practice. The impacted canines are part of the group of position dental anomalies, while agenesis is part of the number dental anomalies. The treatment is differentiated, on the two arches, the therapeutic objective being different in the two arches, thus: -In the inferior arch we will choose the closing of the distance by physiological mesializations. -In the upper arch we will choose the surgical exposing of 1.3 and bringing it on the arch by the slow tractioning, to a fixed poly-aggregate orthodontic appliance. Due to the age of patient (only 14 years of age), we decided that orthodontic treatment is the proper choice in this particular case. Ethical considerations Ethical issues (Including plagiarism, informed consent, misconduct, data fabrication and/or falsification, double publication and/or submission, redundancy, etc.) have been completely observed by the authors.
Study of statistical damage constitutive model of layered composite rock under triaxial compression The layered composite rock was subjected to triaxial compression tests under constant confining pressure and the stress–strain curves under different confining pressures were obtained. Based on the continuous damage theory and statistical strength theory, it is assumed that the strength of rock microelements obeys Weibull distribution by taking the defects such as random micro-cracks in the rock into account. The statistical constitutive model of layered composite rock with damage correction is established by taking the axial strain of rock as a random distribution variable of microelement strength. The model parameters were determined by the curve fitting method and referring to some test parameters. By comparing the experimental data and the constitutive model curve, the rationality and feasibility of the model are verified. Introduction With the continuous development of economic construction and national defence construction, and the continuous expansion of underground space development, the study of deep rock mechanics problems has been closely linked to my country's economic construction and national defence construction. Energy mining, water conservancy, hydropower, nuclear waste treatment, mine excavation and other projects all involve deep rock mechanics problems [1]. With the continuous enlargement of the engineering depth, the geological conditions have become more complex, and a series of engineering hazards such as severe roadway deformation and instability, rock bursts and surges of low pressure have become more and more serious. In addition, under the conditions of modern high-tech warfare and high-precision reconnaissance technology, precision-guided weapons and small ground-penetrating nuclear weapons continue to develop. Severe challenges are presented to the construction and survival of underground protection projects. The status and role of deep underground protection projects have become more important and prominent. Layered composite rock is one of the most common rock masses in various types of geotechnical engineering. Because composite rock is a natural material composed of many different properties, different thicknesses, different components and different combinations in a certain order, its characteristics are significantly different from that of a single rock [2,3]. Various rock-related projects are affected by the strength, deformation and destruction of composite rocks, which often cause instability disasters such as tunnel collapse, mine pressure manifestation, edge wave slip, ground subsidence and building cracking (influenced by rock mass foundation). In recent years, important research results have been obtained by using damage mechanics to study the properties of rock materials and explore the laws of deformation and failure of rocks. Kachanov [4] first introduced the concept of damage, and then he proposed the concept of the 'damage factor'. Later scholar Lemaitre [5] combined various aspects of mechanical knowledge (such as effective stress, strain and continuum mechanics) and established 'damage mechanics' based on the principle of irreversible thermodynamics. 
Bazant [6] proposed distributed fracture mechanics and discussed that geotechnical materials have some special characteristics, such as the scale effect of mesomechanical model failure, strain localisation or instability, and the sensitivity of the finite element network caused by distributed fractures [7][8][9][10]. Wengui and Sheng [11] started from the Mohr-Coulomb criterion, based on the representation method of the microelement strength of the rock that obeys the Weibull distribution, and established a damage-softening statistical constitutive model for the whole process of rock deformation and fracture. A large number of experiments have verified its rationality and correctness, and it has been applied in engineering practice. However, the Mohr-Coulomb criterion does not consider the effect of the intermediate principal stress on the strength of the microelement. Tao et al. [12] assumed that the strength of rock microelements obeyed a normal distribution and proposed the influence factors of the relationship between damaged materials and micro-defects that change due to material damage. The damage mechanics theory is used to analyse the change of rock strength with confining pressure, and a rock damage mechanics model under the new damage definition is established. The constitutive relation of rocks under low confining pressures is well described by this model, but it is not accurate for rocks under high confining pressures. Xiaofeng [13] proposed a new attenuation function -Harris function on the basis of previous studies. Assuming that the probability density of rock microelement strength obeys a new distribution function -the improved Harris function, a new constitutive model is established based on this, which better reflects the stress-strain relationship and the whole failure process of the rock under the three-dimensional stress state. The theoretical curve of the model has a high coincidence with the experimental curve at the stage before rock failure, but the theoretical curve of the model after rock failure does not have a good coincidence with the experimental curve. Due to the fact that the study of random damage of the under layered composite rock under load conditions is relatively rare, an effective statistical damage model is rarely proposed. Based on the Weibull distribution characteristic of rock microelement strength, the damage variable is modified and the damage variable correction coefficient is introduced in this study. A new statistical constitutive model of layered composite rock damage is established by defining the random distribution variable of the strength of rock microelement more concisely. MATLAB software was used to fit the experimental data with the constitutive model to determine the relevant model parameters and verify the correctness of the constitutive model of layered composite rock damage. Finally, the influence of relevant parameters on the accuracy of the model is analysed. Establishment of damage constitutive model of layered composite rock Assuming that no defects in the rock under ideal conditions, the constitutive relationship of layered composite rock under three-dimensional stress are given as: where σ 1 , σ 2 and σ 3 are the three principal stresses of the rock; ε 1 , ε 2 and ε 3 the principal strain of the rock in the three principal stress directions; G is the rock shear modulus; and λ is the Lame constant. 
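From the definitions given above (principal stresses and strains, shear modulus G and Lamé constant λ), the intended relation is presumably the standard Lamé form of generalized Hooke's law, reproduced here for reference:

$$\sigma_i = \lambda\,(\varepsilon_1 + \varepsilon_2 + \varepsilon_3) + 2G\,\varepsilon_i, \qquad i = 1, 2, 3.$$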
Under the condition of constant confining pressure, the axial stress-strain relationship of the non-destructive rock material in triaxial compression is: where E is the rock elastic modulus; and ν is the Poisson's ratio. Due to the distribution of a variety of micro-cracks and structural planes in the rock, the rock may have many weak links with different strengths, and the strength of each element is not the same. Assuming the damage of rock material in the loading process is a continuous process, the following bases are made: (1) The rock material is isotropic in the macroscopic view; (2) Before the failure of the rock element, it obeys Hooke's law. The element has linear elastic properties and loses its bearing capacity after failure; (3) The intensity of each microelement F obeys the Weibull distribution, and its probability density function is: where F is the strength of the rock's infinitesimal element; m and F 0 are the Weibull distribution parameters, which reflect the mechanical properties of the rock. Assuming that the number of damaged microelements under a certain load is n, and the total number of microelements is N, the damage variable D can be defined as the ratio of the number of damaged microelements to the total. Then the statistical damage variable can be expressed as: The value of D reflects the degree of damage inside the rock material. When D = 0, the rock is in a nondestructive state, and when D = 1, the rock is in a completely damaged state. When reaching a certain load level F, the total number of damaged microelements can be expressed as: Thus the calculated damage variable is: Under uniaxial compression, from the continuous damage theory, we can get: It can be deduced that the basic relationship of the damaged rock under triaxial compression is: Heping [14] presented that the damage variable under three-dimensional conditions is the ratio of the damage equivalent area in a representative volume element to the total area of the section. If the rock is isotropic, the ratio has nothing to do with the section orientation, and the damage degree of each stress component is identical, which is the same damage. However, this is only satisfied by ideal rock materials. Therefore, it is not appropriate to assume that the damage situation satisfies the Weibull distribution. This paper attempts to introduce a correction factor δ ∈ (0, 1) to modify the damage variable D so that it can reflect the residual strength and the characteristics of the rock after rupture. It is corrected by multiplying the correction coefficient and the damage variable D so that the resulting rock damage statistical constitutive model can reflect the residual strength and the characteristics of the anisotropy of the rock material. Therefore, the obtained model is closer to the actual situation and can better simulate the stress-strain characteristics of the rock after reaching the peak point. The new statistical constitutive model of rock with the introduction of damage variable correction coefficient D is given as: The current studies on rock damage constitutive models have introduced different rock strength criteria (Mohr-Coulomb criterion [15], Hoek-Brown criterion [16,17] and Druckre-Prager criterion [18], etc.) as the rock microelement strength random distribution variables. It is found that if the rock yield criterion is used as the random distribution variable of the microelement strength, the derived expression is more complicated by researching. 
However, it is much simpler to take the axial strain of the rock as the random distribution variable of the microelement strength. Therefore, this paper adopts this simplified method and takes the axial strain of the rock as the random distribution variable of the microelement strength. The comprehensive elastic modulus E of the layered composite rock is calculated according to the equivalent elastic modulus of the layered composite rock in the literature [19]. As shown in Figure 1, the equivalent elastic modulus of the layered composite rock is obtained from the layer heights and layer moduli; in the formula, L1, L2 and L3 are the heights of the upper, middle and lower rock layers, respectively, and E1, E2 and E3 are the elastic moduli of the upper, middle and lower rock layers, respectively, where L1 = L3 = 4.5 L2. Therefore, the constitutive model of the layered composite rock in this paper can be obtained.

Model test verification
In order to verify the derived constitutive model, the test samples were cylindrical, 100 mm in length, without obvious cracks and with uniform texture. The materials are three kinds of base rock: blue sandstone, red sandstone and white sandstone, forming the upper, middle and lower layers of the sample (see Figure 2). A microcomputer-controlled rock triaxial test system (see Figure 3) was used for the triaxial compression tests, and the model parameters listed in Table 1 were determined using the curve fitting method and MATLAB software. Finally, the statistical constitutive model of rock damage is obtained, and the model curve is fitted to the experimental data (see Figure 4). As shown in Figure 4, by comparing the model curves with the experimental data, it can be seen that the derived statistical damage constitutive model curves of the layered composite rock reflect the stress-strain behaviour of the rock well under different confining pressures. The experimental data are closely consistent with the model curve. The agreement is good not only in the pre-peak stage but also in the post-peak failure stage, owing to the introduction of the correction factor δ. The experiments also show that as the confining pressure increases, the brittleness of the rock decreases and the corresponding ductility increases. The rock statistical damage constitutive model derived in this study has a high fitting accuracy for the post-peak stage of the rock, which is superior to the traditional linear fitting method. The main fitting parameters in this paper are E, ν, ε0, m and δ, among which E and ν can be obtained from experiments, while ε0, m and δ need to be fitted. In order to study the influence of these parameters on the stress-strain curve and damage variables, a confining pressure of 10 MPa is selected as the baseline case. The influence of the parameters is as follows. As the two parameters of the Weibull distribution, ε0 and m reflect the physical and mechanical properties of the rock. When the other parameters are fixed and ε0 is increased, the stress-strain curve changes as shown in Figure 5: the elastic phase is extended, the peak point shifts to the right and the peak strength of the rock also increases. It can be concluded that ε0 mainly reflects the change in peak strength. Therefore, the larger ε0 is, the stronger the ability of the layered composite rock to resist deformation and failure.
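A minimal numerical sketch of the model described above follows. The closed forms are assumptions adopted for illustration rather than equations quoted from this paper: the three layers are treated as springs in series (equal stress) for the equivalent modulus, and the corrected constitutive model is taken in the commonly used single-variable form σ1 = E ε1 (1 − δD) + 2ν σ3, with D = 1 − exp(−(ε1/ε0)^m) and the axial strain as the Weibull variable; all numerical values are hypothetical.

```python
# Minimal sketch of a statistical damage constitutive model for a three-layer
# sample. Assumed forms (not quoted from the paper): series (equal-stress)
# layers for the equivalent modulus, and
#   sigma1 = E * eps1 * (1 - delta * D) + 2 * nu * sigma3,
#   D = 1 - exp(-(eps1 / eps0) ** m)   (Weibull, axial strain as variable).
import numpy as np

def equivalent_modulus(lengths, moduli):
    """Series (equal-stress) equivalent modulus of a layered sample."""
    total = sum(lengths)
    return total / sum(L / E for L, E in zip(lengths, moduli))

def damage(eps, eps0, m, delta):
    """Corrected Weibull damage variable delta * D."""
    return delta * (1.0 - np.exp(-(eps / eps0) ** m))

def axial_stress(eps, E, nu, sigma3, eps0, m, delta):
    """Axial stress-strain curve under constant confining pressure sigma3."""
    return E * eps * (1.0 - damage(eps, eps0, m, delta)) + 2.0 * nu * sigma3

# Hypothetical layer geometry (L1 = L3 = 4.5 * L2) and moduli, in mm and MPa.
E_eq = equivalent_modulus([45.0, 10.0, 45.0], [20.0e3, 8.0e3, 15.0e3])
eps = np.linspace(0.0, 0.02, 200)
sigma = axial_stress(eps, E=E_eq, nu=0.25, sigma3=10.0,
                     eps0=0.008, m=3.0, delta=0.9)
print(f"E_eq = {E_eq:.0f} MPa, peak stress = {sigma.max():.1f} MPa")
```

Setting δ = 1 recovers the uncorrected model, while δ < 1 caps the damage below unity and leaves a residual post-peak stress, which is the behaviour the correction factor is intended to capture.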
When the other parameters are fixed and m is changed, the stress-strain curve changes as shown in Figure 6. For the m parameter, it reflects the concentration of the strength distribution of the rock element. If m is increased, the strength distribution of microelements is more concentrated and the brittleness of the material is higher. When m is increased, the post-peak curve becomes steeper and the brittleness of the material increases. Conversely, if the value of m is decreased, the ductility of the material increases. It can be concluded that m reflects the brittleness of the rock. It can be seen from Figure 7 that when δ is set to 1, when the rock damage correction coefficient is not introduced, the model curve is obviously different from the actual situation, and it is difficult to reflect the stress-strain situation after the rock is broken. It shows that the introduction of the correction coefficient δ is a good representation of the failure of the layered composite rock after the peak point, and it optimises the shortcomings of the constitutive model in the previous research, which has good practical significance. It can also be seen from Figure 7 that the different values of the correction coefficient δ have little effect on the first half of the model curve, so the effect before the peak point of the rock is small, and the effect after the peak point of the curve is greater. In actual engineering applications, the value of δ will vary with lithology, confining pressure and initial defects. Therefore, the accurate value of δ is very important. It is a good way to use MATLAB software to perform curve fitting to determine the value of δ . Fig. 5 The influence of the parameter ε 0 on the constitutive model. Fig. 6 The influence of the parameter m on the constitutive model. Fig. 7 The influence of the parameter δ on the constitutive model. Conclusions In this study, based on the theory of continuous damage and the theory of statistical strength, the damage evolution equation of triaxial compression fracture of layered composite rocks under constant confining pressure is derived from the perspective of Weibull distribution of the strength of microelements of rocks, and a three-dimensional statistical damage constitutive model suitable for layered composite rocks is established. Through model validation and parameter discussion, the following conclusions can be drawn: (1) Through experimental verification, the established model curve and the measured curve have a good consistency. The results show that the model is reasonable, and the model has fewer calculation parameters, and so the method that use functions of several variables to find extreme values is abandoned. The model parameters are determined by using MATLAB software and the curve fitting method with higher accuracy, which can better describe the constitutive relation of layered composite rocks under the action of three-dimensional stress. It provides theoretical and technical support for rock load damage calculation, anisotropy study, stability analysis of surrounding rock and rock excavation. (2) The physical significance of Weibull distribution parameters ε 0 and m is studied, and it is concluded that the parameter ε 0 mainly reflects the change of peak strength of layered composite rocks. The larger the parameter ε 0 is, the higher the yield strength and peak strength of layered composite rocks are, and the stronger the resistance to deformation and failure is. 
The parameter m can reflect the brittleness of layered composite rock materials. When m is increased, the brittleness of the material increases and the yield strength also increases. (3) In this paper, the rock damage correction factor δ is introduced to further improve the fitting accuracy of the model curve in the post-peak stage of the rock, which well reflects the failure characteristics of the layered composite rock under the three-dimensional stress state. Through the influence of the different values of δ under the same conditions on the fitting accuracy of the model curve and the experimental data, it shows the necessity of introducing δ and provides a reference for theoretical research and practical engineering, which has good practical significance.
Results of CO 2 laser-assisted sclerotomy surgery (CLASS procedures) in eyes with primary open-angle glaucoma Purpose: This study aimed to examine the effectiveness of CO 2 laser-assisted sclerectomy surgery (CLASS) in eyes with primary open-angle glaucoma (POAG) showing progression in spite of maximal local antiglaucomatous therapy. Materials and methods: Patients with progressive POAG received CLASS treatment. We performed CLASS on 15 eyes (eight males and seven females). The primary endpoint was the change in the intraocular pressure (IOP), and additionally best spectacle-corrected visual acuity (BSCVA), C/D ratio (cup-to-disc), as well as use of antiglaucomatous drops were also investigated. Following the preoperative assessment, measurements were performed at 6-month follow-up. Results: The average preoperative IOP was 26.13 ± 6.79 mmHg that dropped to 9.57 ± 4.09 mmHg at 1 day. IOP was stable at 1 month, 3 months, and 6 months. The BSCVA decreased to the 1-day and 1-week follow-up but returned to its original value to the 1-month check-up. Preoperatively, all patients were on maximal antiglaucoma drop therapy, after CLASS none of the patients needed antiglaucomatous treatment at 1 month. However, at 3 months, one of them needed antiglaucoma drops. C/D ratio showed non-signi fi cant changes. Conclusions: CLASS procedure was found to be effective in decreasing IOP in POAG patients whose IOP was not compensated with maximal antiglaucomatous local therapy; patients needed signi fi cantly less local therapy following the CO 2 laser surgery. INTRODUCTION Primary open-angle glaucoma (POAG) is a progressive, chronic disease, which is characterised by gradual increase of the intraocular pressure (IOP) and damage of the small arterioles of the optic nerve, with a consecutive imbalance between IOP and circulation of the optic nerve head. Consequently, the ophthalmologist may observe cupping of the optic nerve head together with visual field defect. Undiagnosed POAG is relatively frequent in the Caucasian population and the disease may remain hidden until significant visual field defect. The problem is that this defect is irreversible; therefore, early diagnosis and efficient treatments have utmost importance halting the visual field deterioration and protecting visual acuity. The cause of POAG is multifactorial with family history certainly playing an important role in it. Usually, the resistance of the trabecular meshwork is increased; therefore, the aqueous humour produced by the ciliary processes cannot leave the eye properly through the trabecular meshwork and consequently the IOP increases. The increase is usually gradual and affects the two eyes differently, so sometimes the patient does not notice it, because they do not test their eyes (the visual acuity and visual field) with separate eyes. Conservative therapy means using eye drops in order to decrease the IOP. Several eyedrops are available in Europe, but in some patients conservative therapy cannot decrease IOP below abnormal values (usually below 21 mmHg) even with maximal therapy. For these patients, surgery is needed. Before antiglaucomatous eyedrops, trabeculectomy was the most frequently performed IOP-lowering surgical intervention, although there were several side effects, including postoperative hypotony, choroidal detachment, suprachoroidal bleeding, endophthalmitis, malignant glaucoma, and cataract [1]. Therefore, non-penetrating glaucoma surgery came into focus a decade ago [2]. 
Performing non-penetrating surgery manually is challenging and difficult, and sometimes around the last step of the surgery, the thin sclera perforates and the surgery is transformed into a penetrating procedure with the all possible side effects. Manual deep sclerectomy, according to the data published in literature [2][3][4][5], results in less reduction of IOP, and similarly the side effects are also significantly milder. The reason is that the trabeculo-Descemet membrane stays intact, so the decrease of IOP is more gradual in these patients. Because manual non-penetrating glaucoma surgery is difficult, CO 2 laser has been introduced to standardise the method and to increase predictability and safety. The new technique was first introduced and published by Assia et al. [6]. Several technical modifications have been made since the first prototype, and by now the CO 2 laser is available commercially [3,4,7]. In this article, we present the first clinical results with CO 2 laser-assisted sclerectomy surgery (CLASS). MATERIALS AND METHODS CLASS procedure was performed in 15 POAG eyes in the Department of Ophthalmology, Semmelweis University, Budapest, Hungary. Patients with progressive POAG glaucoma received CLASS treatment on the worse eye. Fifteen eyes of 14 patients were enrolled into the study. The average age of the patients was 62.97 ± 11.81 years. Among them, eight were males and seven were females. During preoperative assessment, medical and family histories were recorded, IOP, as well as uncorrected and best spectacle-corrected visual acuity (UCVA and BSCVA) were determined, and C/D ratio was documented. Similar measurements were performed during the follow-up visits at 1 day, 1 week, 1 month, 3 months, and 6 months postoperatively. Surgical method Most of the surgeries were performed under retrobulbar anesthesia, but in four cases, with less compliant patients, general anesthesia was applied. Fornix-based conjunctival flap was created at the beginning of the surgery, then a limbus-based 5 × 5 mm scleral flap was made with a special single-use crescent knife (scleral flap was 1 × 1 mm larger than in normal trabeculectomy procedures). Afterwards, flap preparation of 0.02% mitomycin-C (MMC) soaked using small sponges was tucked between the conjunctiva and the scleral flap for 2 min. Then, the MMC was washed out and the CO 2 laser was applied (IOPtima Ltd., Ramat-Gan, Israel). During the laser procedure, at first, a 4.0 × 1.6 mm aqueous reservoir was created, then around the Schlemmchannel, another 4.0 × 1.0 mm semiperforating scleral pool was created by the laser in a crescent-shaped form. When bubble formation in the aqueous humour occurred, CO 2 laser treatment was finished. The corners of the scleral flap were sutured with 10/0 nylon sutures and then the conjunctiva was also sutured with the 10/0 nylon, especially at the edges. At the end of the surgery, subconjunctival dexamethasone was administered. Patients were examined at 1 day, 1 week, 1 month, 3 months, and 6 months. If it was necessary, the conjunctival flap was checked more frequently (every week during the first postoperative month, then according to the protocol). UCVA and BSCVA were tested, IOP was measured by Goldmann tonometry, C/D ratio was determined, and the necessary antiglaucomatous drops were registered in the patient's file. Intraoperative and postoperative complications were also recorded and later analysed. 
Statistical analysis was performed using the IBM SPSS Statistics for Windows v21 (IBM Corp., Released 2012, Armonk, NY, USA). Preoperative and postoperative IOP values and visual acuity results were compared with paired t-test. RESULTS The average preoperative IOP was 26.13 ± 6.79 mmHg. Following the surgery, the IOP dropped already at Day 1 to 9.57 ± 4.09 mmHg (p < .001). The postoperative IOP was stable and significantly lower following the CLASS procedures (p < .05) than the preoperative values on each followup visit (1 week, 1 month, 3 months, and 6 months; Figure 1). With regard to IOP, there was no significant difference between the result of the 6-month follow-up. BSCVA decreased significantly on the first day following the surgery (p = .017), improved significantly by the 1-week check-up (p = .039), and returned to the original value at the 1-month visit (p < .001). During the follow-up period, there was no significant difference between the preand postoperative BSCVA results 1 week after the surgery (p > .05; Figure 2) The preoperative C/D ratio was 0.84 ± 0.17, with no significant change during the follow-up period (p > .05). Preoperatively, all patients were on antiglaucomatous drop therapy, most of them using two or more types of drops. One patient was consuming oral acetazolamide (250 mg) pills; after CLASS procedure, they did not need it anymore. None of the patients needed antiglaucomatous treatment at 1 month; however, at 3 months, three of them needed one type of drop therapy. During the first month, one of the patients showed an IOP higher than 18 mmHg. Therefore, the decrease of drop therapy was significantly smaller compared to preoperative antiglaucoma drop use. Among the patients, two had inadvertent scleral penetration during surgery; therefore, one case was converted into trabeculectomy and the other had a small injury of the ciliary body. No hyphaema or hypotony was experienced in these two cases, and IOP stayed within normal range during the whole follow-up period. On the other hand, in another patient, a transient choroidal detachment was found during the early postoperative period with lower IOP. This was resolved within 3 weeks and the IOP was in normal range (below 18 mmHg). In three cases, conjunctival vessel dilation and tortuosity were found without IOP rise. DISCUSSION The IOP decreasing effect is smaller in case of nonpenetrating glaucoma surgery, including the CLASS procedure as well, although severe postoperative complications are also much less common compared to filtration procedures [1,5]. Due to the milder IOP decrease, the danger of abrupt visual field loss is also rare, which can be experienced in advanced glaucoma cases following filtrating surgery, such as trabeculectomy. CLASS offers a reasonable solution for the technical problems of manual non-penetrating surgeries as well, because this is a titratable method, the juxtacanalicular trabecular meshwork can be opened without a real perforation [3,4,7]. The aqueous humour appearing in the ablated area prevents further laser tissue destruction. When the underlying sclera is very thin, inadvertent scleral perforation may occur; however, with meticulous scleral flap preparation (not too thick), this complication can be avoided. In one case, we could not explain the cause of choroidal detachment, because there was no complete scleral perforation and the scleral flap was also not too thick. 
The CO 2 laser is able to deliver even and planned size of tissue ablation; the surrounding tissue the thermal damage was minimal. Using the MMC sponge, the chance of scarring, tissue fibrosis, and adhesion are significantly smaller. In POAG cases with CLASS procedure in the literature, different results are published, in which some of the authors achieved a 45.1%, others only 19.0% decrease of IOP during the 6-month follow-up period [3,4,7]. Among our patients, IOP decrease was significant in patients who were on maximal antiglaucomatous topical therapy and showing glaucoma progression. Importantly, IOP stayed in the normal range; BSCVA did not decrease statistically in a significant way. During the first postoperative month, none of the patients needed antiglaucoma therapy, at months 3 and 6; three patients needed extra eye pressure-lowering drops (but only one type of drop). Based on our results, the CLASS procedure was found to be effective and efficient in decreasing IOP in patients whose IOP was not compensated with maximal antiglaucomatous local therapy, and patients needed significantly less local therapy following the CO 2 laser surgery. CONCLUSIONS We recommend the method in case of POAG. Possibly, the CLASS procedure can be applied in earlier glaucoma cases also to prevent serious progression and visual field deterioration. Nevertheless, we find it necessary to include a larger patient group with longer follow-up period to find out the real efficiency of the method.
Ergonomic Hazards Associated with Brick Making in a Tropical Wetland: The Case of Sironga Wetland, Kenya

Brick production processes involve a great deal of manual handling, which may expose workers to several ergonomic hazards. This is due to the nature of the work, which forces them to bend or carry loads, and to the frequency, duration and weight of the load carried at a time. This study was done with the aim of identifying the ergonomic hazards in the brick making industry, and the results were compared with local and international standards on load carrying. Different roles were played in brick production by both men and women. Females were mostly involved in carrying bricks on the head, while males were involved in mixing and moulding of clay; however, both male and female workers were required to exert force. Results revealed that workers were carrying loads of more than 25 kg, against NIOSH (National Institute for Occupational Safety and Health) and International Labour Organization (ILO) guidance, which stipulates that the maximum weight to be lifted, carried on the head or shoulders, pulled or pushed is 25 kg for men and 20 kg for women. Heavy load carrying exposes them to several ergonomic hazards such as musculoskeletal discomfort (MSD) and disability. The study recommends ergonomic intervention.

Introduction
Work design is a major contributor to the static postures that may result in problems of the shoulder and upper limbs (Buckle and Stubbs, 1990). This could be due to the nature of the work, which does not allow employees to rest (Kandoko, 2017). The nature of the job determines the worker's mechanical exposure profile (Allread et al., 2000). The work processes and the types of tools used may pose a hazard to employees. This is supported by Putz-Anderson (1988), who reported that ergonomic problems can be caused by production demands and faulty work methods. Brick production processes involve a lot of manual handling, which may lead to several ergonomic hazards. This is due to the nature of the work forcing workers to bend or carry loads, including the frequency, duration and weight of the load carried at a time. Heavy load carrying has been associated with musculoskeletal discomfort (MSD) and disability (Kadota et al., 2020). Bao et al. (1997) highlighted that a production system with fewer production workers results in high body movements. This shows that respondents are working in one position for longer periods. One cause of musculoskeletal discomfort is heavy load carrying, a common practice in developing countries (Kadota et al., 2020). According to Samuels (2005), workers in manufacturing industries are often exposed to ergonomic hazards. This is because tasks in brick making involve a range of physical actions from positions and postures that may not be ideal, and this could place workers at risk of accidents and injuries (Manoharan et al., 2012). Souza et al. (2002) indicate that manufacturing industries rank high in the frequency and severity of accidents. The nature of manufacturing processes presents ergonomic challenges (Smallwood, 2004). Manual handling may therefore expose workers to different ergonomic hazards. Carrying loads on a regular basis causes health problems such as musculoskeletal discomfort and disability (Kadota et al., 2020). According to Fabiano et al. (2004) and Sinclair et al. (2013), workers in small business enterprises are exposed to higher health and safety risks than workers in bigger enterprises.
Further, Abdalla et al. (2017) indicate that OSH laws and regulatory agencies are mostly designed for large enterprises in the formal economy and do not cover small enterprises or the informal economy; as a result, there is minimal reporting on the informal sector and little enforcement of laws and regulations. Since employees in small enterprises outnumber those in larger enterprises, it is important to address these gaps.

Study Area and Design

The study was conducted in Sironga wetland, Western Kenya. A multi-stage sampling approach was used, and purposive non-probability sampling was employed. The target population in the study area consisted of brick kiln owners and brick kiln employees. A sample of 233 respondents was randomly selected. Simple random sampling was used to select the respondents to be interviewed, and purposive sampling was used for the focus group discussion. Respondents' consent was sought before interviews were conducted to enable the attainment of the objective of this study. Other than the questionnaires and interview schedules, data was also captured through observation using an observation list. Data was collected during working hours, between 9 a.m. and 6 p.m.

Results and Discussion

Brick kiln workers are mostly a periodic labour force, and protecting their health at the work site may not always be a priority of the employer (Vaidya et al., 2015). Different roles were played in brick production by both men and women. On the question of whether brick making has any health effects, 71.4% of the respondents said it had no effect while 28.6% indicated it had some effect (Figure 1).

Figure 1: Effects of Brick Making on the Health of Sironga Wetland Respondents

The process of making bricks is done manually, and given the nature of the work, mostly musculoskeletal disorders were reported. Most people in the brick industry are employed on a contract basis and are used specifically for labour-intensive work such as brick moulding using hands, mud-pugging by foot, monitoring and regulating fire in kilns, and loading of bricks on the head (World Bank, 2011). According to OSH, musculoskeletal injuries include injuries to all parts of the body such as the back, neck, upper and lower limbs, whether caused by manual handling or not. It can be noted that most of the hazards are associated with manual handling of loads. Brick making is a low-technology enterprise requiring manual labour (Nakamya, 2008). Figure 2 shows the different ergonomic hazards experienced by respondents during brick production.

Figure 2: The Ergonomic Hazards Experienced by Sironga Wetland Respondents during Brick Making

The research findings indicate that over the last 12 months, 9.1% of the respondents had suffered from muscle pain, 4.6% from injuries, 4% from backache and muscle pain, 1.7% from both chest pain and neck pain, and 1.1% from skin irritation. While men and women play different roles in brick making, results show that respondents are either lifting, bending or overloading themselves, leading to varying occupational hazards. For instance, backache, muscle pain and chest pain in men can be attributed to the fact that digging and extraction of clay, mixing, moulding of clay and transporting bricks to the kilns involve bending and lifting. The minimum number of clay bricks moulded per day was 300 and the maximum was 700 (Figure 3). A study done in Kajjansi, Uganda by Nakamya (2008) indicated back pain, chest pain and malaria as the effects of brick making on human health. The study also found that malaria resulted from the stagnant water in the open pits. Saha et al. (2021), reporting on health hazards among people working in brick making activities in Bangladesh, showed that respondents suffered from asthma, fatigue, headache and eye irritation.

Other than backache and muscle pain, women may experience neck pain due to carrying bricks on their heads. The maximum number of bricks carried per trip on the head was 15 while the minimum was 11 bricks (Plate 4.8). The minimum number of bricks carried per day was 300 while the maximum was 500. A similar observation is made in India, where women transport bricks as head loads, with 9 to 12 bricks carried at a time (Vaidya, 2015). According to OSH, manual lifting tasks with high loads or frequencies may induce musculoskeletal disorders such as back pain. Though the ILO has no exact weight limit that is safe, the ISO standard on ergonomics and manual handling (lifting and carrying) proposes a limit of 25 kg for men and 15 kg for women. Kandoko (2017) indicates that brick making involves either pushing, pulling or lifting of more than 25 kg. This suggests that respondents are carrying more than 25 kg, which is against ILO and National Institute for Occupational Safety and Health (NIOSH) regulations. According to Kandoko (2017), carrying of more than 25 kg by employees is due to the need to meet targets that may be unrealistic, forcing them to overload themselves.

The Maximum Weight Convention, 1967, reflected in labour standards in Thailand, Article 7 on female and young workers, stipulates that the maximum weight to be lifted, carried on the head or shoulders, pulled or pushed is 20 kg for young female employees between the ages of 15 and 18 years and 25 kg for adult female workers. Further, France has restrictions on the weights to be handled by younger workers, allowing a maximum of 8 kg for a young woman of 14 to 15 years. A report on children working in brick kiln factories notes that children working in these fields miss their education, damaging their future prospects, and at the same time damage their health. In India, the Child Labour Regulations and Prohibition Act 2000 strictly prohibits employment of children below 16 years of age in brick production. It also strictly prohibits children who are above 14 years but under 16 years from working more than 6 hours a day and 36 hours a week. In South Asia, the Child Labour (Prohibition and Regulation) Act, 1976 prohibits child labour below 14 years.

Since respondents were paid for the number of bricks carried, there was a tendency to overload and to work for longer hours. This is supported by Kandoko (2017), who highlights that the work load forces employees to carry more bricks at a time so as to cover daily work targets. For all respondents, the minimum number of hours spent working in a day was one hour while the maximum was 12 hours, with a mean of 6.8 hours (Figure 3). The minimum number of working days was four while the maximum was 6 days. Vaidya (2015), in her studies, suggests that carrying head loads on a regular basis causes health problems, especially in women. Studies conducted in India by Sett and Sahu (2008) show that workers in brick making industries suffer from varying health problems caused by poor working conditions and the lifting or carrying of heavy loads, since the work involves brick setting, brick packing and brick dispatching. The Bihar Labour regulations and the Factories Act, 1948 (applicable to brick making enterprises) in South Asia stipulate a mandatory maximum of an 8-hour work day with one day off. It cannot be ignored that the field survey established child labour involvement in both brick making and transportation, even though the ILO prohibits manual load handling by those under 16 years. Prior studies show children accompanying their parents to the work place; they also report long working days and child labour. Since bricks are carried on the head by women and children for long hours, this can lead to health problems such as spinal problems. This applies too for men, who also spend more hours making bricks.

Protective clothing among respondents was uncommon, since it was not seen as applicable to the kind of activity in the wetland. For instance, mixing of mud and water was done with bare feet while loading of bricks was done with bare hands. Even if they had gloves, respondents would feel uncomfortable using them; all respondents preferred using bare hands. However, it was observed that most respondents, especially men, had cracked hand palms and soles of the feet due to the constant contact with water and mud. Treating of clay is mostly done by foot, while moulding of the bricks is done by hand. Similar situations are observed in Nepal, Zimbabwe, South Asia, Ethiopia and India, among other countries, where the process of brick making involves treating clay by foot and moulding bricks by hand (Maithel, 2012). In this study, ergonomic hazards have been associated with musculoskeletal symptoms. This is probably due to repetitive motion, which is a risk factor for musculoskeletal disorders.
The Actin Cytoskeleton at the Immunological Synapse of Dendritic Cells Dendritic cells (DCs) are considered the most potent antigen-presenting cells. DCs control the activation of T cells (TCs) in the lymph nodes. This process involves forming a specialized superstructure at the DC-TC contact zone called the immunological synapse (IS). For the sake of clarity, we call IS(DC) and IS(TC) the DC and TC sides of the IS, respectively. The IS(DC) and IS(TC) seem to organize as multicentric signaling hubs consisting of surface proteins, including adhesion and costimulatory molecules, associated with cytoplasmic components, which comprise cytoskeletal proteins and signaling molecules. Most of the studies on the IS have focused on the IS(TC), and the information on the IS(DC) is still sparse. However, the data available suggest that both IS sides are involved in the control of TC activation. The IS(DC) may govern activities of DCs that confer them the ability to activate the TCs. One key component of the IS(DC) is the actin cytoskeleton. Herein, we discuss experimental data that support the concept that actin polarized at the IS(DC) is essential to maintaining IS stability necessary to induce TC activation. INTRODUCTION Dendritic cells (DCs) are the most potent antigen-presenting cells (APCs; Banchereau and Steinman, 1998). There are two main groups of DCs: conventional and plasmacytoid (Banchereau and Steinman, 1998;Merad et al., 2013). DCs are found in tissues in the immature differentiation stage. In the presence of pathogens, they undergo a process of differentiation called maturation, which involves multiple phenotypical changes, including the upregulation of major histocompatibility complex class I (MHC-I) and class II (MHC-II) and costimulatory molecules, like CD80 and CD86. Mature DCs migrate to the lymph nodes (LNs), where they present pathogen-derived peptides via MHC-I to CD8 + T cells (TCs) or via MHC-II to CD4 + TCs, resulting in the activation of these lymphocytes. Hereafter, unless otherwise indicated, when we use the word DCs, we refer to the conventional mature DCs. Several studies have shown that activation of naïve TCs in the LNs involves, first, brief serial DC-TC encounters, which are antigen independent, followed by prolonged and stable antigen-dependent contacts that last several hours (Delon et al., 1998;Iezzi et al., 1998;Stoll et al., 2002;Bajenoff et al., 2003;Bousso and Robey, 2003;Mempel et al., 2004;Miller et al., 2004;Shakhar et al., 2005;Celli et al., 2007). Finally, the TCs recover their motility and proliferate (Mempel et al., 2004;Celli et al., 2007;Scholer et al., 2008). The region of tight adhesion that connects DCs and TCs when they establish stable interactions is called the immunological synapse (IS). We call the DC and TC sides of the IS the IS(DC) and IS(TC), respectively. Most studies on the IS have centered on the IS(TC), and analyses of the IS(DC) are sparse (Riol-Blanco et al., 2009;Rodriguez-Fernandez et al., 2010a,b;Benvenuti, 2016;Verboogen et al., 2016;Gomez-Cabanas et al., 2019;Alcaraz-Serna et al., 2021). Herein, we analyze the role of the filamentous-actin (Factin) cytoskeleton of the IS(DC) in TC activation. ORGANIZATION OF THE PLASMA MEMBRANE PROTEINS COMPONENTS OF THE IS(DC) The first studies on the IS focused on the IS(TC) (Monks et al., 1998;Grakoui et al., 1999). 
In one of the experimental models used, the CD4 + TCs were plated on a glass-supported lipid bilayer that was converted into a surrogate APC by inserting the intercellular adhesion molecule 1 (ICAM-1), the ligand of the integrin lymphocyte function-associated antigen (LFA-1), and peptides bound to MHC (pMHC; Grakoui et al., 1999). In another model, the CD4 + TCs were allowed to form IS with B cells (BCs; Monks et al., 1998;Grakoui et al., 1999). Following the binding of the TCs either to the glass-supported lipid bilayer or to the BCs, the costimulatory molecule CD28 and the TC receptor (TCR) clustered together in a region called the central supramolecular activation cluster (cSMAC). Contiguous to this region are found LFA-1 molecules that form a ring called peripheral SMAC (pSMAC). Large negatively charged molecules like CD43 and CD45 organize in an outermost ring called distal SMAC (dSMAC; Monks et al., 1998;Grakoui et al., 1999;Freiberg et al., 2002). Interestingly, when the DCs form the IS with TCs (naïve or activated) at the IS(TC), instead of the monocentric organization described above, surface proteins form multiple protein clusters that include TCRs, adhesion proteins, and costimulatory molecules (Brossard et al., 2005;Rothoeft et al., 2006;Reichardt et al., 2007;Fisher et al., 2008;Tseng et al., 2008;Thauland and Parker, 2010). The ability of the DCs to promote multicentric IS(TC) could contribute to explain why they are such potent APCs. Numerous clusters of CD3 and costimulatory molecules multiply the signaling from these receptors, resulting in robust TC activation (Leithner et al., 2021). Supporting this concept, TCs plated on surrogate-patterned APCs that promote TCR or CD28 clusters show enhanced functionality (Mossman et al., 2005;Shen et al., 2008). POLARIZATION OF F-ACTIN AT THE IS(DC) The following examples, in which fixed cells were stained with phalloidin, show that F-actin polarizes in the IS(DC) upon allogeneic or antigen-specific DC-TC formation. Allogenic conjugates include (i) bone marrow-derived DCs (BM-DCs) (BALB/c genetic background) and CD4 + TCs (C57BL/6 genetic background) (Al-Alwan et al., 2001b) and (ii) human monocytederived DCs and allogeneic lymphoblasts (Riol-Blanco et al., 2009). Antigen-specific conjugates include (i) OVA peptideloaded BM-DC, from BALB/c or C57BL/6 mice, and DO11.10 or OTII CD4 + TCs, respectively (Al-Alwan et al., 2003;Eun et al., 2006;Riol-Blanco et al., 2009), and (ii) OVA peptideloaded BM-DCs and OTI CD8 + TCs (Tanizaki et al., 2010). A drawback of these fluorescence microscopy analyses performed with fixed conjugates is that it is difficult to know for certain whether the phalloidin-stained F-actin belongs to the IS(TC) or the IS(DC). However, recently, the use of Lifeact, an amino acid fragment of the protein ABP140 that binds selectively to F-actin, has solved this problem (Riedl et al., 2008;Leithner et al., 2021). High-resolution confocal microscopy analysis of Lifeact-green fluorescent protein (Lifeact-GFP)-expressing DCs that form IS with OTII CD4 + TCs shows that F-actin displays at the IS(DC) a multifocal organization, with foci of different sizes separated by regions where actin is sparse (Leithner et al., 2021). 
Finally, fluorescence recovery after photobleaching (FRAP) experiments performed with mCherry-labeled actin-transfected BM-DCs that interact with OTII TCs showed a slower recovery at the IS(DC) compared with the cortex, suggesting a higher stability and specific molecular features of the F-actin network at the IS(DC) (Malinova et al., 2016). SURFACE PROTEINS THAT INDUCE ACTIN ACCUMULATION IN THE IS(DC) Engagement of MHC-II, MHC-I, or LFA-1 with specific antibodies bound to polystyrene beads induces F-actin accumulation only in DCs that bind to beads associated with anti-MHC-II antibodies (Al-Alwan et al., 2003). The lack of effect of MHC-I was unexpected because F-actin accumulates at the IS(DC) in DC-OTI CD8 + TC conjugates (Tanizaki et al., 2010). Moreover, engagement of MHC-I on the membrane of endothelial cells with antibodies induces activation of the F-actin regulator ras homolog family member A (RhoA) (Coupel et al., 2004;Lepin et al., 2004) and actin organization (Lepin et al., 2004;Jin et al., 2007;Ziegler et al., 2012a,b). Hence, other experimental strategies, including different anti-MHC-I antibodies, should be used before ruling out that MHC-I controls F-actin accumulation in DCs. An analysis of wild-type (WT)-BM-DCs or CD80/86 knock-out (KO)-BM-DCs interacting with DO11.10 CD4 + TCs suggests that CD80/CD86 induces actin polarization at the IS(DC) (Rothoeft et al., 2006). However, actin failed to accumulate at the IS(DC) upon engagement of CD86 on DCs with antibodies or when BM-DCs, expressing that the CD28 receptors CD80 and CD86 interact with human Jurkat cells expressing murine CD28 (Al-Alwan et al., 2003;Rothoeft et al., 2006). Therefore, stimulation of CD80 or CD86 is not sufficient to promote F-actin aggregation. Finally, the semaphorin receptor Plexin-A1, which is localized at the IS(DC), can also induce RhoA activation and F-actin polarization in this region (Eun et al., 2006). ROLE OF F-ACTIN AND ACTIN-REGULATORY PROTEINS AT THE IS(DC) ON TC ACTIVATION Below, we analyze reports that provide information on the role of DC's F-actin and actin-regulatory proteins on TC activation (Figure 1 and Table 1). When analyzing these experimental data, it is important to take into consideration several points. First, the focus of most of the studies available on this issue was not the IS(DC). Second, the proteins analyzed can be expressed in the IS(DC) and elsewhere in DCs, like the DCs' cortex (e.g., F-actin, WRC, WASP, and Myo9b), implying that these proteins may exert their regulatory effects inside and/or outside the IS(DC) (e.g., WASP, Rac1/2, and mDia also regulate migration). Third, actin-regulatory proteins can also govern actin-independent functions (e.g., HS1). Fourth, the experimental strategies employed to study the role of these molecules, namely, the use of pharmacological agents to inhibit F-actin or DCs deficient in actin-regulatory proteins, do not discriminate between the IS(DC) and other intracellular regions. Filamentous Actin To analyze the role of F-actin at the IS(DC), pharmacological agents have been used that alter actin stability, including cytochalasin D, latrunculin A, and mycalolide B (MycB), which disrupt F-actin, and Jasplakinolide, which stabilizes it (Fenteany and Zhu, 2003). When DCs treated with any of these inhibitors interact with DO11.10 or OTII CD4 + TCs, the activation and proliferation of these lymphocytes is inhibited (Al-Alwan et al., 2001b;Leithner et al., 2021; Table 1). 
These results emphasize the importance of the integrity of the DCs' actin cytoskeleton for TC activation (Al-Alwan et al., 2001b; Leithner et al., 2021). Confocal microscopic analyses of DCs that interact with Lifeact-GFP-expressing OTII CD4+ TCs show that the IS(TC) forms multiple actin foci (Leithner et al., 2021). However, when DCs pretreated with MycB to disrupt F-actin were allowed to interact with the Lifeact-GFP OTII CD4+ TCs, ∼50% of the IS(TC) presented a ring of F-actin surrounding an actin-free circle, instead of a multifocal actin organization (Leithner et al., 2021; Table 1). Hence, multifocal actin at the IS(DC) contributes partially to stabilizing multifocal actin at the IS(TC) and predictably also to the formation of the multicentric IS(TC) (Brossard et al., 2005; Rothoeft et al., 2006; Reichardt et al., 2007; Fisher et al., 2008; Tseng et al., 2008; Thauland and Parker, 2010), although this has to be confirmed in future studies because, in the experiments described, the authors did not stain the surface proteins, such as CD3, and other molecules, which organize in foci in the IS(TC) (Leithner et al., 2021). Finally, F-actin at the IS(DC) can also control DC-TC adhesion by selectively regulating the lateral mobility of ICAM-1 on the plasma membrane (Comrie et al., 2015). Immobilized ICAM-1 at the IS(DC) can promote LFA-1 activation on the IS(TC) and increase DC-TC adhesion (Feigelson et al., 2010).

A short note on the regulation of the small GTPases shown in Table 1 and in Figure 1: these proteins cycle between inactive (GDP-bound) and active (GTP-bound) states, which can interact with effector proteins that relay downstream signaling. This cycle is regulated by specific guanine nucleotide exchange factors (GEFs), which catalyze the release of the bound GDP that is replaced by GTP, and by GTPase-activating proteins (GAPs), which induce GTP hydrolysis.

Wiskott-Aldrich Syndrome Protein

Wiskott-Aldrich syndrome protein (WASP) is a nucleation-promoting factor (NPF) that activates the actin-related protein 2/3 (Arp2/3) complex (Figure 1). Arp2/3 is an actin-nucleation factor (ANF) that assembles actin dimers or trimers that serve as nuclei that subsequently polymerize into Y-branched actin networks (Schonichen and Geyer, 2010). WASP organizes in foci within the IS(DC) (Leithner et al., 2021). WASP-KO BM-DCs show reduced motility and lower F-actin levels (Bouma et al., 2011; Malinova et al., 2016). WASP-KO BM-DCs that interact with OTII CD4+ TCs in vitro present a high number of transient interactions and reduced DC-TC contact areas, suggesting that, in the DCs, WASP may stabilize the interactions with the TCs (Bouma et al., 2011; Malinova et al., 2016; Table 1). Similar conclusions have been obtained in in vitro and in vivo studies that analyzed the interactions between WASP-KO BM-DCs and OTI CD8+ TCs (Pulecio et al., 2008). At the IS(DC) formed by the WASP-KO BM-DCs, the levels of ICAM-1 and MHC-II are reduced (Malinova et al., 2016). The levels of TCR, LFA-1, F-actin, and talin are also reduced at the IS(TC). Moreover, TCR-dependent signaling was also altered, resulting in reduced IL-2 production and TC proliferation (Bouma et al., 2011; Malinova et al., 2016).
Further supporting a role for the WASP/Arp2/3 axis in IS formation, DCs that express Y293F-WASP (a mutation that impairs WASP's ability to activate Arp2/3) display a low number of IS with TCs and reduced priming ability (Bouma et al., 2011). Finally, in FRAP experiments performed with Cherry-labeled actin-transfected WASP-KO, Y293F-WASP, and WT-BM-DCs that formed IS with OTII CD4 + TCs, actin recovery at the IS(DC) was slower in the WT DCs compared with the WASP-KO and Y293F-WASP DCs, suggesting that WASP-Arp2/3-mediated formation of branched actin stabilizes the actin network at the IS(DC) (Malinova et al., 2016). Hematopoietic Lineage Cell-Specific Protein 1 Hematopoietic lineage cell-specific protein 1 (HS1) is a NPF that induces Arp2/3-dependent branched actin networks, and, moreover, it can also bind and stabilize this network (Weaver et al., 2001;Uruno et al., 2003;Hao et al., 2005;Dehring et al., 2011; Figure 1). Since HS1 expression increases during DC maturation , it is interesting to study whether this molecule could regulate actin organization at the IS(DC) and DCs' priming ability ( Table 1). WT and HS1-KO BM-DCs bind and present OVA peptides equally as well with DO11.10 CD4 + TCs . However, HS1-KO BM-DCs loaded with intact OVA protein display a reduced ability to activate the CD4 + TCs . WT and HS1-KO BM-DCs present the MHC-I-restricted VSV8 peptide equally as well with the CD8 + TC hybridoma N15. However, when VSV8 was complexed with the protein GRP94, which also uses the MHC-I pathway of antigen presentation, the priming ability of the HS1-KO BM-DCs was impaired. It was observed that receptormediated endocytosis was selectively inhibited in the HS1-KO BM-DCs, preventing antigen uptake . It was also found that HS1 is required for antigen uptake because it participates, together with dynamin 2, in the scission of the endocytic vesicles . Hence, although HS1 is a NPF, it apparently regulates antigen presentation through the control of antigen endocytosis. WAVE Regulatory Complex WASP-family verprolin homologous proteins (WAVE) regulatory complex (WRC) is an NPF that activates Arp2/3 and induces branched actin (Buracco et al., 2019; Figure 1). WRC is found in the IS(DC), but it also associates with F-actin at the DC cortex (Leithner et al., 2021). Upon interaction of WRC-deficient BM-DCs (Park et al., 2008) with OTII CD4 + TCs, F-actin displays a multifocal organization in the IS(TC), like the IS(TC) formed by the WT BM-DCs (Leithner et al., 2021; Table 1). However, F-actin levels at the IS(DC) are reduced, suggesting that WRC promotes actin accumulation in this region. WRC-deficient DC-TC contacts last longer and display larger areas of contact (Leithner et al., 2021). These prolonged interactions are associated with an increase in the levels of the phospho-ezrin-radixin-moesin (ERM), suggesting a higher anchoring of ICAM-1 to cortical F-actin, which may result in the immobilization of this ligand and increased LFA-1-mediated DC-TC adhesion (Comrie et al., 2015;Leithner et al., 2021). These abnormal long-lasting interactions between WRC-deficient DCs and TCs may explain the observed reduction in the activation of the TCs (Leithner et al., 2021). Mammalian Homolog of Diaphanous Mammalian homolog of diaphanous (mDia1) is an ANF of the formin family that promotes F-actin elongation (Schonichen and Geyer, 2010; Figure 1). The mDia-KO-BM-DCs display reduced adhesion and impaired migration (Tanizaki et al., 2010). 
CD4+ TCs that establish alloreactive interactions with mDia-KO BM-DCs also presented reduced proliferation and low interferon-γ (IFN-γ) production (Table 1). Two-photon microscopy analysis shows that mDia-KO BM-DCs that interact with OTII CD4+ TCs or with OTI CD8+ TCs establish brief contacts within the LNs, indicating that DCs' mDia is important for keeping ISs stable (Tanizaki et al., 2010). Hence, correct TC activation requires mDia1 expression in the DCs.

Fascin

Fascin is an actin-bundling protein whose expression is increased during DC maturation (Mosialos et al., 1996; Al-Alwan et al., 2001a; Yamashiro, 2012). In mature DCs, fascin, which can localize to the IS(DC), controls dendrite formation (Al-Alwan et al., 2001a,b; Rothoeft et al., 2006; Figure 1). In antigen-specific models of IS formation, accumulation of fascin and F-actin correlates with more extended contacts between DCs and TCs, increased TC proliferation, and CD4+ Th1 TC-dependent responses (Rothoeft et al., 2006). Using an allogeneic model of IS formation, it is observed that the levels of fascin in DCs correlate with the ability of these cells to stimulate the TCs (Al-Alwan et al., 2001a; Table 1). Finally, in an alloreactive IS model, it was observed that a reduction of fascin levels in the BM-DCs with antisense oligonucleotides inhibits their ability to allostimulate the TCs (Al-Alwan et al., 2001a). Therefore, fascin-mediated bundling of F-actin in DCs contributes to TC priming.

Switch-Associated Protein 70

Switch-associated protein 70 (SWAP-70) is a Rac GEF (see Table 1 and legend) that also controls F-actin bundling (Regnault et al., 1999; Blander and Medzhitov, 2006). In SWAP-70-KO DCs, the actin-regulatory GTPases RhoA and RhoB are constitutively activated, resulting in an increase in the amount of F-actin in these cells (Ocana-Morgner et al., 2009; Sit and Manser, 2011). Inhibition of RhoA and RhoB in the SWAP-70-KO DCs with a Clostridium botulinum toxin increased MHC-II on their plasma membrane and helped recover their ability to activate the TCs. Hence, it is suggested that the high F-actin levels prevent the correct MHC-II localization on the plasma membrane (Bretou et al., 2016). Therefore, SWAP-70 may inhibit RhoA and RhoB activation, which prevents an abnormal increase in F-actin and allows MHC-II localization on the membrane of the DCs (Ocana-Morgner et al., 2009).

Myosin IXb

Myosin IXb (Myo9b) is a cytoskeletal motor that displays Rho-GTPase-activating protein (GAP) activity (see Table 1 and legend). Myo9b colocalizes with F-actin in DCs (Hanley et al., 2010; Xu et al., 2014; Figure 1). Compared to WT-BM-DCs, Myo9b-KO BM-DCs present a low number of interactions with OTII CD4+ TCs, although these interactions last longer (Xu et al., 2014). F-actin is highly increased in the IS(DC) of the Myo9b-KO BM-DCs that form IS with OTII CD4+ TCs. However, the interactions between Myo9b-KO BM-DCs and OTII CD4+ TCs resulted in reduced proliferation within 3D-collagen matrices but not in liquid co-cultures (Xu et al., 2014). These results could be due to the different spatiotemporal organization of F-actin in the IS(DC) of the Myo9b-KO DCs under both conditions (Xu et al., 2014).

RhoA, Rac1 and Rac2

RhoA, Rac1 and Rac2 belong to the Rho GTPase subfamily and are critical regulators of the actin cytoskeleton (see Figure 1, and Table 1 and legend) (Sit and Manser, 2011).
Treatment of the DCs with epidermal cell differentiation inhibitor (EDIN) toxin, which inactivates RhoA, or inhibition of its downstream target, the Rho-associated protein kinase (ROCK), with Y27632, failed to affect IS formation between BM-DCs and OTII CD4 + TCs (Benvenuti et al., 2004). These results suggest that the effects of knocking down SWAP-70 on F-actin discussed above could be mediated by RhoB, instead of RhoA. DCs deficient in Rac1 and Rac2 show alterations in the F-actin organization, resulting in the absence of dendrites and reduced motility. In vitro analyses show that Rac1/2-KO BM-DCs do not form stable contacts with CD4 + TCs. Consistent with these results, the Rac1/2-KO BM-DCs show a reduced ability to activate OTII CD4 + TCs. It was suggested that this inhibition was due to deficient actin dynamics in the Rac1/2-KO DCs that prevent these cells from engulfing and establishing full contacts with TCs (Benvenuti et al., 2004). Cytohesin-Interacting Protein (CYTIP) Although CYTIP is not an actin-regulatory protein, we have included it in this review because it regulates LFA-1, which is an important molecule at the IS(DC) (Figure 1). In resting DCs, LFA-1 remains on the plasma membrane in an inactive state; that is, it cannot bind to its ligand ICAM-1. DCs express cytohesin-1, which interacts with the cytoplasmic β-subunit of LFA-1, resulting in its activation and binding to ICAM-1. CYTIP binds to cytohesin-1, which translocates from the membrane to the cytoplasm, leaving LFA-1 inactivated. During DC maturation, CYTIP levels increase and localize to the IS(DC) (Hofer et al., 2006). Studies on CYTIP in DCs are controversial ( Table 1). In experiments in which CYTIP was reduced with siRNA in human DCs (Hofer et al., 2006), these cells showed a diminished ability to induce proliferation of autologous antigen-specific CD8 + TCs (Hofer et al., 2006). Other studies show that knocking down CYTIP with siRNA in BM-DCs extends antigen-specific contacts with OTII CD4 + TCs or OTI CD8 + TCs and reduces the activation and proliferation of these cells (Balkow et al., 2010). In contrast, in experiments performed with DCs obtained from CYTIP KO mice (Coppola et al., 2006), CYTIP KO BM-DCs enhanced antigen-specific activation OTI and OTII TCs (Heib et al., 2012). CONCLUDING REMARKS The study of the role of F-actin at the IS(DCs) is at its inception. The data discussed above suggest that F-actin polarization at the IS(DC) maintain IS stability necessary to induce TC activation. A network of actin-regulatory proteins controls F-actin organization at the IS(DC) (Figure 1). The pharmacological disruption of F-actin, the knockdown of Rac1/2, or the increase of F-actin levels after knocking down Myo9b reduce the ability of the DCs to activate the TCs. Knockdown of WASP or WRC, which promotes branched actin through Arp2/3, or mDia, which regulates linear F-actin, or fascin, a bundling protein, results in inhibition of TC activation. Deletion of WASP, mDia, fascin, and Rac1/2 reduces the number and/or the duration of DC-TC contacts. In contrast, knockdown of WRC or Myo9b results in extended DC-TC contacts and inhibition of TC activation. In summary, perturbation of F-actin dynamics at the IS(DC) leads to the inhibition of TC activation. Multiple aspects of the F-actin regulation and functions at the IS(DC) need to be addressed in the future. For this purpose, it is very important to develop experimental strategies that selectively target F-actin and its regulatory proteins at the IS(DC). 
Many wonderful things remain to be discovered at the IS(DC). AUTHOR CONTRIBUTIONS JR-F designed the work and wrote the manuscript with the help of OC-G. OC-G prepared the table and the figure. Both authors contributed to manuscript revision, and read and approved the submitted version.
Symmetric Ternary Quantum Homomorphic Encryption Schemes Based on the Ternary Quantum One-Time Pad

Aiming at ternary quantum logic circuits, four symmetric ternary quantum homomorphic encryption schemes, based on the ternary quantum one-time pad protocol, are presented. First, for a one-qutrit rotation gate, a homomorphic quantum encryption scheme is constructed. Second, in view of the synthesis of a 3 × 3 general unitary transformation, another one-qutrit quantum homomorphic encryption scheme is proposed. Third, building on the one-qutrit scheme, a two-qutrit quantum homomorphic encryption scheme for the GCX(m') gate is constructed and is further generalized to the n-qutrit unitary matrix case. Finally, the security of these schemes is analyzed from two perspectives. It can be concluded that an attacker can correctly guess the encryption key with a maximum probability $p_k = 1/3^{3n}$, so the schemes can better protect the privacy of users' data. Moreover, these schemes can be well integrated into future quantum remote server architectures, and the computational security of the user's private quantum information can be well addressed in a distributed computing environment.

Introduction

In a distributed computing environment, customers have a large amount of data stored on remote servers. These data may include personal bank account information, online shopping records, credit card consumption records, etc.; this information constitutes the customers' private encrypted data, which should be indistinguishable to a remote server. Suppose we intend to compute on the encrypted data without a decryption process, or to delegate the computation to a trusted third party without leaking information about the input data: is it possible to do so? Fortunately, blind computation [1] or homomorphic encryption [2,3,4,5] can achieve this perfectly, without a decryption process and without leaking private information about the encrypted input data. From the perspective of quantum information processing, performing operations on encrypted data without a decryption process corresponds to blind quantum computation [6,7,8,9,10,11] and quantum homomorphic encryption (QHE). This paper studies this problem using the QHE technique, which can not only protect the privacy of users' data but also accomplish secure computation on a remote server.

Rohde et al. [12] first studied quantum walks with encrypted data, and then proposed a limited QHE scheme using the Boson sampling and multi-walker quantum walk models of Linear Optics Quantum Computation. However, at that point QHE had still not been formally defined, and a quantum fully homomorphic encryption (QFHE) scheme had not yet been constructed. Liang [13] first presented the definitions of QHE and QFHE, and then, based on the Quantum One-Time Pad (QOTP) protocol, constructed symmetric QHE and QFHE schemes with perfect security, in which the evaluation function depends on the encryption keys. Subsequently, drawing on the Universal Quantum Circuit (UQC), he proposed a QFHE scheme [14]. In that scheme, the encryption key is different from the decryption key and cannot be made public. Moreover, the evaluation algorithm is independent of the encryption key, and the decryption key can be computed from the encryption key by an interactive update process. Recently, Liang [15] again presented two QFHE schemes, which are constructed based on quantum fault-tolerant constructions.
The characteristics of these schemes are the use of a quantum CSS code as the secret key and periodic interaction between the client and the server. Armknecht et al. [16] proved the general impossibility of (Abelian) group homomorphic encryption in the presence of quantum adversaries, when assuming the IND-CPA security notion as the minimal security requirement. They also provided a sufficient condition and discussed its satisfiability in non-group homomorphic cases. Tan et al. [17] presented a private-key QHE scheme that hides arbitrary quantum computations. A particular instance of their encoding hides information at least proportional to m log m bits when m bits are encrypted. Recently, Broadbent et al. [18] presented QHE schemes for circuits of low T-gate complexity. These schemes allow for arbitrary Clifford group gates, but they become inefficient for circuits with a large degree of complexity, measured in terms of the non-Clifford portion of the circuit.

Currently, QHE research is limited and mainly focused on quantum bits (qubits). Building on the present research, this article presents four Ternary QHE (TQHE) schemes based on the Ternary QOTP (TQOTP) for the first time. The rest of the paper is organized as follows. Section 2 provides a brief introduction to ternary quantum gates and QHE, and a TQOTP scheme is presented. In section 3, the first TQHE scheme, based on the one-qutrit rotation gates R_∂^(ij)(θ), is constructed, and it is generalized to a general ternary quantum gate using the synthesis idea. Then the third TQHE scheme, for the GCX gate, is constructed and extended to the n-qutrit gate case in theory. These schemes are analyzed, combining concepts from ternary quantum gates, secret key security and user data privacy, in section 4. Conclusions and future research ideas are presented in section 5.

Universal Quantum Circuits

Definition 1. (Universal Quantum Circuit (UQC) [19]) Fix n > 0 and let C be a collection of quantum circuits on n qubits. A quantum circuit U on (n + m) qubits is universal for C if, for every circuit C_U ∈ C, there is a string x ∈ {0, 1}^m (the encoding) such that, for all strings y ∈ {0, 1}^n (the data),

U(|x⟩ ⊗ |y⟩) = |x⟩ ⊗ C_U|y⟩.  (2.1)

The definition of the UQC tells us two things. One is that an arbitrary unitary transformation can be synthesized by a finite number of logic gates from the set C. The other is that a blind quantum computation or homomorphic encryption scheme can be constructed, which will be discussed later. In the UQC, given an n-qubit state |y⟩ as the input data, while an m-qubit state |x⟩ is input as the encoding of a quantum transformation C_U ∈ C, the UQC outputs (n + m) qubits (as shown in Eq. 2.1). Here, the m-qubit state |x⟩ is called the encoding of the quantum transformation C_U ∈ C with regard to the UQC. Inputting |x⟩ can not only hide the data |y⟩ but also protect the operation C_U. Furthermore, Eq. 2.1 gives a transformation between the U and C_U quantum circuits, which means it is possible to construct a homomorphism operator between U and C_U.

Ternary Quantum Circuit

A qutrit is represented as a unit vector in a state space that is a complex three-dimensional vector space H_3. In the computational basis, the basis vectors (or basis states) of H_3 are written in Dirac notation as |0⟩, |1⟩ and |2⟩, where |0⟩ ≡ (1, 0, 0)^T, |1⟩ ≡ (0, 1, 0)^T and |2⟩ ≡ (0, 0, 1)^T. An arbitrary vector |ϕ⟩ in H_3 can be expressed as a linear combination |ϕ⟩ = a_0|0⟩ + a_1|1⟩ + a_2|2⟩, where a_i ∈ C and Σ_{i=0}^{2} |a_i|^2 = 1. The real number |a_i|^2 is the probability that the state vector will be found in the i-th basis state upon measurement.
A qudit is represented as a unit vector in a state space that is a complex d-dimensional Hilbert space H_d. In the computational basis, the basis vectors of H_d are written in Dirac notation as |0⟩, |1⟩, ..., |d − 1⟩, where |i⟩ = (0, 0, ..., 1, ..., 0)^T with a 1 in the (i + 1)st coordinate, for 0 ≤ i ≤ d − 1. A linear combination of these basis vectors is formed in the same way as for the qutrit. Note that the basis vectors in the computational basis are ordered by natural numbers.

Common one-qutrit circuits

1) Ternary X gates. The operator X^(ij) exchanges the basis states |i⟩ and |j⟩ of a qutrit: X^(ij)|i⟩ = |j⟩, X^(ij)|j⟩ = |i⟩, and X^(ij)|k⟩ = |k⟩ for k ≠ i, j. Because X^(ij) = X^(ji) and X^(ij) = I_3 when i = j, the TX gates have three valid forms: X^(01), X^(02) and X^(12).

2) Ternary Hadamard gates. The ternary extension of the Hadamard gate, H^(ij), applies the Hadamard transform to the two-dimensional subspace spanned by |i⟩ and |j⟩ and leaves the remaining basis state unchanged.

3) Ternary Z gates. The Z^(i) gate applied to |k⟩ can be expressed as Z^(i)|k⟩ = −|k⟩ if i = k, and Z^(i)|k⟩ = |k⟩ if i ≠ k.

4) Ternary shift gates. The ternary shift gates are listed in Table 1, in which the addition is modulo-3 addition.

5) Ternary rotation gates. The one-qutrit rotation gates R_∂^(ij)(θ), with ∂ ∈ {x, y, z}, are defined in Eq. 2.7; each acts as a rotation by the angle θ on the two-dimensional subspace spanned by |i⟩ and |j⟩. Based on exponent mapping [20], four relations among the rotation gates with different superscripts (ij) follow directly.

A one-qutrit gate is essentially a 3 × 3 unitary matrix. According to the Cartan decomposition of the Lie algebra, a 3 × 3 unitary matrix U can be factored into a product of one-qutrit rotation gates and phase factors [21], where α, β, γ, δ, θ, ϕ, β', γ' and δ' are all real parameters. The four basic one-qutrit rotation gates appearing in this decomposition are used later to synthesize general one-qutrit gates.

Common two-qutrit circuits

The ternary K-controlled X (TKCX) gate applies an X operation to the target bit B_j, conditioned on the control bits B_{i1}, B_{i2}, ..., B_{ik}, where ⊕_3 stands for addition modulo 3. When K = 1, the TKCX gate reduces to the TCX gate.

5) Generalized Controlled X (GCX) gate. For a GCX(m') gate acting on a two-qutrit state |m, A⟩, where m' is fixed as a special control value: when m' = m, X^(ij) is applied to the target bit A; otherwise the GCX(m') gate does not affect the state |m, A⟩, for m', m ∈ {0, 1, 2}.

TQOTP scheme

By referring to the QOTP [22] and combining it with ternary quantum information technology, we design a TQOTP scheme. An n-qutrit encryption operator is denoted U_k, where X, H and Z represent the ternary quantum gates introduced above. Obviously, we can use 3n random key numbers α_j, β_j, δ_j ∈ {0, 1, 2} (three per qutrit) and extended bit-wise X-H-Z gates to encrypt an n-qutrit quantum state. The encryption operator U_k is drawn uniformly from 3^{3n} unitary matrices. Thus the probability p_k of choosing a particular U_k is p_k = 1/3^{3n}.

With respect to our TQOTP scheme, there are some improvements that can be made. These include determining how to combine the TQOTP scheme with quantum key distribution, how to construct the encryption operator U_k, and so on. By improving the TQOTP scheme, we can obtain an information-theoretically secure scheme. These interesting problems will be discussed in future work.

QHE scheme

Definition 2. A QHE scheme is composed of four algorithms [13]: a key generation algorithm, an encryption algorithm, a decryption algorithm and an evaluation algorithm. Compared with the usual quantum encryption scheme, the QHE scheme has a fourth algorithm, the evaluation algorithm, which is used to process the quantum ciphertext without decrypting it. The purpose of the evaluation algorithm is mainly to construct a homomorphic unitary operation, which can be applied to a given quantum ciphertext, according to a given unitary operation. After a user has decrypted the result of the evaluation algorithm, he will obtain the same result as the corresponding operation performed on the plaintext. So how to construct a homomorphic operation on a given quantum ciphertext is the key issue in constructing a QHE scheme.
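Read literally from the gate definitions above, the X^(ij) and GCX(m') gates are simple permutation matrices. The following NumPy sketch builds them explicitly and checks the stated behaviour on basis states; the helper names X_ij, GCX and ket are illustrative and not taken from the paper.

```python
import numpy as np

def X_ij(i, j):
    """Ternary X^(ij) gate: swaps the qutrit basis states |i> and |j>,
    leaving the third basis state untouched (X^(ii) = I_3)."""
    g = np.eye(3)
    g[[i, j]] = g[[j, i]]          # exchange rows i and j of the identity
    return g

def GCX(m_prime, i, j):
    """Generalized controlled-X gate GCX(m') on a two-qutrit state |m, A>:
    X^(ij) acts on the target A only when the control m equals m'."""
    u = np.zeros((9, 9))
    for m in range(3):
        blk = X_ij(i, j) if m == m_prime else np.eye(3)
        u[3 * m:3 * m + 3, 3 * m:3 * m + 3] = blk
    return u

def ket(m, a):
    """Two-qutrit computational-basis ket |m, A> as a length-9 vector."""
    v = np.zeros(9)
    v[3 * m + a] = 1.0
    return v

U = GCX(m_prime=0, i=0, j=2)                  # control value 0, X^(02) on the target
assert np.allclose(U @ U.T, np.eye(9))        # the gate is unitary (a real permutation)
assert np.allclose(U @ ket(0, 2), ket(0, 0))  # control matches m': |0,2> -> |0,0>
assert np.allclose(U @ ket(1, 2), ket(1, 2))  # control differs:   state unchanged
```

The last two assertions mirror the description of GCX(m'): the swap X^(02) is applied to the target only when the control qutrit carries the chosen value m' = 0.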
So how to construct a homomorphism operation on given quantum ciphertext is the key issue in constructing the QHE scheme. Remark 2. Suppose ρ c and σ m are the forms of density matrices for quantum states. T ∆ is a set of permitted quantum operators. The evaluation algorithm can be described as follows: according to the key and the given operator T ∈ T ∆ , it generates another quantum operator T ' and performs it on the ciphertext ρ c . In this case, the operator T ' is related to the operator T and the key. The operator T can be regarded as a desired operation on the plaintext σ m . The operation T ' corresponding to T is performed on the ciphertext ρ c , and can implement the desired operation on the plaintext σ m . The evaluation algorithm will construct the corresponding homomorphic operator It is deduced that The first TQHE scheme is shown as follows. KeyGenAlgorithm:Randomly generate three key numbers α, β, δ ∈ {0, 1, 2}; EvaluateAlgorithm:According to α, β, δ and R (ij) ∂ (θ), it performs homomorphic operator ∂ (θ)Z δ H β X α on the given ciphertext ρ c without a decryption process. The output state of EvaluateAlgorithm is For Eq.3.2, a brief verification is given as follows, So Eq.3.2 holds. After the user decrypts the state of Eq.3.2, he will obtain the desired-state as follows Obviously, according to Eq.3.4 the output state of EvaluateAlgorithm is just the result of the operator R The output state of EvaluateAlgorithm is Z 1 H 0 X 2 (01) y X 2 H 0 Z 1 |0 = (0, 0, 1) T = R (01) y (π) |0 , which is exactly the output of the user-desired operator R (01) y (π) acting on the plaintext |0 . Moreover, no decryption is performed during the computing of EvaluateAlgorithm. Thus, the scheme satisfies the definition in section 2.4, and the result of the homomorphism operator is precisely the user-desired output state. 2)General one-qutrit gates An arbitrary 3 × 3 unitary matrix U can be decomposed into some R ∂ . Referring to the first TQHE scheme, we can propose the second TQHE scheme for an arbitrary 3 × 3 unitary matrix U. It can be seen that the operator U is a little complicated. As a result it will influence the execution efficiency of EvaluateAlgorithm. TQHE scheme for GCX gate The Cartan Decomposition of one-qutrit gate is not unique, so the choice of one-qutrit elementary gates is not unique either. Refs.23 and 24 regard GCX gate and extended rotation gate as a two-qutrit elementary gate, respectively. In this paper, we choose the GCX gate as a ternary quantum elementary gate, which is universal for n-qutrit quantum computing, when it is assisted by arbitrary one-qutrit gates. The common two-qutrit gates: TCX, TSWAP, TSUM (TFeynman), TXOR, and TShift gate can be synthesized by some GCX gates without auxiliary one-qutrit gates. For example, the TSUM (TFeynman) gate is synthesized by four GCX gates, as shown in Fig.1. Likewise, the TSWAP gate is synthesized by nine GCX gates (see Fig.2) and the TXOR gate synthesized by three GCX gates (see Fig.3). When the control bit m is set to |0 state (or |1 or |2 ) and m = m, the effect of the GCX(m' ) gate is that the operator X (ij) will be applied to the target bit A. Namely, If m = m, then the operator X (ij) will be equivalent to I 3 without affecting the target bit A. Therefore the homomorphism operator of the GCX(m') gate is exactly the similar with the operators X (ij) ∈ X (01) , X (02) , X (12) (equivalent to X (ij) ∈ X (0) , X (1) , X (2) ). 
In order to better describe the performance of the GCX(m') gate acting on the two-qutrit |m, A , the function of f (m , m) is defined as follows So the third TQHE scheme associated with the GCX gate is described as follows. KeyGenAlgorithm:Randomly generate three numbers α, β, δ ∈ {0, 1, 2} 2 ; EvaluateAlgorithm:According to α, β, δ, i and j, it computes the homomorphic operator Then the corresponding operator X (ij) will be acted on the given ciphertext ρ c without performing DecryptAlgorithm. The output state of EvaluateAlgorithm is Obviously it is just the same as the operator I 3 ⊗ X (ij) τ acting on the plaintext σ m . The scheme based on the GCX gate satisfies the QHE scheme demand. For example, suppose that the user's plaintext state is |02 = (0, 0, 1, 0, 0, 0, 0, 0, 0) T . For the user-desired GCX(m'), the control bit m' is set to 0 and operator X is specified as X (02) τ (equivalent to I 3 ⊗ X (02) , τ = f (m , m) = I 3 ). The encryption operator is denoted as . Thus, the homomorphic oper- The output state is After a user has decrypted the result of EvaluateAlgorithm, he will obtain the same result as the user-desired operator X (02) τ acting on the plaintext σ m . The scheme is in accord with the definition in section 2.4. n-qutrit TQHE scheme Any n-qutrit quantum gate can be expressed as a 3 n ×3 n unitary matrix. Based on one/twoqutrit quantum gates, there are plenty of research opportunities for n-qutrit quantum gates by using permutation group theory and Cosine-Sine Decomposition (CSD). According to permutation group theory, all n-qutrit((n ≥ 2)) quantum circuits can be generated by a group of two-qutrit gates: SWAP, NOT and 1-controlled-NOT gates without ancillary qutrits [25,26]. Obviously, it is only a construction-based algorithm. In terms of CSD, a 3 n × 3 n unitary matrix can be synthesized by 12 controlled-U gates, 12 Dual-Shift gates, 3 n (N -1)-controlled rotation gates and 2 · (n − 1) · 3 n−1 TX gates [27]. The (N -1)-controlled rotation gate is defined as follows which method is used to synthesize a 3 n × 3 n unitary matrix, there always exists extremely complex process, a giant number gates and very low efficiency. At present, this is a difficulty and hot issue of synthesis of multivalue quantum gate. According to Ref.25 and section 3.2, we describe a process of how to construct an nqutrit TQHE scheme. TNOT gate and Ternary 1-controlled gate are in fact two single-shift gates with conditions (see Table 1). The difference is the latter requires the control bit to be 2 on base of the formers condition (see Figs.4 and 5). Obviously, a TNOT gate can be synthesized by two GCX gates (see Fig.4). And a ternary 1-comtrolled gate can be synthesized by two conditional GCX gates (see Fig.5). Therefore, we can conclude that any n-qutrit (n ≥ 2) logic circuits can be synthesized by some GCX gates. Suppose the user will perform the quantum circuit C 3 n ×3 n on the given quantum plaintext. In principle we can get the homomorphism operator C 3 n ×3 n , which is synthesized by a large number of GCX gates' homomorphic operators, such that Where α, β, δ ∈ {0, 1, 2} n . So far we can present n-qutrit QHE scheme in theory. However, the form of the operator C 3 n ×3 n synthesized by a great deal of GCX gates is so complex that performing EvaluateAlgorithm is very bad in execution efficiency and energy consumption. Thus the n-qutrit QHE scheme does need many improvements in multivalue quantum circuit synthesis and homomorphic EvaluateAlgorithm design. 
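As an illustration of the GCX(m') behaviour described above, the sketch below builds the 9 × 9 matrix that applies X^(ij) to the target qutrit only when the control equals m', and reproduces the |02⟩ example (control 0, X^(02) applied to the target). The block-diagonal construction is an assumption consistent with the text, not the authors' circuit-level definition.

```python
import numpy as np

d = 3

def X_perm(i, j):
    m = np.eye(d, dtype=complex)
    m[[i, j]] = m[[j, i]]
    return m

def gcx(m_prime, i, j):
    """9x9 matrix: sum_m |m><m| (x) (X^(ij) if m == m' else I_3)."""
    total = np.zeros((d * d, d * d), dtype=complex)
    for m in range(d):
        proj = np.zeros((d, d), dtype=complex); proj[m, m] = 1.0
        tgt = X_perm(i, j) if m == m_prime else np.eye(d, dtype=complex)
        total += np.kron(proj, tgt)
    return total

def ket2(m, a):
    """Computational basis state |m, a> of two qutrits."""
    v = np.zeros(d * d, dtype=complex)
    v[d * m + a] = 1.0
    return v

if __name__ == "__main__":
    G = gcx(m_prime=0, i=0, j=2)                       # control value 0, X^(02) on target
    print(np.allclose(G @ ket2(0, 2), ket2(0, 0)))     # control matches -> |02> maps to |00>
    print(np.allclose(G @ ket2(1, 2), ket2(1, 2)))     # control differs -> identity
```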
Circuit synthesis

Currently, the synthesis of multivalued quantum logic circuits focuses mainly on the simplest case, ternary quantum logic circuits. Although there are some research achievements, the field is not yet mature. The choice of a universal ternary quantum gate has not been settled, and there are no uniform criteria for analyzing the performance, complexity, cost, and energy consumption of ternary quantum gates once they have been synthesized from a number of GCX gates. In short, it is feasible to construct an n-qutrit QHE scheme in theory, but it is too difficult to construct in practice. Therefore, this paper does not actually construct such a scheme; we only describe a general construction process based on the synthesis method. In view of the current state of research on the synthesis of an n-qutrit quantum gate (in fact a 3^n × 3^n unitary matrix), synthesis methods for quantum circuits require further optimization and refinement. Consequently, building an n-qutrit QHE scheme on top of quantum circuit synthesis is premature; the key reasons are the very low execution efficiency and the difficulty of constructing the homomorphic operator while executing EvaluateAlgorithm. The TQHE schemes presented in this paper are therefore limited to one-qutrit rotation gates, general one-qutrit gates, and two-qutrit gates synthesized from GCX gates. In principle, the TQHE scheme can be generalized to the n-qutrit (n ≥ 2) case.

Security

Our TQHE schemes, which are based on the TQOTP scheme, all assume that the delegation party is honest. This assumption has two aspects. First, the delegation party will not leak or snoop on the user's private data while performing EvaluateAlgorithm, even though it knows the decryption key. Second, it will correctly execute the commissioned unitary operators without decrypting. The delegation party must know the user-given operators and the secret key; otherwise it cannot correctly construct the homomorphic operator. It should be noted that EvaluateAlgorithm depends on the secret key, so the delegation party could decrypt the ciphertext and obtain the original qutrits. This restricts the applicability of these schemes; in particular, they cannot be used for blind quantum computation. However, the TQHE schemes allow the computation to be delegated to a trusted party while preventing malicious parties from obtaining the data and the result of the computation. These schemes can also be used in secure multiparty quantum computation. Apart from the legitimate user and the trusted delegation party, no party (including an eavesdropper) is able to obtain the complete user data. Under our TQOTP scheme, which encrypts the user's original data, the eavesdropper correctly guesses the secret key with probability p_k = 1/3^{3n}. In the one-qutrit case (n = 1), p_k = 1/3^3 = 1/27 ≈ 3.7%; in the two-qutrit case (n = 2), p_k = 1/3^6 ≈ 0.14%; and in the three-qutrit case (n = 3), p_k = 1/3^9 ≈ 5.08 × 10^{-3}%. As the length of the original data increases, so does the length of the secret key, and the probability p_k becomes smaller and smaller, eventually tending to zero. It is therefore extremely difficult for the eavesdropper to correctly guess the secret key.
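The key-guessing probabilities quoted above can be checked with a few lines:

```python
# Quick check of the key-guessing probabilities quoted in the text:
# p_k = 1 / 3^(3n) for an n-qutrit TQOTP key of 3n ternary digits.
for n in (1, 2, 3):
    p = 1 / 3 ** (3 * n)
    print(f"n = {n}: p_k = {p:.3e}  ({p * 100:.4f} %)")
# n = 1 -> 3.7 %,  n = 2 -> ~0.14 %,  n = 3 -> ~5.08e-3 %
```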
Additionally, quantum key distribution protocols such as BB84 can be used to generate and transmit the secret key between the user and the delegation party, which makes the secret key unconditionally secure. As a result, our schemes become more secure. The eavesdropper cannot obtain effective and complete information about either the user's encrypted data or the output state of EvaluateAlgorithm. Even if the eavesdropper forcibly measures the intercepted data, she obtains only random information about the encrypted data. Owing to the quantum no-cloning theorem and the Heisenberg uncertainty principle, together with the TQOTP scheme, the eavesdropper gains essentially no mutual information about the unknown cipher states, namely I(ρ_c : Eve) ≈ 0. There is only one round of information exchange between the user and the delegation party in our TQHE schemes, rather than the several rounds required in blind quantum computation. The benefits of our schemes are a high level of privacy for the user's data and a reduction in the number of times sensitive data is exposed. When performing EvaluateAlgorithm, the delegation party executes the corresponding homomorphic operator on the user-given cipher states. The homomorphic operator is equivalent to a new encryption operator acting on the user-given ciphertext; it is known only to the delegation party, and the eavesdropper knows nothing about it. Without knowing the secret key and the homomorphic operator, the eavesdropper cannot learn the secondarily encrypted quantum states, i.e., I(ρ_c : Eve) ≈ 0. The binary QHE schemes presented in Ref. 13 are efficient and perfectly secure, i.e., I(ρ_c : σ_m) = H(ρ_c) − H(ρ_c | σ_m) = 0, which shows that the ciphertext is independent of the plaintext. However, a deterministic quantum fully homomorphic encryption scheme necessarily incurs exponential overhead if perfect security is required [28], which is very difficult to implement in practice. The TQHE schemes in this paper are not perfectly secure because the set of encryption operators U_k does not form a complete orthogonal basis of the n-qutrit Hilbert space; as a result, the output quantum states of EvaluateAlgorithm and EncryptionAlgorithm are not totally mixed states. According to Definition 1 in Ref. 11, our TQHE schemes are all ε-secure.

Conclusion

At present, there is little research on QHE. To the best of our knowledge, this paper is the first to present TQHE schemes based on TQOTP. First, we proposed a TQOTP protocol on which the TQHE schemes are based. Second, for the ternary quantum rotation gate R^(ij)_∂, we constructed the corresponding homomorphic operator and the first TQHE scheme. Then, for a general one-qutrit gate synthesized from eight R^(ij)_∂ gates, we presented the second TQHE scheme. Third, on the basis of Ref. 23 and the GCX(m') gate, which can serve as a two-qutrit universal gate, we constructed the third TQHE scheme for the GCX(m') gate. Referring to Refs. 25-26 and the third TQHE scheme, we generalized to the n-qutrit case and theoretically presented the fourth TQHE scheme. Finally, we discussed two aspects of the schemes' security: the probability that an attacker correctly guesses the secret key is extremely low, and the attacker learns almost nothing about the two kinds of cipher quantum states, namely the output encrypted quantum states of EvaluateAlgorithm and EncryptionAlgorithm.
Future research directions include finding a TQOTP scheme with perfect security and constructing an asymmetric TQHE scheme in which EvaluateAlgorithm depends only on the public key and not on the private key. The latter remains an open problem. One possible approach is to modify the quantum public-key encryption schemes in Refs. 29-30 so that they become asymmetric QHE schemes. If this goal were achieved, computation on quantum ciphertexts could be securely outsourced, and blind quantum computation could be implemented in this way.
2015-05-12T01:54:32.000Z
2015-05-12T00:00:00.000
{ "year": 2015, "sha1": "17a167db6b3e94477814b505a8ffc42051fbf0ac", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "17a167db6b3e94477814b505a8ffc42051fbf0ac", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
4924289
pes2o/s2orc
v3-fos-license
Multiple Endocrine Neoplasia with Pulmonary Localization: A New Protocol of Approach We present three patients with bronchial carcinoids, in which a more probed study emphasized the presence of three multiple endocrine neoplasia (MEN). Assessment included a total-body computerized tomography, a total-body single-photon emission computerized tomography by In-DTPA-D-Phe octreotide, and genetic map. Two patients presented an atypical MEN 1 and one patient showed an atypical MEN 1 with a familial medullary thyroid carcinoma. All patients were operated upon: two are still alive and one died 50 months after the first intervention. Precocious diagnosis of MEN permits a good long-term outcome. INTRODUCTION Multiple endocrine neoplasia (MEN), transmitted by an autosomal-dominant pattern of inheritance, represent a rare variation of neuroendocrine tumors. MEN show the presence of various endocrine neoplasms in the same patient that can appear either at the same time, at a distance of time, or never be visible. Clinical aspects are variable and complicated due to the production of the biologically active substances. There are two major forms: MEN 1 and MEN 2 [1,2]. MEN type 1 is characterized by parathyroid, pancreatic islet, and anterior pituitary tumors; gastrointestinal and lung neoplasms or suprarenal adenomas can be found. MEN type 2 presents these subgroups: (1) MEN 2A, characterized by medullary thyroid carcinoma, pheochromocytoma, and primary hyperparathyroidism; (2) MEN 2B, characterized by medullary thyroid carcinoma, pheochromocytoma, marfanoid habitus, and ganglioneuromatosis; (3) FMTC, in which medullary carcinoma appears alone. Our experience refers to this complicated topic through the study of three patients with the common characteristic of the presence of bronchial carcinoids. CASE 1 A 28-year-old woman came to our observation with hemoptysis. Chest X-ray revealed a lesion in the right pulmonary parenchyma. Total-body computerized tomography (CT) highlighted a 4-cm diameter mass located in the middle pulmonary lobe and a 2-cm pancreatic neoformation with clear outlines. Fiber-optic bronchoscopy allowed diagnosis of typical carcinoid. Total-body single-photon emission computerized tomography (SPECT) by 111 In-DTPA-D-Phe 1 octreotide confirmed the localizations cleared at CT. Serum levels of neuron-specific enolase (NSE) and chromogranin A (CgA) were positive; urinary level of 5hydroxy-3-indoleacetic acid (5-HIAA) was negative. Immunohistochemistry was positive for CgA and synaptophysin. The patient underwent middle lobectomy and pancreasectomy; postoperative histological evaluation was of typical bronchial carcinoid (T 1 N 0 M 0 : Stage IA) and pancreatic insular tumor type gastrinoma with G nonsecreting cells. In consideration of these characteristics, the anamnesis and a clinical-functional study of parathyroid and suprarenal glands were carried out; results were also normal. Genetic map revealed a deletion of three a.a. on exons 6 and 8 of oncosuppressor gene MEN 1. After 7 years from the interventions, the patient shows excellent health, without recurrence of pathology. On the basis of the clinical and surgical history, a final diagnosis of atypical multiple neuroendocrine syndrome type MEN 1 was expressed. CASE 2 A 54-year-old woman came to the emergency room with acute sight disorders and a persistent headache. Total-body CT showed a 3.5-cm diameter lesion located in the hypophysis. 
Intraoperative histological diagnosis was of inactive pituitary macroadenoma, with consequent excision and complete resolution of the symptomatology. One year and 3 months later, the patient came to our observation with hemoptysis. Total-body CT highlighted an atelectasis of the right superior pulmonary lobe. Fiber-optic bronchoscopy revealed a smooth red neoformation in the right upper lobe bronchus; multiple biopsies allowed diagnosis of atypical carcinoid. Total-body SPECT by 111 In-DTPA-D-Phe 1 octreotide confirmed a singular localization in the thorax. Serum level of NSE (0-15.2 µg/ml) and CgA (60 ηg/ml) was positive as was the immunohistochemical assessment of CgA and synaptophysin. Patient underwent superior right lobectomy with a histological evaluation of atypical bronchial carcinoid without lymph node metastases (T 2 N 0 M 0 : Stage IB). Anamnesis revealed that the 60-year-old brother underwent lung excision for carcinoma. Functional study of parathyroid and suprarenal glands gave normal results. Genetic map highlighted an alteration on exon 8 of oncosuppressor gene MEN 1. Four years from intervention, the patient shows excellent health, without neuroendocrine syndrome. Clinical and surgical history suggested a definitive diagnosis of atypical multiple neuroendocrine syndrome type MEN 1. CASE 3 A 44-year-old man came to our observation because of a hemoptysis and persistent cough. Total-body CT highlighted a 4.5-cm diameter neoformation located in the inferior right pulmonary lobe with clear outlines (Fig. 1). Fiber-optic bronchoscopy revealed a smooth vegetation in the right lower lobe bronchus; biopsies allowed diagnosis of typical carcinoid. Total-body SPECT by 111 In-DTPA-D-Phe 1 octreotide confirmed the thorax localization. Serum level and immunohistochemistry of tumor markers were positive. Patient underwent inferior right lobectomy for typical bronchial carcinoid without lymph node metastases (T 2 N 0 M 0 : Stage IB). One year later, the patient was hospitalized owing to visual field illness. Total-body CT and SPECT by 111 In-DTPA-D-Phe 1 octreotide showed a 2.5-cm diameter lesion located in the hypophysis (Fig. 2). Intraoperative histological diagnosis was of inactive pituitary macroadenoma; excision allowed resolution of the symptomatology. Two years after the second intervention, the patient carried out a routine control examination that checked the presence of a palpable nodule in the left thyroid lobe. CT of the neck and a total-body SPECT by 111 In-DTPA-D-Phe 1 octreotide showed a circular left thyroid lesion with diameter of 2.5 cm (Fig. 3). Calcitonin levels were elevated to 650 pg/ml. Fine needle aspiration biopsy (FNAB) permitted diagnosis of medullary thyroid carcinoma. Patient underwent total thyroidectomy for stage II medullary thyroid carcinoma. Anamnesis showed that the father and the great uncle had thyroid disease. Functional study of parathyroid and suprarenal glands was negative. Genetic map highlighted a site-splice alteration on exon 6 of oncosuppressor gene MEN 1 and proto-oncogene RET missense mutation. On the basis of the clinical and surgical history, a final diagnosis of atypical multiple neuroendocrine syndrome type MEN 1 associated to FMTC was expressed. Eight months later, the patient showed a bone metastases (right femur, ribs and cranial). He was treated with somatostatin analog (octreotide), but passed away after 6 months because of cardiac failure. 
DISCUSSION Our experience underlined the necessity of an accurate anamnesis and a total-body CT and SPECT by 111 In-DTPA-D-Phe 1 octreotide in the multiple endocrine syndrome. MEN 1 is linked to germline mutations on the MEN 1 tumor suppressor gene located on the long arm of chromosome 11 (11q13) [3]. Germline mutations on the proto-oncogene RET located on the long arm of chromosome 10 (10q11.2) are responsible for MEN 2 [4]. A genetic map of the patient and his/her family can show unrecognized inherited diseases and presymptomatic diagnosis. In fact, the simple alteration of genetic screening in MEN 2 authorizes the early prophylactic thyroidectomy. CT allows the primary identification of lesions and the evaluation of topographic and morphological aspects; neoformations smaller than 1 cm cannot be distinguished with CT. SPECT by 111 In-DTPA-D-Phe 1 octreotide defines the exact localization and stage of tumor owing to the high affinity to sst2 and the low affinity to sst3 and sst5 subtype receptors of somatostatin; this device orientates the surgical approach. Physiological characteristics of octreotide (long half life of the tissue and rapid plasmatic clearance) explain the higher specificity and sensitivity with SPECT by 111 In-DTPA-D-Phe 1 for the diagnosis of MEN, lymph node invasion, and repeat phenomena. Shi et al. [5], in a comparative study of SPECT vs. CT and magnetic resonance imaging (MRI), show unexpected mass in 40 sites not detected by conventional techniques and defined by octreoscan in 50% of MEN patients. Complexity of neuroendocrine tumors make the use of chemotherapy still debatable [6]. Octreotide is a valid alternative because it inhibits the growth factors on the tumor cells and increases the binding protein IGF-I. This method must be applied in the case of metastases, clinical intolerance of complementary treatment, or chemoresistance. The role of surgery is uncontroversial, determining the pathological stage of the tumors and long-term survival. Our study found the rare presence of pulmonary localization and the unusual association between MEN 1 and FMTC. A bone relapse after many years in one patient highlights the low oncological aggressiveness of the neoplasms. The surgical approach must ensure histological completeness. The indications of excisions have already been discussed and established in bronchial carcinoids [7]. Nikou et al. [8], in Zollinger-Ellison and MEN 1 syndrome patients, treated only 54% of pancreatic or duodenal gastrinomas surgically and the associated parathyroid adenomas in all patients, pituitary adenomas in three patients, and bronchial carcinoid in one patient. Survival rate was 91%. CONCLUSIONS The pathogenesis and clinical features of MEN need a protocol of study based on genetic, radiological, functional analysis, and follow-up of the different organs potentially involved in the MEN. This behavior permits an effective precocious diagnosis and surgical treatment, improving the prognosis of patients. In the future, the progress of genetic-molecular techniques will make it easier to classify MEN patients, with a nearly complete possibility of healing.
2018-04-03T05:08:20.257Z
2008-08-08T00:00:00.000
{ "year": 2008, "sha1": "9b186f39ca11c0256137ed3300f3330d147bc565", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/tswj/2008/606251.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3e3647901e7e9e18d09584f34b79aa9f6d69646f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236981326
pes2o/s2orc
v3-fos-license
A Ka-Band Balanced Four-Beam Phased-Array Receiver With Symmetrical Beam-Distribution Network in 65-nm CMOS This paper demonstrates a Ka-band CMOS phased-array receiver capable of generating four balanced beams from two inputs. A passive four-beam symmetrical differential network is proposed to distribute two input signals to eight channels and facilitate beam generation at the outputs in the receiver. The detailed design and optimization of the passive four-beam differential network are presented. Using a 6-bit passive vector-modulated phase shifter and a 5-bit switched-type attenuator in each channel, we implement a phase control of 360° with <4° RMS phase error and a gain control of 17 dB with <0.35 dB RMS gain error from 27 to 31 GHz. The receiver consumes a current of only 40 mA under 1 V supply voltage. Eight channels of the four-beam phased-array receiver were measured and a gain mismatch of less than 0.3 dB has been achieved. Beam to beam couplings are investigated by measurements and beam-to-beam isolation are better than 32 dB from 27 to 31 GHz. The chip size is $2.6\times 4$ mm2 including all digital control circuitry and pads. assemble multiple single-beam phased arrays in one board as show in Figure 2(a). However, this solution leads to a bulky, expensive and energy inefficient design for multibeam generation, with N B × N A chips needed to generate N B beams with N A -element phased arrays. The other solution is shown in Figure 2(b), where multiple concurrent beams can be received by a single phased array using multi-beam receiver front-ends. For N CH channels of each beam at the same antenna aperture size, the solution in Figure 2(b) has higher gain than that in Figure 2(a). In fact, for a total of N A antenna elements to support N B beams, the gain of the phased array in Figure 2(a) is only from N A /N B antennas while the gain of the multi-beam phased array in Figure 2(b) is from N A antenna elements. In other words, the solution in Figure 2(b) can reduce the system size of a multi-beam phased array by using the high-integration multi-beam chip. As an example, to realize four beams with each beam drawing from the same number of antennas, the integrated four-beam chip can help save about 75% physical size. Multi-beam phased-array transceivers have been proposed in [8]- [12] to improve system integration and reduce cost. In [9], a hybrid beamforming phased array is proposed, with two baseband streams generated from eight antennas. Also, the work in [8] proposed a dual-band four-beam receiver. Although the designs could be employed for a large-scale phased array, a high power consumption would be introduced by the large number of mixers and the LOs. Active combining networks are used in the eight-beam receiver [10] and four-beam receiver [11], introducing extra power consumption. Besides, imbalance is caused in [11] by the different length of transmission lines at the beam outputs. Therefore, the multi-beam chip design has special challenges to properly design the signal distribution networks and to reduce the couplings between different beams. Furthermore, the power consumption needs to be reduced for space use. This paper presents a Ka-band four-beam phased-array receiver based on [13] for SATCOM application, featuring balanced and beam generation, accurate gain/phase control and ultra-low power consumption. A passive symmetrical beam-distribution network (SBDN) is proposed for generating four balanced beams with high isolation. 
Accurate phase and gain controls of each channel are achieved by a vector-modulated phase shifter (VMPS) and a switchedtype attenuator (STA). The fully-passive SBDN, gain and phase tuning blocks, along with the low-power low noise amplifiers (LNAs) contribute to an extremely low power consumption. In consequence, the complete two-element fourbeam phased-array receiver only consumes a total of 40 mW DC power, which satisfies the low-power requirement of space use. The rest of this paper is organized as follows. Section II presents the overall architecture of the proposed twoelement four-beam phased array. Section III demonstrates the detailed circuit implementation of the building blocks, including the SBDN, the pre-LNA, the phase shifter and the attenuator. In section IV, single-channel measurement and beam coupling characterization through measurement results are presented. Section V concludes this paper. II. SYSTEM ARCHITECTURE For the large-scale phased arrays, low power consumption is of great importance. While the multiple-beam receiver chip can help reduce the system size, the large number of phase/amplitude tuning blocks, required for multiplebeam operation, would still cause high power consumption. To tackle this challenge, a two-element four-beam receiver chip consuming extremely low power is proposed. Figure 3 shows the block diagram of the proposed twoelement four-beam receiver chip. The two pre-LNAs are employed to suppress the noise of the subsequent circuit stages and improve the noise figure of the system. Then, the two outputs from the pre-LNAs are each divided into four signal paths, which forms two elements for each beam. The elements for the same beam are placed adjacently. A passive SBDN featuring perfect balance and high isolation is proposed to perform the aforementioned two-to-eight signal division. After the SBDN, each of the eight channels incorporates the phase and gain control blocks, which are independently controlled. The phase tuning is implemented by a 6-bit fully-passive VMPS and the gain tuning is implemented by a 5-bit STA, both introducing zero DC power consumption. After the phase and gain tuning blocks, four combiners are adopted to sum up the signals from element 1 and element 2, which generate four output beams. Finally, an amplifier is added after the combiner to increase the gain of each beam. To take advantage of the low-noise GaAs technology, external GaAs LNA will be added before the pre-LNA, which can suppress the noise of the CMOS chip and reduce the system noise figure. GaAs LNAs with good performance have been reported in [14]- [20]. In particular, broadband LNAs with 1-2 dB noise figure (NF) and 20 dB gain have been achieved in [14]- [16]. In Figure 4, the calculation of the gain and noise performance with and without the GaAs LNA is summarized. As indicated, a 2.3 dB gain and 12.3 dB NF can be achieved by the CMOS chip, without GaAs LNA. By adding a GaAs LNA with 17 dB gain and 1.2 NF in [19], the gain of the whole system can be increased to 19.3 dB and the NF can be reduced to 2.1 dB. Therefore, the complete system can achieve low power consumption and low NF at the same time. III. CIRCUIT IMPLEMENTATIONS A. SYMMETRICAL BEAM-DISTRIBUTION NETWORK In this work, a two-to-eight SBDN is required to distribute the signals from the two inputs. Since the passive distribution networks have the advantage of high linearity and zero dc power consumption, the SBDN in this work is implemented by a symmetrical two-stage Wilkinson power divider (WPD). 
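The system-level gain and noise-figure numbers quoted from Figure 4 follow from the standard Friis cascade formula; the short sketch below reproduces them. The stage values are the ones stated in the text (GaAs LNA: 17 dB gain, 1.2 dB NF; CMOS chip: 2.3 dB gain, 12.3 dB NF); the cascade function itself is generic.

```python
import math

def db_to_lin(x_db):
    return 10 ** (x_db / 10)

def cascade_nf_db(stages):
    """stages: list of (gain_dB, nf_dB); returns total NF in dB via Friis."""
    f_total, g_running = 0.0, 1.0
    for gain_db, nf_db in stages:
        f = db_to_lin(nf_db)
        f_total += (f - 1) / g_running if f_total else f
        g_running *= db_to_lin(gain_db)
    return 10 * math.log10(f_total)

stages = [(17.0, 1.2), (2.3, 12.3)]          # (external GaAs LNA, CMOS receiver chip)
print(sum(g for g, _ in stages))             # total gain ~ 19.3 dB
print(round(cascade_nf_db(stages), 2))       # total NF  ~ 2.1 dB
```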
The main design target of this network includes: 1) high isolation between different beam branches; 2) identical transmission performance for each branch; 3) low loss and compact area; 4) facilitation of beam synthesis. The circuit diagram of the proposed SBDN is shown in Figure 5, which consists of single-ended and differential WPDs, differential interconnection lines and transformerbased baluns. The eight outputs (i.e., OUT A 1 -OUT A 4 and OUT B 1 -OUT B 4 ) distributed from the two inputs (i.e., IN A and IN B) are arranged in a way that facilitates beam synthesis. At the first stage of the network, a single-ended WPD is employed. To reduce the large area of the λ/4 transmission line (TL) in the WPD, lumped-element WPDs have been employed [21]. In this design, the λ/4 TL is implemented by a five-tap inductor to ensure compact size and broad bandwidth simultaneously. The simulated S-parameter of the compact WPD are shown in Figure 6. The isolation and return loss are better than 25 dB and the insertion loss is less than 0.64 dB across the 27-31 GHz band. The differential WPDs are required at the second stage. A low-loss, broadband and compact differential λ/4 TL is the key to implement a high-performance differential WPD. To reduce chip area, transformer-based [22] and inductorcapacitor-based [23] differential WPD have been proposed. However, the former introduces phase and magnitude imbalance and the latter suffers from narrow bandwidth. In this design, a capacitor-free differential λ/4 TL with compact area is proposed, as shown in Figure 7(a), where L p and L n represent the inductors in the positive and negative paths, respectively. The parasitic capacitance between L p and L n inherently forms the capacitance of a λ/4 TL, which avoids the use of extra capacitors and thus improves the bandwidth. Besides, benefiting from the electromagnetic enhancement between L p and L n , the insertion loss of the λ/4 TL is greatly port impedance at 29 GHz, while the line space (S D ) is varied. As expected, the characteristic impedance will increase with larger line space, due to the decreased parasitic capacitance between the metal lines. Nonetheless, it should be noted that there is a lower bound of the characteristic impedance of the differential λ/4 TL due to physical limitation of the technology. In this work, line space of 2 µm and line width of 4 µm are chosen for the λ/4 TL, which exhibits a simulated characteristic impedance of 113 . Thus, the characteristic impedance of the differential WPD is 80 . Figure 8 shows the simulated S-parameters and magnitude/phase imbalance between the two output ports. As indicated, the differential WPD achieves a simulated 0.36 dB loss and 25 dB isolation from 27 to 31 GHz. The phase and magnitude imbalances are less than 0.05 • and 0.001dB, respectively. To reduce the coupling between the interconnects for different beams, differential lines are preferred for the outstanding anti-interference ability [24]. Considering the pair of differential lines shown in Figure 9, the line distance (D), line length (L), line width (W ) and line space (S) will affect the coupling between the two differential lines. The coupling factor of the differential lines under various D and L is compared in [24], which indicates that large D and small L contribute to lower coupling. For further improvement of isolation and also to reduce insertion loss, the optimization of W and S is necessary. 
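For orientation, the lumped equivalent of a quarter-wave branch can be estimated from the textbook relations L = Z/ω and C = 1/(Zω); the sketch below does this for a 50 Ω Wilkinson divider at 29 GHz. The numbers are purely illustrative, since the actual five-tap inductor and the 80 Ω differential branches in the chip are EM-optimized rather than derived this way.

```python
import math

f0 = 29e9                       # design frequency (Hz)
z_port = 50.0                   # port impedance (ohm)
z_line = z_port * math.sqrt(2)  # ~70.7 ohm quarter-wave branch of a 50-ohm WPD
r_iso = 2 * z_port              # 100-ohm isolation resistor

omega = 2 * math.pi * f0
L = z_line / omega              # series inductance of the pi equivalent
C = 1 / (z_line * omega)        # shunt capacitance on each side of the pi

print(f"branch impedance : {z_line:.1f} ohm, isolation R: {r_iso:.0f} ohm")
print(f"L = {L*1e12:.1f} pH,  C = {C*1e15:.1f} fF")
```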
Figure 9(a) and (b) depict the forward and reverse coupling simulation setup for the pair of differential lines, where Z o represents the characteristic impedance of the differential line. Based on the settings in Figure 9, electro-magnetic (EM) simulations are performed to calculate TL insertion loss versus isolation and TL characteristic impedance versus S, with various W and S settings. Figure 10(a) suggests that a low S contributes to high isolation between the two differential lines, mainly because that much less electromagnetic energy is leaked to the outside of the differential lines. However, setting the S for optimal isolation will cause several problems. First, the low S for high isolation will cause high loss, as revealed by Figure 10(a). Second, a low S will reduce the characteristic impedance Z o to <60 , as presented in Figure 10(b). This will cause impedance mismatch at the differential WPD input ports, since there is a low bound of the characteristic impedance of the differential WPD, as mentioned. In consequence, a line space of 6 µm and line width of 5 µm contributing to 80 Z o is chosen for the differential interconnection lines, which ensures low loss and relatively high isolation. The routing of the differential interconnection lines is carefully designed for symmetry, in order to achieve balanced outputs. Finally, the transformer-based baluns (i.e., balun 1 and balun 2) are employed to perform the single-ended to differential conversion signal and also provide impedance matching. The performance of the proposed SBDN is simulated by the ADS Momentum simulator. The simulations indicate the return loss of the input and output ports are 15-27 dB from 27 to 31 GHz. The insertion losses are less than 10 dB across the 27-31 GHz band including the 6 dB intrinsic loss as shown in Figure 11(a). The loss differences among the eight distribution branches (i.e., from IN A to OUT A x and from IN B to OUT B x ) are less than 0.1 dB, which is mainly caused by the implementation of the lower metal lines in the three crossovers (see the shadow in Figure 5). The output ports isolations are shown in Figure 11(b). As can be seen, the isolations between different beams (i.e., between OUT A x and OUT B x ) are higher than 46 dB. The isolations between the signal branches of a same power divider from individual input A or B(i.e., between OUT (A/B) 1 B. PRE-LNA The pre-LNA is intended to suppress the noise of the CMOS phased-array and improve the noise figure of the system. As shown in Figure 12, the first stage employs a small source-degenerated inductor of 55 pH to achieve input impedance and noise matchings simultaneously. To achieve low noise figure and broadband impedance matching with low power dissipation, the total gate width of 2 × 32 µm is chosen for all transistors. The L-C-L network at the interstage can provide two frequency peaks to realize broadband matching while minimizing the insertion loss. The spiral inductors are widely used for saving the chip area and the capacitors are adopted between stages for independent biasings which are generated by the bandgap reference circuit. Furthermore, design iterations are required to fine tune matching circuits for optimal performance. The simulated results are plotted in Figure 13, it shows that the gain of the pre-LNA is 20 dB. The gain variation is less than 1 dB from 26 to 32 GHz. The two peak gains are 21 dB and 20.85 at 26.3 GHz and 31 GHz, respectively. The noise figure is less than 4.3 dB across 26-32 GHz. 
The minimum noise figure is 4 dB at 29-31 GHz. From 27 to 31 GHz, the simulated S 11 and S 22 are less than −11 dB and −8 dB, respectively. The simulated input P 1dB is −27 dBm. Figure 14 shows the block diagram of a fully-passive vectormodulated phase shifter (VMPS). It consists of a 3-dB quadrature coupler, two fully-passive phase-invertible gain tuning blocks, a power combiner and the matching networks (MN). The design is similar to [25]. The 3-dB quadrature coupler is implemented by vertically coupled microstrip lines using the top two metal layers, as shown in Figure 15. The microstrip lines are folded to reduce chip area. The simulated amplitude and phase responses of the coupler is depicted in Figure 16. The IQ gain and phase errors are less than 0.2 dB and 2.1 • within the 26 − 32 GHz band, implying good IQ balance. Then, the two generated quadrature signals are weighted by the switch-array-based gain tuning blocks. The gain tuning block contains a total of six cross-connected the transistor array provides phase inverting operation for the passive gain tuning block and thus ensures phase shifting in all four quadrants which covers full 360 • phase-shift range. The transformer-based MNs are employed for the input and output impedance matching of the gain tuning blocks, which also provide the conversions between the single-ended and differential signals. Then, the I-and Q-path signals are summed up by the Wilkinson-like power combiner. Lumped inductors and capacitors are adopted to implement the λ/4 transmission line, based on its lumped L-C model, in order to reduce chip area. The passive VMPS are digitally controlled by a total of 12 bits (i.e., 6 bits for each of I and Q paths), which provides 4096 possible phase shifting states. A sub-selection of the 4096 states ensuring both accurate phase shifting and low gain error are determined based on the simulated phase responses. The corresponding controls are stored in a look-up table (LUT) and can be refreshed according to the measured results, if changes are necessary. D. SWITCHED-TYPE ATTENUATOR A switched-type attenuator (STA) is employed to achieve the linear-in-dB gain tuning. The schematic of the 5-bit STA is shown in Figure 17. Accurate gain tuning and good impedance matching can be ensured by optimizing the resistance values of each attenuation cell. Capacitive compensation technique is used to enhance the attenuator performance over a wide operation bandwidth. According to the simulation, the stand-alone 5-bit STA has an RMS amplitude error of less than 0.1 dB from 27 to 31 GHz. This design is similar to [26] while the source and load impedance of the attenuator are further optimized to ensure wide band amplitude tuning performance. Noted that the attenuation performance would deteriorate if the attenuator is connected to poorly matched source or load impedances. To evaluate this effect, the RMS amplitude errors of the STA under different source and load impedances at 30 GHz are simulated and depicted in the Smith chart (see Figure 18). As revealed, the RMS amplitude error exhibits degradation when the source or the load impedance deviates from the perfect 50 . Thus, the source and load impedances connected to the STA should be carefully designed to avoid possible performance degradation. In this work, this is accomplished by a proper arrangement of the circuit stages as shown in Figure 3. This configuration takes advantage of the inherently good matching at the phase shifter output and the combiner input. 
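To make the vector-modulation principle of the VMPS described above concrete, the following sketch enumerates signed I/Q weight pairs and picks, LUT-style, the pair that best approximates a target phase at a roughly constant gain. The 6-bit-per-path resolution mirrors the description in the text, but the weight values and the selection cost function are hypothetical.

```python
import numpy as np

bits = 6
codes = np.arange(-(2**(bits - 1) - 1), 2**(bits - 1))   # signed weights (phase inversion)

# Enumerate all (I, Q) weight pairs and their resulting phase / gain.
I, Q = np.meshgrid(codes, codes, indexing="ij")
phase = np.degrees(np.arctan2(Q, I)) % 360.0
gain = np.hypot(I, Q)

def pick_state(target_deg, nominal_gain):
    """LUT-style selection: minimise phase error first, then gain error."""
    phase_err = np.abs((phase - target_deg + 180) % 360 - 180)
    err = phase_err + 1e-3 * np.abs(gain - nominal_gain)
    idx = np.unravel_index(np.argmin(err), err.shape)
    return int(I[idx]), int(Q[idx]), float(phase[idx])

if __name__ == "__main__":
    nominal = float(np.median(gain))
    for t in (0.0, 5.625, 90.0, 222.1875):
        i, q, p = pick_state(t, nominal)
        print(f"target {t:8.3f} deg -> I={i:+3d}, Q={q:+3d}, phase={p:7.3f} deg")
```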
As shown in Figure 19, the impedances at the output of the phase shifter and the input of the combiner are located in the low-amplitude-error area and thus ensure high attenuation accuracy.

IV. MEASUREMENT RESULTS

The four-beam phased-array receiver is implemented in 65-nm CMOS technology. Figure 20 shows the die micrograph of the receiver chip, which occupies 2.6 × 4 mm² including all pads. A block diagram of the measurement setup is shown in Figure 21. The SPI is controlled by a field-programmable gate array (FPGA). The RF measurements are done using GSG probes on a high-frequency probe station. Note that the measurements can only be performed by probing one input and one output at a time (i.e., one of the two channel inputs and one of the four beam outputs).

A. SINGLE-BEAM MEASUREMENTS

The balance of the SBDN is verified by measuring the S-parameters of each channel (i.e., from the two inputs to the four outputs). As shown in Figure 22, the return loss (S11 and S22) and reverse isolation (S12) are < −10 dB and < −60 dB, respectively. The magnitude and phase errors of S21 are shown in Figure 23. As can be seen, owing to the symmetrical design of the SBDN, the gain and phase mismatches of the eight channels remain less than 0.3 dB and 2°, respectively. Figure 24 depicts the measured relative gain and relative phase, indicating that the phased array achieves approximately 17 dB of gain tuning range with a 0.53 dB tuning step and a 360° phase-shift range with a 5.625° phase step. The corresponding RMS gain and phase errors are shown in Figure 25 and are less than 0.35 dB and 4°, respectively. The measured and simulated NF are shown in Figure 26. As can be seen, the measured NF is 10.8-11.7 dB at 26-31 GHz, slightly less than the value of 12.3 dB calculated in Figure 4, because the measured gain increased to 3 dB. It should be noted that, in order to reduce power consumption, the gain of each channel is kept low, so the noise figure of the channel is relatively high; it will be reduced to about 2 dB by adding an external GaAs LNA. The measured input P1dB is −22 dBm at 29 GHz.

During the probe-station test for beam-to-beam coupling characterization, both the input port of CH1 and the output ports of B1, B2, and B3 are left open-circuited. Note that when these four ports are terminated with 50 Ω, as in the real application, the measured beam couplings would be further reduced. The couplings from beams 1, 2, and 3 to beam 4 can be obtained by measuring beam 4 at a constant phase setting and sweeping the phases of beams 1, 2, and 3 over 360°. The amplitude and phase of the output B4 will be affected by the couplings from beams 1, 2, and 3, which can be characterized as vectors C1·e^(jϕ1), C2·e^(jϕ2), and C3·e^(jϕ3) [30] (see Figure 27), respectively. Suppose the unaffected output signal of B4 is Y·e^(jϑ). The magnitude of the coupling vector C can be calculated by Eq. (1), where E_amp and E_pha represent the amplitude error and phase error. Then the coupling coefficient, defined as C_coe = C/Y, can be obtained from the measured data. Figure 28(a) and (b) show the measured gain and phase deviations caused by the couplings from beams 1, 2, and 3. Among these three beams, the gain and phase deviations are 0.17 dB and 0.9° at 29 GHz, respectively. The corresponding RMS gain and phase errors are presented in Figure 29(a) and are less than 0.1 dB and 0.5°, respectively. The coupling coefficients are expressed in dB, as shown in Figure 29(b). As can be seen, the couplings from beams 1, 2, and 3 are all better than −32 dB.
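Equation (1), which relates the coupling magnitude C to the measured amplitude and phase deviations, is garbled in this extraction. The sketch below is a small-signal reconstruction consistent with the sweep procedure described above (an interferer C·e^(jφ) rotating around the wanted output Y·e^(jϑ)), applied to the 0.17 dB / 0.9° deviations quoted at 29 GHz; it is an assumed model, not necessarily the authors' exact expression.

```python
import math

def coupling_from_amp(e_amp_db):
    """C/Y inferred from the peak amplitude deviation (dB): 20*log10(1 + C/Y)."""
    return 10 ** (e_amp_db / 20) - 1

def coupling_from_phase(e_pha_deg):
    """C/Y inferred from the peak phase deviation (degrees): atan(C/Y)."""
    return math.tan(math.radians(e_pha_deg))

if __name__ == "__main__":
    c_amp = coupling_from_amp(0.17)    # 0.17 dB gain deviation at 29 GHz
    c_pha = coupling_from_phase(0.9)   # 0.9 deg phase deviation at 29 GHz
    for name, c in (("from amplitude", c_amp), ("from phase", c_pha)):
        print(f"{name}: C/Y = {c:.4f}  ({20 * math.log10(c):.1f} dB)")
    # Both estimates land in the -34 to -36 dB range, consistent with the
    # reported beam-to-beam coupling of better than -32 dB.
```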
This low level of coupling is owed to the symmetrical structure of the beam-distribution network. Note that the coupling curves of beams 1 and 2 have similar shapes. As indicated by the simulated isolation of the SBDN, the coupling curve of beam 3 is different because the signals of beams 3 and 4 are divided from the same power divider. Similar measurements were performed on beams 1, 2, and 3, and similar results were obtained. Table 1 summarizes the performance of phased-array receivers and shows that this work achieves four-beam generation with competitive phase-shifting and gain-tuning accuracy at ultra-low power consumption.

V. CONCLUSION

A Ka-band four-beam phased-array receiver is presented in this paper. A symmetrical beam-distribution network is used to achieve balanced four-beam generation and high beam isolation in a compact area. From single-channel measurements, the phased array achieves an RMS gain error of less than 0.35 dB and an RMS phase error of less than 4° from 27 to 31 GHz. The beam imbalance and beam-to-beam coupling are reported for the first time, and are less than 0.3 dB and better than −32 dB, respectively. While the NF of the chip is 12.3 dB, it can be improved to 1.9 dB by the external GaAs LNA. The beam couplings of the phased array have been investigated, and excellent beam isolation is achieved. The passive beam-distribution network and the passive phase shifter and STA in each channel contribute to the extremely low power consumption (only 40 mW) of the proposed four-beam receiver. All of this makes the design appropriate for large-scale space use.
2021-08-12T13:21:19.202Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "b433cf05ba602d693c10ed6aefd7746fd19f8e12", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09496664.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "b433cf05ba602d693c10ed6aefd7746fd19f8e12", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
212829064
pes2o/s2orc
v3-fos-license
MAPPING OF ORGANIZATIONAL MODELS IN PORTUGUESE COMPANIES : Researchers have focused on the influence of organizational models in the actions, and subsequent outcomes of organizations and the results support the view that there is indeed an association between certain features of organizational models and organizational performance outcomes. The purpose of this paper is to map the organizational models used by Portuguese companies to identify possible dominant patterns and search for differences across several dimensions (sector, size, number of customers; internal/external market). The results show a level of organizational hybridism with several models applied simultaneously and with smaller firms showing a higher emphasis on dialogue, flexibility, and response capability. There is also a general preference among Portuguese companies for the bureaucratic organizational model. The results also indicate that organizations that adopt the bureaucratic model seem to be able to implement systematic processes innovation making compatible the rules and procedures with the ability to learn and adapt. Introduction Most managers nowadays are focused on defining the vision, mission, strategy and business model of their organizations, and often forget about the importance of clearly defining how the activities will be organized and which tasks will be allocated to the available personnel. In fact, the organizational model is one of the main factors which affect organizational performance, and if chosen without clear deliberation it can lead organizations to extremely negative results. Managers need to understand the fact that organizational models evolve as the business grows and that successful organizations are those which have learned how to adapt their structure to both internal and external environmental changes. According to Mintzberg (1979), organizational models result from the dynamic interactions between organizational strategies, environmental factors and the structure of the organization. Thus, there is a variety of organizational models which can be successfully applied in order for the organization to be able to respond to the environmental forces that impact its activity and to its own characteristics (Ghinea & Ghinea, 2015). Specific organizational models will be more frequently employed in certain historical periods and certain economic periods, and there will always be a place for rational decision-making, but contemporary managers have to master the art of managing in conditions of extreme information uncertainty and ambiguity (Bavec, 2001). Learning about different organizational theories and the associated organizational models could draw managers' attention to different solutions for their organizational issues. The main argument of the current research is that managers need to be aware of the existence of other organizational theories besides those inspired by the scientific management tradition and that these newer theories might help them push their organizations to higher performance in environmental conditions characterized by ambiguity and chaos. This investigation aims to bring additional knowledge to the field of organizational theories and their application in Portuguese companies. In a knowledge, global, fastpaced, digital, and interconnected society, organizations can be challenged by technology leaps, changing values, increased competition, and globalization. 
Both incremental and disruptive innovation and thinking are required coupled with selfdisciplined, agile and timely action in response to challenges. Organizational theories have stressed the relevance of both the internal and external contexts and organizational strategies and models for success. However, overconfidence on past success and extreme self-satisfaction can be a severe organizational disease. These lead to a lack of concern for challenging entrenched beliefs, difficulties in recognizing and responding to changing environments, poor performance and inadequate culture models and structures. The present research focuses on organizational models and searches for possible correlations between the organizational models used in Portuguese businesses and the characteristics of the organizations. Is there a predominant organizational model? Do organizations choose their models based on their characteristics? Are Portuguese companies employing more than one organizational model at a time? After the literature review, quantitative research based on an online survey was carried and the results were statistically analyzed. Several conclusions were formulated pointing to a level of organizational hybridism with several models applied simultaneously. Smaller firms demonstrate a higher emphasis on dialogue, flexibility and response capability when compared to bigger ones. Literature Review Organizational theory aims to identify patterns and structures that can help organizations to avoid and solve problems, maximize effectiveness and efficiency and meet stakeholders' expectations. Part of organizational theory research focuses on the identification of conceptual models which reflect the way in which organizations behave, on the analysis of the impact that different patterns of organizational behaviors have on specific organizational objectives, and on the formulation of recommendations for organizations interested in improving their chances for success. Organizational models, also known as organizational structures, map the ways in which roles and responsibilities are allocated in a organization and the way in which processes are coordinated and supervised to ensure the achievement of the organization's objectives. A review of the theoretical underpinnings of the organizational models developed until now revealed that there are three main organizational theories which have been used to understand the way in which organizations behave: rational system theories, natural system theories, and open system theories (Onday, 2016;Scott, 2003;Scott, 1981). Rational organizational theories have as foundation Taylor's principles for scientific management, Weber's characterization of bureaucracy and Fayol's administrative theory. These theories place emphasis on the degree to which rules and procedures are formalized and the extent to which all organizational processes are oriented towards the achievement of very specific goals. As a result, organizational models inspired by rational system theories are usually focused on two specific aspects: formal structures and goal specificity. In contrast, natural organizational theories reject the notion that all organizational behaviors are governed by rational decision making and instead aim to understand the configuration of the informal structures found in organizations. 
In fact, natural system theories investigate the way in which the plurality of the organizational members' goals impacts organizational growth and survival, as well as the ways in which informal social networks that naturally appear inside organizations influence decision making. As a result, organizational models inspired by natural system theories are mostly focused on goal complexity and informal structures. Lastly, open system organizational theories opened new research avenues by breaking the organizational boundaries within which rational and natural organizational theories were confined. These theories which started to appear during the 1960s are focused on the ways in which organizations interact with their external environments to attract or mobilize resources for their own objectives (resource dependency theory) or the ways in which stakeholders' expectations and existing regulations affect the way in which organizations configure their activities (institutional theories). Thus, organizational models inspired by open system theories usually emphasize the influence of external factors on how both roles and objectives are established. Scott (2003) pointed out that current trends in the field of organizational studies are oriented, on the one hand, towards the integration of the three perspectives (e.g., contingency theory, bounded rationality theory, etc.) and, on the other hand, on criticizing the view of organizations as rational systems with predictable behaviors and on introducing new organizational models with take into account the complexity and unpredictability of social systems (i.e., organizational anarchy theory, organizational learning theory etc.). The main observation that any brief review of the literature on organizational theories and models leads to is that each strand of organizational theories has imposed its own ways of looking at what organizations are doing, and, consequently, created different models and classifications for organizational structures and behaviors. For this research, it is important to understand the differences between five types of organizational models: the rational model, the bureaucratic model, the coalition model, the organizational anarchy model. and the organizational learning model. The main characteristics of these models are presented in Table 1. There are also researchers that consider that the bureaucratic/rational approach can be problematic in volatile environments and that such organizations may not be able to change as quickly as would be required, losing competitive edge to more agile and innovative competitors (Brown, 1995). A high performance might lead to the creation of a 'strong' corporate culture (cultural homogeneity), with little incentive or encouragement to question 'ways of doing things' leading to passivity and conformism (Ghinea, 2015). This successful culture may contribute to status quo and lack of flexibility to respond to situations that might require radical change. Depending on their age, size, and environment, organizations function in diverse and complex ways, due to different flows of authority, work material, information, and decision processes (Mintzberg, 1979). Stakeholders management and satisfaction is also relevant for the way organizations operate and their outcomes (Fonseca et al., 2016). 
There are authors who consider that organization size is a relevant dimension for success, since large organizations have more valuable resources than smaller ones (Gustafsson et al., 2001; Ismyrlis & Moschidis, 2015). However, for other researchers, size is not relevant, since SMEs are more flexible and open to change than larger organizations (Briscoe et al., 2005; Lee et al., 2009; Prado et al., 2013; Psomas, 2013; Terziovski et al., 2003). There are observed differences in management approaches and performance between organizational sectors (Pekovic, 2010), and some research shows that a higher export intensity is positively related to firm performance, since firms with a higher rate of export must be more effective and efficient and have access to more knowledge (Bernard & Jensen, 1999; Ling-Yee, 2004).

Table 1. Organizational Theories and Models summary.

Rational Model. Summary: The organization is seen as a group of individuals and resources held together by a very specific set of goals. The structure of the organization is defined in alignment with the results desired by the organization. Emphasis is placed on the creation of formal structures and on clear rules and procedures for each member of the organization. Decisions are taken by managers and communication lines follow the organizational structure (Scott, 2003). Comments: Usually encountered in finance, politics, and public administration; accent placed on rule-following and results-based decision making.

Bureaucratic. Summary: The organization places emphasis on rules and specialization (Weber et al., 1947). The two main defining characteristics of the organization are: a) a hierarchical structure with clear standards and lines of authority, and b) a reliance on rational-legal authority. This means that each organizational member has a clearly defined function (Olsen, 2008), and that the principles that guide behaviors in the organization are entirely objective. In this type of organization, managers gain authority through their position in the hierarchy and there is a clear chain of command, which removes the possibility of lower organizational members receiving orders from more than one manager. Comments: Mostly found in finance and the legal system; usually employed by large and complex organizations; suitable for repetitive and relevant functions.

Coalition. Summary: The coalition organizational model is based on the idea that informal structures are formed within organizations and that these coalitions influence organizational behavior. The coalitions that naturally form between managers, employees, and any other stakeholders not only participate in setting organizational goals but also influence decision making (Cyert & March, 1963). In best-case scenarios, coalitions manage to identify a set of common goals and can cooperate. However, in most cases, there are multiple actors with conflicting interests and decision making is hindered (Sened, 1996). Thus, most decisions are reached through negotiations. Comments: Mostly found in politics or situations where power belongs to several actors (e.g., large firms); management by negotiation.

Organizational Anarchy. Summary: The organization is seen as a grouping of individuals with no clear goals, structure or procedures. Most organizational behaviors are based on trial-and-error attempts and there is no formalization (Lomim & Fioretti, 2008). Employees lack a clear job specification and their own commitment to organizational goals varies according to their own interests and needs. There is a sense of a lack of accountability and no clear authority lines. The organization is usually confronted with many problems with no simple and clear solutions. Employees adhere to inconsistent ideas and it is difficult to establish organizational preferences. The effectiveness of this organizational model depends on the ability to find patterns of interaction between participants, problems, solutions and choice opportunities (i.e., the "Garbage Can Theory") (Cohen et al., 1972). Comments: Coined after a study of higher education institutions; no predefined rules or objectives, effectiveness depends on the issues, solutions, and actors; it is particularly suitable for knowledge-intensive industries.

Organizational Learning. Summary: This model is focused on ensuring that the organization can learn and adapt (Slater & Narver, 1995). It is the responsibility of all the organizational members to search for errors and to contribute to the development of new ideas, procedures, connections, etc. which will help the organization achieve its goals. Organizations can learn either through adaptive learning (single-loop learning) or through generative learning (i.e., double-loop learning, which relies on the ability of the organization to analyze its beliefs and unacknowledged assumptions regarding goals, customers' needs, strategy and resources) (Argyris, 1993). Comments: High learning and adaptation capabilities; mostly used in knowledge-intensive industries.

Method This investigation aims to map the organizational models used by Portuguese companies to identify possible dominant patterns and search for differences across several dimensions (sector, size, number of customers, internal/external market). Given the difficulty of gathering existing data to assess the research questions, a quantitative research approach was adopted, supported by an online questionnaire yielding a sample of 96 respondents. Respondent contacts were gathered through social media (LinkedIn) and co-workers. The contacts were retrieved from the authors' co-workers, customers, and suppliers, and invitation emails were sent asking potential participants to respond to the online survey. Although online surveys can generate low response rates when compared to other survey methods, they are a suitable technique to reach quickly, and at a low cost, a specific population that is geographically dispersed and used to online activity. The answers were monitored during the survey period to check and minimize possible bias from non-respondents, and no significant changes were identified. The survey was prepared based on a review of the literature and contains 5 multiple-choice questions meant to identify the respondents and their organization and 25 Likert-scale items (1 = strongly disagree, 5 = strongly agree) for grasping the organizational models in use. Companies from the civil construction sector, the service sector, and production companies accounted for a total of 76.1% of the survey responses, and the internal consistency of the survey was validated using Cronbach's Alpha, while SPSS (v. 22) was used for the statistical calculations.

Results Around 37% of the companies included in the sample operated in the civil construction sector, followed by the service sector (24%) and production (14.6%). For a complete breakdown of the economic sectors of the companies see Table 2. Small and medium enterprises represented 70% of the total answers, which is in line with the distribution of those sectors of activities in the target population (Table 3).
The intensity of the commercial activity and export presence are presented in Tables 4 and 5. According to the results of a focus group held with four experts from these activity sectors, the sample results matched the population distribution concerning these two dimensions. Construct reliability was tested with Cronbach's Alpha, which assesses reliability through the internal consistency of each construct. The constructs presented good internal reliability values (Cronbach, 1951), as seen in Table 6. According to Maroco and Garcia-Marques (2006), since Cronbach's Alpha is greater than 0.60 we can consider the survey consistent and use the grouped items for each model (Table 7). The bureaucratic model was the most frequently used, followed by the organizational learning model. Organizational anarchy was the model with the least reported use. Due to the non-normality of all the variables, Spearman rho correlations were calculated to measure the intensity of the relationships between variables (Table 8). Spearman rho varies between -1 and 1, and the nearer the values are to these extremes, the stronger the monotonic association between the two studied variables. The sign indicates the direction of the association between X (the independent variable) and Y (the dependent variable). If Y tends to increase when X increases, the correlation coefficient is positive. If Y tends to decrease when X increases, the correlation coefficient is negative. If the value is zero, there is no monotonic association between the variables. When Spearman rho is higher than 0.60, we can state that the association between the two variables is strong (Pestana & Gageiro, 2008). There is a strong positive relationship between the bureaucratic and organizational learning models, suggesting that organizations that adopt the bureaucratic model seem able to implement systematic process innovation, making the rules and procedures compatible with the ability to learn and adapt. To further investigate whether the size of the company, the number of employees, and the intensity and nature of the commercial activity are related to the choice of organizational model, several ANOVA analyses were performed. The sample's normality was checked with the Shapiro-Wilk test, and variance homogeneity with the Levene test, and no significant violations were found. The most significant ANOVA tests are presented in Tables 9 and 10.

Conclusions Our research sheds new light on the connection between organizational characteristics and the choice of organizational model. First, we have shown that the activity sector and the size of the organization influence the type of organizational model developed. This is in line with Mintzberg's (1979) seminal work and with more recent findings from Gustafsson et al. (2001) and Ismyrlis and Moschidis (2015). Moreover, we have also found that there are no statistical differences as a result of the influence of the number of customers or the level of internationalization of the market activities, which confirmed the results obtained by Bernard and Jensen (1999) and Ling-Yee (2004). Second, our results show that the rational organizational model is more frequently used in the service sector, whereas the bureaucratic model dominates the production sector. For the coalition model (F = 4.107; p = 0.021), we found that it is more intensely used in the production sector, followed by the construction sector, and that it is less frequently encountered in the service sector.
These findings confirm Mintzberg's (1979) conclusion that organizational flows and environments have an evident influence on the organizational models employed. Third, we have seen that smaller organizations have a more pronounced tendency towards rational organizational models than larger ones, but further research is necessary to confirm it. These results shed light on the potential direction of influence of organizational size on organizational model and performance, which was signaled by Gustafsson et al. (2001) and Ismyrlis and Moschidis (2015). Fourth, our results also point towards the general preference among Portuguese companies for the bureaucratic organizational model. Organizations that adopt the bureaucratic model seem to be able to implement systematic process innovation, making the rules and procedures compatible with the ability to learn and adapt, which is in line with the remarks from Brown (1995) that organizations need to pay attention to the need to be agile and innovative to remain competitive in a dynamic environment. The coexistence of bureaucratic and organizational learning models is counterintuitive because previous research has shown that emphasis on rules and structures hinders an organization's ability to learn and to adapt. Thus, further research is needed to understand how these two organizational models influence each other and how it is possible for organizations to both focus on the formulation of strict rules and regulations and on quickly implementing new ideas and technologies. Fifth, we found that several organizational models can coexist inside the same organization, hinting towards a high level of organizational hybridity, especially in smaller firms, which place a higher emphasis on dialogue, flexibility, and response capability. This conclusion is also consistent with Mintzberg's (1997) work on the complexity and varying use of different organizational models. However, this might also be the biggest challenge regarding the choice of organizational models, as the dominant model usually overpowers the others, leaving less room for organizational agility. Agility is important because, as Schein (2004, p. 32) states, "managers must be capable of diagnosing a situation and in addition adapt their management style to the requirements of the environment surrounding them. If the employees or other stakeholders are distinct, the manager has to treat them accordingly". These results bring some useful insights to both academics and practitioners interested in gaining a deeper understanding of the organizational characteristics that influence the choice of organizational model. The diversity of isolated theories within organizational theory may be related to their chronological appearance and level of analysis (social-psychological, structural, macro, learning, and adapting to the environment). In the dynamic and highly complex business environment in which organizations today operate, any organization that aims to ensure its success needs managers and employees with a broad set of competencies and a suitable organizational model. Fonseca (2015) argues that the mission, values, and scope of each specific organization should suggest the most suitable organizational model for supporting its culture and maximizing its chances of success. Depending on the organization's strategy and value proposition, the sector of activity, the lifecycle phase, and the resources and external environment, different organizational models should be employed.
This research also brings some useful insights for policy development, as it highlights that the size and the activity sector of the organization influence the adoption of the organizational model, suggesting the opportunity to customize different policy approaches (e.g., for smaller and larger companies, or for the service and production sectors) to reach better outcomes. For future studies, it is recommended to use a larger sample size and to validate the respondents' answers with qualitative methods to account for possible bias. Also, additional dimensions, such as company age, should be considered, and further research could address the potential dangers to society of closed organizational models, considering recent corporate scandals.
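For readers who wish to reproduce this kind of analysis, the reliability, correlation, and ANOVA steps reported in the Results above could be sketched as follows. This is a minimal illustration in Python with pandas and scipy, not the authors' code: the file name, the column names, and the grouping of the 25 Likert items into the five constructs are hypothetical assumptions made only for the example.

```python
# Illustrative sketch (not the authors' code): reliability, correlation, and
# ANOVA steps similar to those reported above. All names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr, f_oneway

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

survey = pd.read_csv("survey_responses.csv")  # hypothetical data file

# Hypothetical grouping of the 25 Likert items into the five model constructs.
model_items = {
    "rational": ["q1", "q2", "q3", "q4", "q5"],
    "bureaucratic": ["q6", "q7", "q8", "q9", "q10"],
    "coalition": ["q11", "q12", "q13", "q14", "q15"],
    "anarchy": ["q16", "q17", "q18", "q19", "q20"],
    "learning": ["q21", "q22", "q23", "q24", "q25"],
}

scores = pd.DataFrame()
for model, cols in model_items.items():
    print(model, "alpha =", round(cronbach_alpha(survey[cols]), 3))
    scores[model] = survey[cols].mean(axis=1)  # construct score per respondent

# Spearman correlation, e.g. between the bureaucratic and learning constructs.
rho, p = spearmanr(scores["bureaucratic"], scores["learning"])
print("Spearman rho =", round(rho, 3), "p =", round(p, 4))

# One-way ANOVA of a construct score across activity sectors.
groups = [g["rational"].values
          for _, g in scores.join(survey["sector"]).groupby("sector")]
print("ANOVA:", f_oneway(*groups))
```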
Optimization of Salt Marsh Management at the Stewart B. McKinney National Wildlife Refuge, Connecticut, Through Use of Structured Decision Making Structured decision making is a systematic, transparent process for improving the quality of complex decisions by identifying measurable management objectives and feasible management actions; predicting the potential consequences of management actions relative to the stated objectives; and selecting a course of action that maximizes the total benefit achieved and balances tradeoffs among objectives. The U.S. Geological Survey, in cooperation with the U.S. Fish and Wildlife Service, applied an existing, regional framework for structured decision making to develop a prototype tool for optimizing tidal marsh management decisions at the Stewart B. McKinney National Wildlife Refuge in Connecticut. Refuge biologists, refuge managers, and research scientists identified multiple potential management actions to improve the ecological integrity of two marsh management units within the refuge and estimated the outcomes of each action in terms of performance metrics associated with each management objective. Value functions previously developed at the regional level were used to transform metric scores to a common utility scale, and utilities were summed to produce a single score representing the total management benefit that would be accrued from each potential management action. Constrained optimization was used to identify the set of management actions, one per marsh management unit, that would maximize total management benefits at different cost constraints at the refuge scale. Results indicated that, for the objectives and actions considered here, total management benefits may increase consistently up to approximately $1,190,000, but that further expenditures may yield diminishing return on investment. Management actions in optimal portfolios at total costs less than $1,190,000 included controlling avian predators in both management units, managing stormwater on lands adjacent to one marsh management unit, and removing a tide gate and breaching a dike to improve tidal flow in the other marsh management unit. The management benefits were derived from expected increases in the numbers of spiders (as an indicator of trophic health) and tidal marsh obligate birds, and an expected decrease in the use of herbicides to control invasive vegetation. The prototype presented here provides a framework for decision making at the Stewart B. McKinney National Wildlife Refuge that can be updated as new data and information become available. Insights from this process may also be useful to inform future habitat management planning at the refuges. Introduction The National Wildlife Refuge System (NWRS) protects extensive salt marsh acreage in the northeastern United States. Much of this habitat has been degraded by a succession of human activities since the time of European settlement (Gedan and others, 2009), and accelerated rates of sea-level rise exacerbate these effects (Gedan and others, 2011; Kirwan and Megonigal, 2013). Therefore, strategies to restore and enhance the ecological integrity of national wildlife refuge (NWR) salt marshes are regularly considered.
Management may include such activities as reestablishing natural hydrology, augmenting or excavating sediments to restore marsh elevation, controlling invasive species, planting native vegetation, minimizing shoreline erosion, and remediating contaminant problems. Uncertainty stemming from incomplete knowledge of system status and imperfect understanding of ecosystem dynamics commonly hinders management predictions and consequent selection of the most effective management options. Consequently, tools for identifying appropriate assessment variables and evaluating tradeoffs among management objectives are valuable to inform marsh management decisions. Structured decision making is a systematic approach to improving the quality of complex decisions that integrates assessment metrics into the decision process (Gregory and Keeney, 2002). This approach involves identifying measurable management objectives and potential management actions, predicting management outcomes, and evaluating tradeoffs to choose a preferred alternative. From 2008 to 2012, the U.S. Geological Survey (USGS) and U.S. Fish and Wildlife Service (FWS) used structured decision making to develop a framework for optimizing management decisions for NWR salt marshes in the FWS Northeast Region (that is, salt marshes in the coastal region from Maine through Virginia). The structured decision-making steps were applied through successive "rapid prototyping" workshops, an iterative process in which relatively short periods of time are invested to continually improve the decision structure (Blomquist and others, 2010; Garrard and others, 2017). The decision framework includes regional management objectives addressing critical components of salt marsh ecosystems, and associated performance metrics for determining whether objectives are achieved (Neckles and others, 2015). The regional objectives structure served as the foundation for a consistent protocol for monitoring salt marsh integrity at these northeastern coastal refuges, in which the monitoring variables are linked explicitly to management goals (Neckles and others, 2013). From 2012 to 2016, this protocol was used to conduct a baseline assessment of salt marsh integrity at all 17 refuges or refuge complexes in the FWS Northeast Region with salt marsh habitat (fig. 1). The Stewart B. McKinney National Wildlife Refuge protects about 200 hectares (ha) of salt marsh bordering Long Island Sound in Stratford and Westbrook, Connecticut (fig. 2). The refuge's salt marsh provides critical nesting and
wintering habitat for birds of highest conservation priority, including saltmarsh sparrows and American black ducks, in the U.S. North American Bird Conservation Initiative's bird conservation region for the New England and mid-Atlantic coast (Steinkamp, 2008; National Audubon Society, 2020a, b; U.S. North American Bird Conservation Initiative, 2020). The salt marsh also provides important foraging habitat for wading birds (such as Great and Snowy Egrets) during breeding and migratory seasons (National Audubon Society 2020a, b). The primary threats to this habitat are marsh loss, fragmentation, and degradation associated with increasing human activity within 1,000 meters (m) of the refuge boundary, spread of the invasive reed Phragmites australis (hereafter referred to as Phragmites), and marsh submergence associated with rising sea level (Potvin, 2017; National Audubon Society 2020a, b; S.C. Adamowicz and T. Mikula, FWS, unpub. data, 2017). Salt-marsh management goals for the refuge focus on maintaining, restoring, and enhancing critical habitat for breeding, migrating, and wintering birds. In this study, the regional structured decision-making framework was used to help prioritize salt marsh management options for the refuge. Purpose and Scope This report describes the application of the regional structured decision-making framework (Neckles and others, 2015) to the Stewart B. McKinney National Wildlife Refuge. The regional framework was parameterized to local conditions through rapid prototyping, producing a decision model for the refuge that can be updated as new information becomes available. Included are a suite of potential management actions to achieve objectives in two marsh management units at the refuge (fig. 2), approximate costs for implementing each potential action, predictions for the outcome of each management action relative to individual management objectives, and results of constrained optimization to maximize management benefits subject to cost constraints. This decision structure can be used to understand how specific actions may contribute to achieving management objectives and identify an optimum combination of actions, or "management portfolio," to maximize management benefits at the refuge scale for a range of potential budgets. The prototype presented here provides a framework for continually improving the quality of complex management decisions at the Stewart B. McKinney National Wildlife Refuge. Description of Study Area The Stewart B. McKinney National Wildlife Refuge comprises 10 separate parcels along the coast of Connecticut (fig. 1). Two of the refuge's parcels, the Great Meadows and Salt Meadow marsh management units, protect extensive salt marsh habitat along this highly developed shoreline and are the subject of this study. The Great Meadows marsh management unit (fig.
2A) in Stratford contains about 173 ha of salt marsh bounded on the northern side by industrial and commercial development and on the southern side by Lewis Gut, an embayment that is connected to Long Island Sound. Dikes interrupt tidal flow to the northern and northeastern reaches of the marsh management unit. Although the northeastern section of the marsh management unit is moderately ditched, the marsh management unit contains the largest segment of unditched high salt marsh in Connecticut (Potvin, 2017). The Salt Meadow marsh management unit (fig. 2B) in Westbrook contains about 25 ha of salt marsh along the Menunketesuck River. The majority of land within 150 m of the unit boundary is forest. The marsh management unit is bisected by a railroad bridge over the river that may restrict tidal flow. The salt marsh is heavily ditched throughout the entire Salt Meadow management unit. During summer 2012, average surface-water salinities were about 27 parts per thousand (polyhaline as defined by Cowardin and others, 1979) within both marsh management units (S.C. Adamowicz and T. Mikula, FWS, unpub. data, 2017). Regional Structured Decision-Making Framework A regional framework for assessing and managing salt marsh integrity at northeastern NWRs was developed through collaborative efforts of FWS regional and refuge managers and biologists, salt marsh research scientists, and structured decision-making experts. This process followed the discrete steps outlined by Hammond and others (1999) and Gregory and Keeney (2002):
1. Clarify the temporal and spatial scope of the management decision.
2. Define objectives and performance measures to evaluate whether objectives are achieved.
3. Develop alternative management actions for achieving objectives.
4. Estimate the consequences or likely outcomes of management actions in terms of the performance measures.
5. Evaluate the tradeoffs inherent in potential alternatives and select the optimum alternatives to maximize management benefits.
This sequence of steps was applied through successive workshops to refine the decision structure and incorporate newly available information. Initial development of the structured decision-making framework occurred during a week-long workshop in 2008 to define the decision problem, specify management objectives, and explore strategies available to restore and enhance salt marsh integrity. During 2008 and 2009, workshop results were used to guide field tests of salt marsh monitoring variables (Neckles and others, 2013). Subsequently, in 2012, data and insights gained from these field tests were used in a two-part workshop to refine management objectives and develop the means for evaluating management outcomes (Neckles and others, 2015). From the outset, FWS goals included development of an approach for consistent assessment of salt marsh integrity across all northeastern NWRs (fig. 1). Within this regional context, staff at a given refuge must periodically determine the best approaches for managing salt marshes to maximize habitat value while considering financial and other constraints. The salt marsh decision problem was thus defined as applying to individual NWRs over a 5-year planning horizon. The objectives for complex decisions can be organized into a hierarchy to help clarify what is most important to decision makers (Gregory and others, 2012).
The hierarchy of objectives for salt marsh management decisions (table 1) was based explicitly on the conservation mission of the NWRS, which is upheld through management to "ensure that the biological integrity, diversity, and environmental health of the System are maintained for the benefit of present and future generations of Americans," as mandated in the National Wildlife Refuge System Improvement Act of 1997 (16 U.S.C. §668dd note). Two fundamental objectives, or the overall goals for salt marsh management decisions, were drawn from this policy to maximize (1) biological integrity and diversity, and (2) environmental health, of salt marsh ecosystems. Participants in the prototyping workshops deconstructed these overall goals into low-level objectives relating to salt marsh structure and function and identified performance metrics to evaluate whether objectives are achieved (table 1). In addition, performance metrics were weighted to reflect the relative importance of each objective (Neckles and others, 2015). The hierarchy of objectives for salt marsh management (table 1) provides the foundation for identifying possible management actions at individual NWRs and predicting management outcomes. Workshop participants developed preliminary influence diagrams (app. 1), or conceptual models relating management actions to responses by each performance metric (Conroy and Peterson, 2013), to guide this process. To allow metric responses to be aggregated into a single, overall performance score, participants also defined value functions relating salt marsh integrity metric scores to perceived management benefit on a common, unitless "utility" scale (Keeney and Raiffa, 1993). Stakeholder elicitation was used to determine the form of each value function relating the original metric scale to the utility scale, ranging from 0, representing the lowest management benefit, to 1, representing the highest benefit (app. 2). Neckles and others (2015) provided details regarding development of the structured decision-making framework and a case-study application to Prime Hook National Wildlife Refuge. [Two fundamental objectives (overall goals of the decision problem) draw directly from National Wildlife Refuge System policy to maintain, restore, and enhance biological integrity, diversity, and environmental health within the refuge. These are broken down into low-level objectives focused on specific aspects of marsh structure and function. Values in parentheses are weights assigned to objectives, reflecting their relative importance. Weights on any branch of the hierarchy sum to one. The weight for each metric is the product of the weights from each level of the hierarchy leading to that metric. NA, not applicable; See also Neckles and others (2015) Application to the Stewart B. McKinney National Wildlife Refuge In January 2018, FWS regional biologists, biologists and managers from seven northeastern NWR administrative units and USGS and Yale University research scientists (table 2) participated in a 1.5-day rapid-prototyping workshop to apply the regional structured decision-making framework to the Maine Coastal Islands, Monomoy, Moosehorn, Parker River, Rachel Carson, and Stewart B. McKinney National Wildlife Refuges. Participants worked within refuge-specific small groups to focus on management issues at individual refuges. 
Plenary discussions of common patterns of salt marsh degradation, potential management strategies, and mechanisms of ecosystem response offered additional insights to enhance refuge-specific discussions. Participants identified a range of possible management actions for achieving objectives within the Great Meadows and Salt Meadow marsh management units at the Stewart B. McKinney National Wildlife Refuge and estimated the total cost of implementation over a 5-year period; the specific years of implementation were not identified in this prototype. Potential actions to enhance salt marsh integrity ranged from targeted efforts that restore hydrologic connections, control predators, or protect shorelines to large-scale projects that alter marsh elevation or vegetation succession (table 3). Participants predicted the outcomes of each management action 5 years after initial implementation in terms of salt marsh integrity performance metrics. For most metrics, baseline conditions within each unit measured during the 2012-16 salt marsh integrity assessment (S.C. Adamowicz and T. Mikula, FWS, unpub. data, 2017) were used to predict the outcomes of a "no-action" alternative. Baseline conditions were estimated by using expert judgement for three metrics that lacked assessment data (abundance of American black ducks, density of spiders, change in marsh surface elevation relative to sea-level rise). Regional influence diagrams relating management strategies to outcomes aided in predicting consequences of management actions (app. 1). Although the influence diagrams incorporated the potential effects of stochastic processes, including weather, sea-level rise, herbivory, contaminant inputs, and disease, on management outcomes, no attempt was made to quantify these sources of uncertainty during rapid prototyping. Management predictions also inherently included considerable uncertainty surrounding the complex interactions among controlling factors and salt marsh ecosystem components. Following the workshop, the potential management benefit of each salt marsh integrity performance metric was calculated by converting salt marsh integrity metric scores (table 3, workshop output) to weighted utilities (table 4) using regional value functions (app. 2). Weighted utilities were summed across all salt marsh integrity metrics for each action; this overall utility therefore represented the total management benefit, across all objectives, expected to accrue from a given management action (table 4). Constrained optimization (Conroy and Peterson, 2013) was used to find the management portfolio (the combination of actions, one action per marsh management unit) that maximizes the total management benefit across all units under varying cost scenarios for the entire refuge. Constrained optimization using integer linear programming was implemented in the Solver tool in Microsoft Excel (Kirkwood, 1997). Budget constraints were increased in $5,000 increments up to $25,000; in $25,000 increments up to $100,000; in $50,000 increments up to $400,000; in $100,000 increments up to $1 million; and in $500,000 increments thereafter. The upper limit to potential costs was not determined in advance; rather, it reflected the total estimated costs of the proposed management actions.
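The report implemented this constrained optimization with integer linear programming in the Excel Solver tool. An equivalent small-scale version can be sketched in Python by enumerating every combination of one action per marsh management unit and keeping the feasible combination with the greatest total benefit under a given budget. This is an illustration only: the action labels, costs, and benefit values below are placeholder numbers, not the values in tables 3 through 5.

```python
# Minimal sketch of the portfolio optimization described above: choose one
# action per marsh management unit to maximize the total management benefit
# (sum of weighted utilities) subject to a refuge-wide budget constraint.
# Action labels, costs, and benefits are illustrative placeholders only.
from itertools import product

# benefit = total weighted utility for the unit (sum over all metrics of
# objective weight x raw utility); cost = estimated 5-year implementation cost.
actions = {
    "Great Meadows": {
        "A: no action":             {"cost": 0,         "benefit": 0.41},
        "B: predator control":      {"cost": 50_000,    "benefit": 0.55},
        "C: remove tide gate":      {"cost": 250_000,   "benefit": 0.70},
        "D: thin-layer deposition": {"cost": 2_500_000, "benefit": 0.90},
    },
    "Salt Meadow": {
        "A: no action":             {"cost": 0,         "benefit": 0.45},
        "B: predator control":      {"cost": 40_000,    "benefit": 0.60},
        "C: stormwater management": {"cost": 150_000,   "benefit": 0.72},
        "D: thin-layer deposition": {"cost": 2_000_000, "benefit": 0.95},
    },
}

def best_portfolio(budget: float):
    """Return (total benefit, total cost, chosen actions) within the budget."""
    units = list(actions)
    best = None
    for combo in product(*(actions[u].items() for u in units)):
        cost = sum(a["cost"] for _, a in combo)
        benefit = sum(a["benefit"] for _, a in combo)
        if cost <= budget and (best is None or benefit > best[0]):
            best = (benefit, cost, {u: name for u, (name, _) in zip(units, combo)})
    return best

# Sweep budget constraints in increasing increments, as the report does.
for budget in [0, 100_000, 500_000, 5_000_000]:
    print(budget, best_portfolio(budget))
```

With only two management units and a handful of actions each, exhaustive enumeration is equivalent to the integer linear program; for larger problems an ILP solver would be the natural replacement.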
A cost-benefit plot of the portfolios identified through the optimization analysis was used to identify the efficient frontier for resource allocation (Keeney and Raiffa, 1993), which is the set of portfolios that are not dominated by other portfolios at similar costs (or the set of portfolios with maximum total benefit for a similar cost). The cost-benefit plot also revealed the cost above which further expenditures would yield diminishing returns on investment. To exemplify use of the decision-making framework to understand how a given portfolio could affect specific management objectives, the refuge-scale management benefits for individual performance metrics were compared between one optimal portfolio and those predicted with no management action taken. Results of Constrained Optimization Management actions identified to improve marsh integrity at the Stewart B. McKinney National Wildlife Refuge included adding sediment to the marsh surface to increase elevation; restoring natural hydrology through breaching or removing dikes, removing tide gates, or restoring basin contours or tidal channels; controlling predators; and acquiring land to facilitate marsh migration into adjacent uplands (table 3). For costs ranging from $0 to $4.8 million, the estimated management benefits for individual actions across all metrics, measured as weighted utilities, ranged from 0.410 (for implementing no action in the Great Meadows marsh management unit) to 0.957 (for implementing thin layer deposition followed by vegetation planting in the Salt Meadow marsh management unit coupled with managing stormwater on adjacent lands), out of a maximum possible total management benefit of 1.0 (tables 3 and 4). [Table notes: Change relative to sea-level rise is measured as: 0, lower than sea-level rise; 1, above sea-level rise. Level of herbicide applied is measured as: 0, none applied; 1, some applied. Numeric table entries are weighted utilities, which were calculated as raw utilities multiplied by objective weights. Unitless raw utilities were derived from metric scores (table 3) using existing regional value functions (app. 2). Objective weights for individual metrics were calculated as the product of the weights on the branch of the objectives hierarchy leading to each metric (table 1).] In each marsh management unit, the alternative with both the lowest management benefit and lowest cost was the "no action" alternative (management action A). Constrained optimization was applied to identify the optimal management portfolios over 5 years for a range of total costs to the refuge. As total cost increased from $0 (no action in either unit) to approximately $6.23 million, the total management benefit at the refuge scale increased (fig. 3). Portfolio 9 represented the turning point in the cost-benefit plot. As expenditures increased beyond the cost of portfolio 9, total management benefit continued to increase but at a lower rate, yielding diminishing returns on investment; there was very little gain in management benefit for expenditures greater than about $3.9 million (fig. 3, portfolio 14). Several patterns emerged relative to management actions selected within the set of portfolios that yielded the greatest total management benefit per unit cost (table 5, portfolios 2 through 9). The lowest-cost portfolios (total cost up to $250,000) always included predator control at one or the other of the marsh management units.
In addition, portfolios at the Great Meadows marsh management unit included actions to restore hydrologic connections or integrated management for mosquito control, whereas portfolios at the Salt Meadow marsh management unit included primarily stormwater management on adjacent lands. In contrast, some management actions were never or rarely included in the portfolios yielding the greatest benefit per cost. For example, stormwater management on adjacent lands and adding culverts or channels were never selected for the Great Meadows marsh management unit, and facilitating marsh migration into the uplands was never selected for the Salt Meadow marsh management unit. At both marsh management units, thin layer deposition was only selected when the available budget exceeded $1 million. Examination of the refuge-scale metric responses to actions included in portfolio 9, which is the turning point in the cost-benefit plot (fig. 3), revealed how implementation could affect specific management objectives. The actions included were predicted to achieve large gains in the overall management benefits derived from density of spiders (as an indicator of trophic health), duration of flooding, and the capacity of marsh elevation to keep pace with sea-level rise, and modest gains in the benefits derived from numbers of tidal marsh obligate birds and herbicide application (fig. 4). Ecologically, the combination of actions in portfolio 9 may result in an average 32 percent increase in tidal marsh obligate bird counts (averaged across both marsh management units), a 53 percent decrease in the deviation of surface flooding from the ideal reference condition, and a 1,450 percent increase in spider density (derived as the average difference between the predicted metric scores for the actions implemented in portfolio 9 and the "no-action" alternative; table 3). Implementation of actions in this portfolio was also predicted to improve the capacity for marsh elevation to keep pace with sea-level rise in the Salt Meadow marsh management unit and reduce application of herbicides in the Great Meadows marsh management unit. The management benefits predicted for portfolios 1 through 8, at total costs up to $305,000, were derived primarily from expected improvements in surface-water drainage, presumed increases in densities of spiders and numbers of tidal marsh obligate birds, and reduced need for herbicide application (tables 3 and 4). [Table note: Letter designations for actions refer to specific actions and are listed in tables 3 and 4. Portfolios represent the combination of actions, one per marsh management unit, that maximized the total management benefit across all units, subject to a refuge-wide cost constraint. The management actions constituting individual portfolios were selected using constrained optimization. The total cost represents the sum of costs estimated for each action included in the portfolio. The maximum possible total management benefit for the refuge is 2, derived as the maximum possible total management benefit of 1.0 for any management action within one management unit, summed across the 2 marsh management units.] Considerations for Optimizing Salt Marsh Management A regional structured decision-making framework for salt marshes on NWRs in the northeastern United States was applied by the USGS, in cooperation with the FWS, to develop a tool for optimizing management decisions at the Stewart B. McKinney National Wildlife Refuge.
Use of the existing regional framework and a rapid-prototyping approach permitted NWR biologists and managers, FWS regional authorities, and research scientists to construct a decision model for the refuge within the confines of a 1.5-day workshop. This preliminary prototype provides a local framework for decision making while revealing information needs for future iterations. Insights from this process may also be useful to inform future habitat management planning at the refuge. The suite of potential management actions and predicted outcomes included in this prototype (table 3) were based on current understanding of the Stewart B. McKinney National Wildlife Refuge salt marshes and hypothesized processresponse pathways (app. 1). Tidal flooding is the predominant physical control on the structure and function of salt marsh ecosystems (Pennings and Bertness, 2001), and there is widespread scientific effort to elucidate how salt marshes may respond to accelerating rates of sea-level rise and management strategies to enhance their sustainability (Kirwan and Megonigal, 2013;Roman, 2017). Many salt marshes throughout the northeastern United States are degraded by roads, dikes, railroads, or other obstructions to tidal flow, and salt marsh restoration frequently focuses on reestablishing tidal flow (Konisky and others, 2006;Roman and Burdick, 2012). Actions to restore tidal exchange throughout the Great Meadows marsh management unit were predicted to improve overall management benefit for a relatively low cost. In contrast, thin-layer deposition of sediments to raise marsh elevation is increasingly proposed to enhance sustainability of salt marshes in the northeastern United States (Wigand and others, 2017) and was identified as a potential action to improve the integrity of both marsh management units at the Stewart B. McKinney National Wildlife Refuge with expected high total management benefit (table 4). However, the high cost of implementation restricted this option to the most costly portfolios (table 5, portfolios 9 through 18). Multiple, interacting factors influence the long-term success of restoration actions in prolonging marsh integrity and improving marsh resilience (Roman, 2017). Future iterations of this decision model can incorporate improved understanding of both implementation costs and marsh responses to management actions. In addition, during construction of the regional decision model, lack of widely available data on rates of vertical marsh growth led to the adoption of a very coarse scale of measurement for change in marsh surface elevation relative to sea-level rise (table 1). In 2012, surface elevation tables (Lynch and others, 2015) were installed in each marsh management unit to obtain high-resolution measurements of change in marsh surface elevation (S.C. Adamowicz and T. Mikula, FWS, unpub. data, 2017). Incorporating this information into subsequent iterations of this structured decisionmaking framework would likely improve predictions related to the potential for marsh surface elevation to keep pace with sea-level rise. Results of constrained optimizations (table 5) based on the objectives, management actions, and predicted outcomes included in this prototype identified four areas in which to improve the utility of the prototype for refuge decision making. First, although increasing the rate of marsh elevation gain relative to sea-level rise is a primary management concern at the Stewart B. 
McKinney National Wildlife Refuge, enhancing elevation directly through sediment deposition may be cost prohibitive for these salt marshes. Therefore, alternative options to reduce the depth and duration of surface flooding, such as digging runnels to improve surface drainage, may be more feasible (Wigand and others, 2017). Additionally, testing targeted actions to mitigate effects of flooding on at-risk species, such as creation of floating islands as nesting sites for saltmarsh sparrows, may be useful (Benvenuti, 2016). Second, actions to minimize marsh loss through stabilizing channel banks or lessening wave action were excluded from the optimal portfolios. Deconstructing the objective of maintaining the extent of the marsh platform into subordinate objectives and performance metrics related to both horizontal and vertical gains and losses of marsh substrate may help focus decision making on erosion of marsh edges. Third, although implementing integrated marsh management for mosquito control, which is a comprehensive approach to restoring tidal hydrology and reducing mosquitoes (Rochlin and others, 2012), was predicted to enhance abundance of nekton and tidal marsh obligate birds and reduce application of herbicide for Phragmites control at the Great Meadows unit (table 3), the regional environmental health objectives included in this prototype did not accommodate a potential additional benefit of reducing or eliminating use of insecticides to control mosquitoes. The mosquito management plan for the Great Meadows unit emphasized that to minimize use of insecticides on the refuge, hydrologic restoration should be employed where possible to decrease mosquito production (Potvin, 2017). In the future, including an objective in the decision model related to minimizing insecticides would incorporate the effect of integrated marsh management on total pesticide use, including herbicides and insecticides, into the total management benefits. Finally, the constrained optimizations analyzed in this report were based on approximations of management costs. As salt marsh management is undertaken around the region, a detailed list of actual expenses can be compiled, including staff time for project planning as well as materials, equipment, contracts, and staff time for implementation. This will allow future iterations of the decision model to include more accurate cost estimates. The prototype model for the Stewart B. McKinney National Wildlife Refuge provides a useful tool for decision making that can be updated in the future with new data and information. The spatial and temporal variability inherent in parameter estimates were not quantified during rapid prototyping. Previously, preliminary sensitivity analysis revealed little effect of incorporating ecological variation in abundance of marsh-obligate breeding birds on the optimal solutions for Prime Hook National Wildlife Refuge (Neckles and others, 2015). This lends confidence to use of this framework for decision making; however, including probability distributions for each performance metric in the decision model could be a high priority for future prototypes. Future monitoring of salt marsh integrity performance metrics will be useful to refine baseline parameter estimates and to determine the background rate of change in the absence of management actions; feedback from measured responses to management actions around the region will help reduce uncertainties surrounding management predictions. 
The structured decision-making framework applied here to the Stewart B. McKinney National Wildlife Refuge is based on a hierarchy of regional objectives and regional value functions relating performance metrics to perceived management benefits. It will be important to ensure that subsequent iterations reflect evolving management objectives and desired outcomes. Elements of the decision model could be further adapted, for example through differential weighting of objectives or altered value functions, to reflect specific, local management goals and mandates. Future optimization analyses that use this framework could also incorporate additional constraints on action selection, such as ensuring that particular actions within individual marsh management units are included in optimal management portfolios, to further tailor the model to refuge-specific needs. Appendix 1. Regional Influence Diagrams The influence diagrams (following the style of prototype diagrams in Neckles and others, 2015) in this appendix (figs. 1.1-1.8) relate possible management strategies to performance metrics. Shapes represent elements of decisions, as follows: rectangles for actions, rectangles with rounded corners for deterministic factors, ovals for stochastic events, and hexagons for consequences expressed as a performance metric. Utilities [u(x)] are derived as monotonically increasing, monotonically decreasing, or step functions over the range of performance metric x. In the functions in figures 2.1 through 2.10, x, Low, High, and ρ are expressed in performance metric units; Low and High represent the endpoints of the given metric range for the Stewart B. McKinney National Wildlife Refuge; and ρ represents a shape parameter derived by stakeholder elicitation (Neckles and others, 2015). Break points in step functions were also derived by stakeholder elicitation.
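The exact value functions used in the regional framework are documented in appendix 2 and in Neckles and others (2015). As an illustration of the general form described above, one common single-attribute value function of this kind (Kirkwood, 1997) is the exponential form u(x) = (1 - exp(-(x - Low)/rho)) / (1 - exp(-(High - Low)/rho)), which maps a metric score on the interval [Low, High] onto the 0 to 1 utility scale. The sketch below assumes that form purely for illustration; the metric, its range, and the rho value are made-up numbers, not values from the report.

```python
import math

def exponential_utility(x: float, low: float, high: float, rho: float,
                        increasing: bool = True) -> float:
    """Exponential single-attribute value function mapping a metric score on
    [low, high] to a 0-1 utility (Kirkwood, 1997). Illustrative only; the
    functions actually used are given in Neckles and others (2015)."""
    if not increasing:              # for metrics where lower scores are better
        x = high - (x - low)        # flip the axis before applying the curve
    return (1 - math.exp(-(x - low) / rho)) / (1 - math.exp(-(high - low) / rho))

# Hypothetical example: a metric ranging 0-50, shape parameter rho = 15.
u = exponential_utility(20, low=0, high=50, rho=15)
print(round(u, 3))

# The total management benefit of an action would then be the weighted sum of
# such utilities across all performance metrics (weight_i * u_i, summed).
```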
Dutch tenants at risk of eviction: Identifying predictors of eviction orders In order to prevent evictions, it is important to gain more insight into factors predicting whether or not tenants receive an eviction order. In this study, ten potential risk factors for evictions were tested. Tenants who were at risk of eviction due to rent arrears in five Dutch cities were interviewed using a structured questionnaire, and six months later their housing associations were asked to provide information about the tenants’ current situation. Multiple logistic regression analyses with data on 344 tenants revealed that the amount of rent arrears was a strong predictor for receiving an eviction order. Furthermore, single tenants and tenants who had already been summoned to appear in court were more likely to receive an eviction order. These results can contribute to identifying households at risk of eviction at an early stage, and to develop targeted interventions to prevent evictions. Introduction Losing one's home due to an eviction, and also the mere prospect of being evicted, can have a severe impact on the lives of those involved [1]. It causes major stress and feelings of panic and shame [2], and may even lead to suicide [3]. Although the consequences of evictions are severe, data on the impact of and reasons behind evictions are scarce [4][5][6]. It is therefore important to gain more insight into the factors that increase the risk of eviction. In this study we explore which factors cause eviction orders among Dutch tenants who are at risk of eviction due to rent arrears. Preventing evictions is not only important from a social and health point of view, but also from a financial perspective [5]. Akkermans and Räkers [7] estimated the cost of an eviction in the Netherlands to be between 5,000 and 9,000 euro. These costs include bailiff and litigation costs, costs for the eviction itself, unrecovered rent, costs for repairing damages to the property, and costs for removing the personal belongings of the tenants from the accommodation. When tenants are not able to pay these costs, they have to be covered by the housing associations. There are also costs for society, for example, due to the use of the social relief system, and costs related to rehousing [8], debt counseling [9] and the utilization of shelters facilities. There are approximately 350 social housing associations in the Netherlands, owning about a third of the total housing stock; they house around four million tenants in 2.4 million homes [10]. Tenants with arrears of roughly two months' rent receive a letter from their social housing association requesting them to pay the rent arrears or make payment arrangements. When the terms of the housing association are not met within approximately three months, the debt is handed over to a bailiff who then tries to collect the rent or make payment arrangements. If this fails, the bailiff requests the court to serve the tenant with an eviction order, which the housing association can use to terminate the tenancy agreement [5]. In the Netherlands, receiving an eviction order does not necessarily mean that a tenant will be evicted. In almost 75% of the cases in which tenants receive an eviction order, the social housing association and the tenant come to agreements which prevent the eviction, for example repayment in installments [11]. 
In 2018, an estimated 12,000 eviction orders were issued in the Dutch social housing sector, of which 3,000 orders were actually effectuated, and 1,500 tenants left the residence after receiving the eviction order, before an eviction could take place. Of the 3,000 evictions, the majority, 80%, was due to rent arrears [12]. The negative social, health and financial consequences of evictions underscore the importance of gaining insight into the predictors of eviction in order to prevent evictions and the accompanying negative effects. In a recent large study in Milwaukee [13], individual, neighborhood, and social network determinants of eviction were identified. This study showed that a higher number of children in the household, recent job loss, high neighborhood crime and eviction rates, and network disadvantage (strong ties with people who had experienced or were experiencing poverty and disadvantage) increased the likelihood of eviction. Furthermore, a survey among tenants appearing in the Milwaukee eviction court indicated that tenants with children were significantly more likely to receive an eviction order than tenants without children [14]. However, several Dutch housing associations claim to be more lenient towards households with children, because it is agreed that, if possible, eviction of children should be prevented [15]. Risk factors for eviction identified by Stenberg, Kareholt, and Carroll [16] are a low income, a criminal record, being refused help from the welfare authorities, and being an immigrant. In a longitudinal study on home owners and renters in Britain from 1991 to 1997, Böheim and Taylor [17] concluded that households that were evicted tended to have younger heads, a lower household income, and more often had experienced a financial setback than households that were not evicted. Van Laere, De Wit, and Klazinga [18] studied a group of tenants with rent arrears in Amsterdam, and compared characteristics of those that were and were not evicted. Being native Dutch and having a drug-related problem were identified as risk factors for eviction. Furthermore, debt counseling can be helpful in preventing evictions [2,19]. Thus, a lack of help from debt counseling services may be a potential risk factor for eviction. Additionally, data from Dutch housing associations show that the majority of evicted tenants are single [12]. In addition to the abovementioned risk factors for evictions, it is plausible that the amount of rent arrears and the phase in the eviction process are also important predictors for evictions due to rent arrears, because higher rent arrears are more difficult to repay. The higher the rent arrears get, and the further tenants are in the eviction process, the fewer possibilities there are for recovery [7]. Furthermore, policies to prevent evictions differ significantly across Dutch cities [7,15], and therefore the city in which the tenant lives may affect their chance of recovering from their rent arrears and averting eviction. In most Dutch municipalities there are agreements between housing associations, social work agencies, debt counseling services, and in most cases the municipality, to prevent evictions. However, these agreements differ significantly across municipalities; some agreements have clear guidelines, measurable goals, and clear descriptions of tasks of the different parties involved, while other agreements are more general and lack these details [7].
For the present study, we approached tenants who had received a second notification from a bailiff due to non-payment of rent. This is the last phase in the eviction process before the bailiff starts the court proceeding in order to obtain an eviction order. The aim of this study was to gain insight into the role of the abovementioned risk factors in predicting whether or not Dutch tenants at risk of eviction due to rent arrears eventually receive an eviction order. This knowledge will help housing associations and social workers to identify vulnerable households and develop early, targeted interventions to prevent evictions. Procedure and participants For this study, we conducted logistic regression analyses with a sample of 344 Dutch households at risk of eviction due to rent arrears. Tenants were included in the study when they were at least 18 years old, lived in independent housing from social housing associations, and had received at least a second notification from a bailiff because of rent arrears for the housing they were currently living in. We have complied with APA ethical principles in our treatment of individuals participating in our research, and our study complies with the criteria for studies that have to be approved by an accredited Medical Research Ethics Committee. Upon consultation, the Arnhem/Nijmegen Ethics Committee stated that the study was exempt from formal approval (registration number 2011/110) as the participants were not subjected to any treatment other than the interview. Tenants were contacted through sixteen social housing associations in five different municipalities in the Netherlands (Amsterdam, Leiden, Nijmegen, Rotterdam, and Utrecht). The social housing associations, or bailiffs working for them, screened all tenants with rent arrears to identify tenants who met the study's inclusion criteria. All identified tenants received an information letter about the study. Two types of letters were sent: opt-out and opt-in letters. With the opt-out method, tenants were informed that they would be contacted by telephone to ask them if they were willing to participate in the study, unless they sent in the opt-out card. With the opt-in method, tenants were asked to contact the researchers when they were willing to participate. The opt-in method was only used when the social housing association was unwilling or unable to use the opt-out method due to organizational or privacy reasons. In addition, seven local projects working with people at risk of eviction and six debt counseling agencies were provided with flyers explaining the research, which they distributed among clients that met our inclusion criteria. All these institutions and agencies chose the opt-in method, so their clients were asked to contact the researchers if they were willing to participate. In total, 495 tenants were included in the study at T0 (Fig 1). All tenants were interviewed between November 2011 and February 2013, using a structured questionnaire with standardized instruments. Informed consent was obtained before the start of the interview. After participation, tenants received 20 euro. Six tenants could not be interviewed in Dutch. These interviews were conducted using an English translation of the questionnaire (n = 3), or with on the spot translation to French (n = 2) or Turkish (n = 1) by bilingual interviewers. All interviewed tenants gave written permission to the researchers to obtain information about their situation from their housing association. 
This T1 data collection took place six months after each T0 interview. At T1, the respective housing association was asked whether or not an eviction order had been served to the tenant. This information was collected through an online questionnaire for most housing associations, while two housing associations preferred to send this information by e-mail. T1 data collection took place between May 2012 and December 2013. While all participating housing associations had agreed to provide us with the T1 data, collecting this information at T1 proved to be difficult. Several housing associations were reorganizing during the course of this study, so contact persons that had agreed to provide us with the required information were no longer working at the housing association when we eventually asked for a tenant's information and their colleagues were not willing to participate. One housing association had changed their data system and was unable to find information about specific tenants. Furthermore, since several housing associations transferred rent arrear cases to debt collection agencies which then worked on these cases independently, these housing associations were unaware of the status of court proceedings. The debt collection agencies were unwilling to participate in this study, and therefore these housing associations were not able to provide information about eviction orders. We were able to obtain T1 information about eviction orders for 71% (353) of the interviewed tenants. Nine tenants had already received an eviction order at T0; these tenants were excluded from the analyses. Thus, 344 respondents were included in the analyses. A possible bias might be caused by tenants' high levels of stress, anxiety, shame, or fear of being evicted. Consequently, we made every effort to explain that participation in the study was very important. Researchers were also very accommodating with rescheduling interview appointments when needed. We conducted several preliminary analyses to determine whether the imminent eviction was indeed perceived as an emotional burden by the tenants included in the study. We found that tenants participating in the study reported that, as a result of their rent arrears and risk of eviction, they experienced stress (87%), had trouble sleeping (66%), were sad (72%), felt powerless (77%) and felt ashamed (57%). This suggests that tenants with a high self-assessment of eviction risk also entered the study. Measures We included one outcome variable in our analyses, and ten potential predictors, derived from our review of the literature. Outcome variable: Eviction order. Six months after each interview, the respective housing associations were asked whether or not the tenant had received an eviction order. Predictors. Based on the literature, we included ten predictors, divided into three clusters: socio-economic variables, housing and finances, and eviction circumstances. Socio-economic variables. Four socio-economic variables were included as potential predictors. The first was the respondent's age at the time of interview. 
Second, a foreign background variable classified tenants into three categories, based on the classification by Statistics Netherlands [20]: native Dutch (both parents were born in the Netherlands, even if the respondent was not born in the Netherlands), first generation immigrants (the respondent and at least one of the parents were not born in the Netherlands), and second generation immigrants (the respondent was born in the Netherlands and at least one of the parents was not born in the Netherlands). The household composition variable indicates whether respondents lived in one-person households or multi-person households, and the fourth variable indicates whether there were children living in the household (yes/no). Housing and finances. Four variables related to housing and finances were included. Respondents estimated their total household income in the last month in Euros, and the total amount of their current rent arrears in Euros. For presentational reasons, we transformed the income and rent arrears variables by dividing them by 1,000. Furthermore, respondents were asked whether they had been fired from a job in the past three years (yes/no), and whether they had received any help from a debt counseling agency in the six months prior to the interview (yes/no). Eviction circumstances. Two variables were included to account for the heterogeneity of our sample of households: the city in which the tenant lived (Amsterdam, Leiden, Nijmegen, Rotterdam or Utrecht), and the phase in the eviction process at the time of interview (before being summoned to appear in court, between summoning and the court hearing, after the court hearing but without an eviction order). The phase in the eviction process was determined by asking several yes/no questions (e.g., "Were you summoned to appear in court because of your rent arrears?"). Table 1 shows descriptive statistics and missing values for our sample. Since data were missing for more than 5% of the respondents for income and level of rent arrears, we tested whether respondents with and without values for these two variables differed on the outcome measure. A significant difference was found only for the level of rent arrears: respondents with a missing value for the level of rent arrears more often received an eviction order (41%, compared to 22% of tenants with a value for level of rent arrears; χ²(1, N = 344) = 5.37, p = .02). Data analysis We used logistic regression analysis to determine which variables significantly predicted whether tenants had received an eviction order at T1. In all analyses, p-values of ≤ .10 were considered statistically significant. A backward stepwise logistic regression analysis was conducted, starting with a model that included all ten predictors and, in each step, deleting the least significant predictor, leading to a sparse model with only significant predictors. Since only respondents with values for all predictors could be included in the backward stepwise logistic regression, the accumulation of missing values reduced our sample considerably (to N = 275); therefore, we followed up by adding each predictor to the sparse model individually (with the N that was available for this smaller model) to determine whether adding the predictor improved the model. Using the final model, the predicted probability and marginal effects were calculated.
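As a rough sketch of this analysis strategy (not the authors' actual code, and with a hypothetical data frame and column names), the backward elimination loop described above could be written as follows; categorical predictors are assumed to be dummy-coded, and the whole source variable is dropped when its least significant dummy exceeds the p ≤ .10 threshold.

```python
# Illustrative sketch of backward stepwise logistic regression with a
# p <= .10 retention rule, assuming a pandas data frame `df` with a 0/1
# outcome column and the ten predictor columns named in the Measures section.
import pandas as pd
import statsmodels.api as sm

def backward_stepwise(df: pd.DataFrame, outcome: str, predictors: list,
                      p_threshold: float = 0.10):
    """Return the sparse model retaining only predictors with p <= p_threshold."""
    current = list(predictors)
    while current:
        X = sm.add_constant(pd.get_dummies(df[current], drop_first=True),
                            has_constant="add").astype(float)
        model = sm.Logit(df[outcome], X).fit(disp=0)
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= p_threshold:
            return model, current
        # Map a dummy column back to its source variable before dropping it.
        dropped = next(v for v in current if worst == v or worst.startswith(v + "_"))
        current.remove(dropped)
    return None, []

# Hypothetical usage with the predictors described above:
# model, kept = backward_stepwise(tenants, "eviction_order",
#     ["age", "background", "single_household", "children", "income_k",
#      "arrears_k", "job_loss", "debt_counseling", "city", "phase"])
```

The paper's follow-up step, re-adding each excluded predictor to the sparse model one at a time, would simply reuse the same fitting call on the retained variables plus the candidate.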
Results Of the 344 tenants that could be included in the analyses to predict eviction orders, 24% (82) had received an eviction order at T1. The initial logistic regression analysis with all predictors shows only total rent arrears at T0, living in Utrecht, and being between a summons and a court hearing as significant predictors (Table 2). Backward stepwise logistic regression analysis resulted in a model with household composition, the level of rent arrears and phase in the eviction process as predictors. We then added each of the excluded predictors to this sparse model to determine if they improved the model, but we did not identify other significant predictors. Thus, our final model to predict eviction orders includes being a single tenant, total rent arrears at T0, and phase in the eviction process (Table 3). The level of rent arrears at T0 proved to be a strong predictor for eviction orders: the odds of receiving an eviction order were more than two times greater when rent arrears were increased by € 1,000 (OR = 2.48; 95% CI = 1.79, 3.44). Furthermore, the odds of receiving an eviction order were more than two times greater for single tenants compared to multi-person households. Marginal effects indicated that, controlled for rent arrears and phase in the eviction process, single tenants had a probability of .32 (95% CI = .23, .43) of receiving an eviction order, while the probability for multi-person households was .18 (95% CI = .11, .27). Additionally, the odds of receiving an eviction order were more than three times greater for tenants who had been summoned to appear in court but had not had a court hearing yet, compared to tenants who had not been summoned to appear in court (OR = 3.22; 95% CI = 1.40, 7.40). Controlled for household composition and rent arrears, the probability of receiving an eviction order for tenants who had not been summoned was .15 (95% CI = .11, .21), while it was .37 (95% CI = .22, .55) for tenants who had been summoned but had not had a hearing yet, and the probability was .24 (95% CI = .13, .40) for tenants who had had a hearing. This logistic regression model was used to calculate the probability of receiving an eviction order as a function of the level of rent arrears at T0. Figs 2 and 3 show how the probability of receiving an eviction order differs for different categories of household composition and phases in the eviction process at T0. In each of these figures, the other predictor was held at its average level. In general, the probability of receiving an eviction order is relatively low for low levels of rent arrears. However, the probability of receiving an eviction order significantly increases for higher levels of rent arrears, with a higher probability for single tenants. As expected, tenants who had not yet been summoned to appear in court at T0 had the lowest probability of receiving an eviction order; this probability increased with higher rent arrears. However, while tenants who had been summoned and were waiting for the court hearing had the highest probability of receiving an eviction order, this probability was lower (and not significantly different from the tenants who had not yet been summoned) for the tenants who had already appeared in court but had not received an eviction order (yet). In order to further explore the data, we repeated the logistic regression analyses for single tenants and for multi-person households. Tables 4 and 5 present the final models for these groups.
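To make the link between the reported odds ratios and the probability curves in Figs 2 and 3 concrete, the toy calculation below plugs the published odds ratios into a logistic prediction. Only the odds ratios for rent arrears (2.48 per € 1,000) and for the summons phase (3.22) come from the text; the intercept and the single-tenant coefficient are illustrative placeholders, so the numbers produced are not the study's fitted values.

```python
# Rough illustration of how odds ratios translate into predicted probabilities.
import math

def predicted_probability(arrears_k: float, single: bool, summoned: bool,
                          intercept: float = -2.5) -> float:
    """Logistic prediction P(eviction order) from illustrative coefficients."""
    logit = (intercept
             + math.log(2.48) * arrears_k              # reported OR per EUR 1,000 arrears
             + (math.log(2.0) if single else 0.0)      # placeholder near the reported ~2x odds
             + (math.log(3.22) if summoned else 0.0))  # reported OR for summoned tenants
    return 1.0 / (1.0 + math.exp(-logit))

# e.g. a single tenant with EUR 2,000 in arrears who has already been summoned:
# print(round(predicted_probability(2.0, single=True, summoned=True), 2))
```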
For both single tenants and multi-person households, the level of rent arrears at T0 was a strong predictor of receiving an eviction order, but differences between these groups were found for other predictors. For single tenants, the odds of receiving an eviction order were three times greater when rent arrears were increased by € 1,000 (OR = 2.99; 95% CI = 1.69, 5.30). Furthermore, the odds of receiving an eviction order were more than two times greater for single tenants who had been fired from a job in the past three years compared to single tenants who had not experienced a recent job loss (OR = 2.28; 95% CI = 0.95, 5.47). Additionally, the odds of receiving an eviction order were more than seven times greater for single tenants who had received a summons from a bailiff compared to single tenants who had not received a summons. Marginal effects indicated that, controlled for rent arrears and phase in the eviction process, single tenants who had recently been fired from a job had a probability of .45 (95% CI = .28, .64) of receiving an eviction order, while the probability for single tenants who had not recently lost a job was .27 (95% CI = .16, .41). Controlled for rent arrears and recent job loss, single tenants had a probability of 0.19 (95% CI = .13, .29) to receive an eviction order if they had not been summoned yet, a probability of .64 (95% CI = .34, .86) if they had received a summons but had not had a court hearing yet, and a probability of .28 (95% CI = .12, .53) if they had had a court hearing. Discussion The aim of this study was to gain more insight into risk factors for receiving an eviction order. Our results indicate that the level of rent arrears is a strong predictor; higher rent arrears significantly increase the risk of receiving an eviction order. Additionally, single tenants were more likely to receive an eviction order, and tenants who had been summoned to appear in court and were waiting for their court hearing were more likely to receive an eviction order, compared to tenants who had not been summoned yet. Furthermore, different predictors were found for single tenants and for multi-person households. While rent arrears was found to be a significant predictor in both groups, among single tenants recent job loss and phase in the eviction process were significant predictors, while among multi-person households city was a significant predictor. Our results confirm that the level of rent arrears is an important predictor for receiving an eviction order. Increasing rent arrears decrease tenants' possibilities for recovery. Furthermore, the phase in the eviction process at the time of interview was found to be a significant predictor for eviction orders. Tenants who were between summoning and the court hearing had a higher chance to receive an eviction order than tenants who had not been summoned yet. This seems to comfirm the notion that the further tenants are in the eviction process, the smaller their chances are for recovery [7]. However, the group of tenants that was the furthest in the eviction process (the group that had already had a court hearing), did not have a significantly higher chance to receive an eviction order than tenants who had not been summoned yet. This may be explained by the fact that a court hearing can be a traumatic event for a tenant. 
The threat of an imminent eviction, when the housing association has the legal right to terminate the rental contract, may also serve as a strong motivator for tenants to take action towards repaying their rent arrears. Another explanation of this result is a certain selection bias: a court hearing is a very stressful event for tenants, so most tenants in that phase may not have wanted to participate in an interview. The tenants who did agree to be interviewed may have been the ones who were in less stressful circumstances, because they may have been given a second chance to repay their arrears in court, and therefore their chances of eventually receiving an eviction order were lower than for other tenants who had had a court hearing. Furthermore, this study demonstrates that single tenants are at a significantly higher risk of receiving an eviction order. This is in line with the observations of Dutch housing associations that the majority of evicted households are single tenants [12]. A previous study [21] also demonstrated that single tenants, especially men, are at an increased risk of eviction, and a study in Sweden [22] showed that 70% of the evicted households were single tenants. The lack of support from a partner or other household members may make it more difficult for these tenants to find a solution for their rent arrears and to avert receiving an eviction order. Further investigation of differences between single tenants and multi-person households showed significant differences between these groups. Among single tenants, besides rent arrears and phase in the eviction process, recent job loss was a significant predictor of receiving an eviction order. Single tenants generally do not have multiple income sources, so losing an income has a bigger impact than it has on families with multiple sources of income. Interestingly, among multi-person households the city is a significant predictor of receiving an eviction order. Households in Rotterdam and Utrecht were less likely to receive an eviction order than households in Amsterdam, Leiden and Nijmegen. This indicates that different local policies and housing association procedures affect households' chances to receive an eviction order. The results of our study have several implications for policies regarding the prevention of evictions, and provide insights that can contribute to developing targeted interventions to prevent evictions. First, since the level of rent arrears and the phase in the eviction process are strong predictors, early interventions are necessary; if at-risk households can be identified at an early stage, the rent arrears are more manageable and evictions can be prevented. Since many households with rent arrears have other debts as well, and the rent often is not the first unpaid bill, early identification of households with a variety of arrears cannot be done by housing associations alone. In Amsterdam, for example, the Vroeg Eropaf ("go for it") policy, which helps to identify households with financial difficulties at an early stage, in order to find solutions before a court process starts, indeed includes a large insurance company and utility providers, besides all social housing associations and social care institutions in the city [23]. Additionally, as single tenants are at a significantly higher risk of receiving an eviction order, special attention is needed for this group. Single tenants may need extra support and professional help in order to avert eviction. 
Single tenants who have recently experienced a job loss have a particularly higher chance of receiving an eviction order. This knowledge may be helpful to housing associations as they develop early interventions. It should be noted that the population of tenants at risk of eviction due to rent arrears may differ across countries, resulting from different local circumstances and policies. As this study indicated, risk factors even differ across Dutch cities; multi-person households in Rotterdam and Utrecht were less likely to receive an eviction order compared to multi-person households in Amsterdam, Leiden and Nijmegen. Each Dutch city has their own policies and projects regarding evictions. In Rotterdam, for example, intensive support is provided to families with severe problems, where families risk losing social assistance benefits or an eviction if they refuse the support that is offered [24]. While policies differ among cities, housing associations also have differing policies and procedures when families have accumulating rent arrears. Therefore, when developing interventions, it is important to study the local context and identify vulnerable households in that specific context. This study has provided insights into risk factors for eviction, and how these risk factors differ across groups. Similar studies in other countries are needed to determine if these risk factors are relevant in those contexts as well. One of the strengths of this study is its large scale. We included 344 tenants from five municipalities of different sizes, in order to make our sample more representative of the total Dutch population of tenants at risk of eviction due to rent arrears. To our knowledge, this is the first study that specifically focused on tenants facing eviction because of rent arrears, thus providing important clues to improve policy and practice to prevent evictions of these households. This vulnerable population was difficult to contact, because many tenants did not open their mail or answer their telephone, or indicated that the anxiety and stress made it impossible for them to participate in an interview. This may have caused a selection bias at T0: tenants who were experiencing high levels of stress, anxiety and/or shame may have been less inclined to participate in an interview. However, tenants participating in the study reported high levels of stress, sadness, powerlessness, trouble sleeping, and shame; this indicates that for many tenants, these emotions did not prevent them from participating. Another challenge related to data collection was the T1 data collection. T1 data was collected by contacting the participating housing associations and asking whether or not an eviction order was issued for the tenant (after receiving written permission from the tenant at T0 to do so). Not all housing associations were able and/or willing to provide us with the complete information six months after each interview, despite our efforts to ensure that this information would be provided. All this led to a rather high non-response for both our interview data and data from housing associations. However, due to the large scale of this study, there was still ample data to build our conclusions on. Our analyses of the missing data indicated that tenants who had a missing value for the level of rent arrears at the time of interview received an eviction order more often than tenants with a value for the level of rent arrears. 
It is possible that the tenants who did not answer the question about their rent arrears had lost control over and insight into their financial situation, or were unwilling to mention the level of rent arrears out of shame for the height of their debt. Therefore, our results may have underestimated the true effect of the level of rent arrears as a predictor. Another limitation of this study is that it was impossible to predict actual evictions. Because the process from rent arrears to eviction can be very long, it was not feasible to determine which tenants were eventually evicted, as this usually takes longer than six months. Eviction orders are often used by housing associations to pressure tenants into accepting help from debt counseling. The threat of an imminent eviction, when the housing association has the legal right to terminate the rental contract, may also serve as a strong motivator for tenants to take action towards repaying their rent arrears. Therefore, there may be a long process after an eviction order, and if this process eventually leads to an eviction, it often does not take place within a few months after the eviction order. While this study has provided some important insights into the risk factors for eviction orders, there is a great need for future research, to gain more insight into this vulnerable population and to develop targeted interventions. First, similar research should be conducted in other countries, in order to determine whether risk factors are similar across countries. Second, longer-term studies are needed to determine which risk factors are associated with actual evictions. Furthermore, it is important to examine why single tenants are at a higher risk of receiving an eviction order. More insight into all of the above will help to develop targeted, effective interventions to prevent evictions. To summarize, this study aimed to identify risk factors for tenants at risk of eviction due to rent arrears. Our results call for early identification of households with financial difficulties, because higher rent arrears and being later in the eviction process make recovery more difficult. Single tenants are at a higher risk to receive an eviction order. Therefore, targeted interventions should be developed to take these risks into account.
A genome-wide association study reveals a locus for bilateral iridal hypopigmentation in Holstein Friesian cattle Background Eye pigmentation abnormalities in cattle are often related to albinism, Chediak-Higashi or Tietz like syndrome. However, mutations only affecting pigmentation of coat color and eye have also been described. Herein 18 Holstein Friesian cattle affected by bicolored and hypopigmented irises have been investigated. Results Affected animals did not reveal any ophthalmological or neurological abnormalities besides the specific iris color differences. Coat color of affected cattle did not differ from controls. Histological examination revealed a reduction of melanin pigment in the iridal anterior border layer and stroma in cases as cause of iris hypopigmentation. To analyze the genetics of the iris pigmentation differences, a genome-wide association study was performed using Illumina BovineSNP50 BeadChip genotypes of the 18 cases and 172 randomly chosen control animals. A significant association on bovine chromosome 8 (BTA8) was identified at position 60,990,733 with a -log10(p) = 9.17. Analysis of genotypic and allelic dependences between cases of iridal hypopigmentation and an additional set of 316 randomly selected Holstein Friesian cattle controls showed that allele A at position 60,990,733 on BTA8 (P = 4.0e–08, odds ratio = 6.3, 95% confidence interval 3.02–13.17) significantly increased the chance of iridal hypopigmentation. Conclusions The clinical appearance of the iridal hypopigmentation differed from previously reported cases of pigmentation abnormalities in syndromes like Chediak-Higashi or Tietz and seems to be mainly of cosmetic character. Iridal hypopigmentation is caused by a reduced content of melanin pigment in the anterior border layer and iridal stroma. A single genomic position on BTA8 was detected to be significantly associated with iridal hypopigmentation in examined cattle. To our knowledge this is the first report about this phenotype in Holstein Friesian cattle. Electronic supplementary material The online version of this article (doi:10.1186/s12863-017-0496-4) contains supplementary material, which is available to authorized users. Background For many decades, eye color and eye color genetics has been an intensively studied field of research in humans and other species. More than 100 years ago, a relatively plain dominant-recessive model of inheritance of human brown and blue eye color has been assumed [1]. However, in recent years it became clear that iris pigmentation is under the control of a plethora of genes and is rather a quantitative than a simple Mendelian trait [2,3]. Pigmentation of the eye, skin, and hair is the result of melanin pigment production in melanocytes. Melanin producing cells contain specialized lysosome-related organelles, the melanosomes, depositing melanin pigment in mature stage. Brighter colored eyes usually have less melanin pigment in iris stroma than darker eyes [2,4,5]. This and other factors as pigmentation of the posterior pigment epithelium [5], the type of melanin (eu-or pheomelanin) in iris melanocytes [5], light-scattering, and absorption processes [4] are believed to influence human eye color determination. Changes in iris color have also been reported in cattle. Discolorations of the iris, either mono-or bilateral, complete or partial, are usually referred to as heterochromia iridis (HI). However, the phenotypic appearance differs remarkably between reported cases. 
The most prominent iris color variations were detected in cattle suffering from complete albinism, showing a pale blue iris with a white periphery [6][7][8]. Less distinct pigmentation anomalies were observed in non-albinotic HI cases, showing a bicolored iris with a central ring of blue and a peripheral ring of gray or brown [9,10]. Besides albinism, severe eye color changes in cattle were also observed in syndromes like Tietz [11] and Chediak-Higashi [12,13]. These syndrome related pigmentation alterations are usually accompanied by more restrictive anomalies. German Fleckvieh cattle with Tietz like syndrome exhibited bilateral deafness and colobomatous eyes [11]. Chediak-Higashi syndrome usually manifested in bruisability and bleeding tendency [12,13]. In albino cases with HI further clinical features related to eye development like nystagmus and blindness were detected [8]. Recently an alteration of iris coloration has been observed in Angus and Simmental breed. Affected cattle showed an oculocutaneous hypopigmentation (OH) with a pale blue iris and a tan periphery coupled with a change in coat color from black to chocolate. It is assumed that this aberration was introduced into the Simmental breed in the late 1950s by Angus founders and is inherited as an autosomal recessive trait. An amino acid exchange in the Ras-related Protein Rab-38 (RAB38) gene was identified as the disease causing mutation (Jon Beever, personal communication). Likewise, Rab38 cht / Rab38 cht mice with a mutation (G146T) in exon 1 of Rab38 develop a similar phenotype with chocolate coat color and ocular hypopigmentation [14,15]. In the current study the clinical, histological and molecular examination of bilateral iris hypopigmentation in 18 Holstein Friesian cattle are described. The aim of the study was the description of the phenotype, the identification of pigmentation alterations in the eye of affected cattle and the determination of the underlying genetics of iridal hypopigmentation in cattle. Animals, pedigree information and DNA samples A total of 18 Holstein Friesian (HF) cattle (9 male, 9 female) with hypopigmented irises originating from eight different farms were used for this study. Pedigree data were obtained from the German livestock database service provider (VIT) and checked for shared common ancestors. Complete pedigree data were available for ten individuals, while for eight animals only paternal pedigree data were present. Pedigrees of HF cases were constructed using Pedigraph [16]. DNA was extracted from EDTA-blood using MagNa Pure LC DNA Isolation Kit I (Roche Diagnostics Deutschland GmbH, Mannheim, Germany). Control DNA was obtained from the depository at the Institute of Veterinary Medicine (Göttingen, Germany). Histology To determine the exact cause of the hypopigmentation, histological evaluation of irises of unaffected and affected animals was conducted. Eyes of each animal were completely enucleated immediately after slaughtering. The dorsal half of each freshly enucleated eyeball was opened by incision of the sclera midway between the cornea and the optic nerve. After removal of the vitreous body, eyeballs were immersed in 4% phosphate-buffered formaldehyde. After fixation for at least 48 h, lenses were removed and four cross sections were made through the anterior half of the eyeballs including dorsal, medial, ventral, and lateral aspects of the iris and adjacent structures of the anterior eye including ciliary body and cornea. 
Additionally, a cross section was prepared from the lens and from the caudal half of the eyeball at the level of the optic nerve. Trimmed tissue samples were paraffin-embedded, sectioned at 3 μm, and stained with hematoxylin and eosin (HE) for light microscopic examination. Genotyping, genome-wide association study (GWAS) and statistical analysis For the GWAS, genotypes of 172 randomly selected HF cattle were used as controls. The 18 cases of iridal hypopigmentation were also genotyped using the Illumina BovineSNP50 BeadChip. Final reports were generated using GenomeStudio V2011.1 (Illumina, San Diego, USA) and imported into SNP & Variation Suite (SVS) 8.5.0 (Golden Helix, Bozeman, USA). Genotype data were screened through a series of quality control criteria, including Mendelian errors, minor allele frequency (MAF) < 1%, p-value of Fisher's Hardy-Weinberg equilibrium (HWE) test < 0.001 (based on controls), and single nucleotide polymorphism (SNP) call rate < 98%, reducing the data set from 54,610 to 44,952 SNPs. Associations were calculated under an additive, recessive, and dominant model [17]. The additive genetic model fitted the data best. We did not detect significant evidence for population stratification. Genomic positions refer to NCBI UMD3.1.1. Genotypic and allelic dependences were calculated between the 18 cases of iridal hypopigmentation and an additional set of 316 randomly chosen controls. Genotypes of this validation cohort were extracted from another data set generated using the Illumina BovineSNP50 BeadChip. Genotypes were compared using 3×2 or 2×2 contingency tables and Fisher's exact or χ² statistics (df = 2). P < 0.005 was considered to be significant. Calculations were done using Microsoft Excel for Mac 2011 (14.7.1). HWE χ² values were calculated according to Rodriguez et al. [18], and HWE was assumed to hold at p > 0.05 (df = 2). A haplotype association test was performed using a moving window of five markers. The expectation-maximization algorithm was applied using 50 iterations and a convergence tolerance of 0.0001 [19]. Results were corrected for multiple testing according to Bonferroni. Comprehensive clinical examination of affected cattle As shown in Fig. 1, affected HF cattle had a normal, breed-specific coat color with no obvious color deviations of eyelids and eyelashes. Furthermore, cases did not show any neurological deficits, i.e. disturbance of the level of consciousness, mentation and behaviour, posture and gait, postural reactions, as well as spinal and cranial nerve function [21]. Signs of a Horner syndrome, which has been described in conjunction with HI in humans, were absent [22,23]. Pupillary light and blink reflexes were normal. No spontaneous nystagmus or strabismus could be observed. The animals had a physiological nystagmus. The ocular fundus was normal, and retinitis pigmentosa was excluded.
Fig. 1 Phenotypic appearance of iridal hypopigmentation. HF cattle were ophthalmologically and neurologically examined and irises underwent histologic evaluation. a and b: Coat color of affected cattle was typical for the breed and without any sign of albinism. Both cattle were normally developed at the time of examination. c-f: Iris color of cases a and b. The degree of discoloration clearly differed between cases. All affected cattle showed a bicolored iris with a central ring of silver-blue and a peripheral ring of brown-gray. Iris color within one iris showed alternating darker and lighter parts.
To exclude common infectious diseases, animals were also tested for bovine viral diarrhea virus (bovine virus diarrhea), bovine herpesvirus type 1 virus (infectious bovine necrotic rhinotracheitis), bovine leukaemia virus (bovine lymphomatosis), Brucella spp. (bovine brucellosis), bluetongue virus (bluetongue disease), Mycobacterium paratuberculosis (paratuberculosis), Schmallenberg virus and Neospora caninum (bovine neosporosis). All test results were negative. Summarizing the clinical analysis of the examined animals, a bilateral hypopigmentation caused by a hitherto unknown genetic variation strictly affecting iridal coloration was suspected. As the affected animals did not show any other anomalies, syndromes like Chediak-Higashi or Tietz were excluded. Detailed ophthalmological examination of the irises of affected animals showed two merging, but clearly differentiable shades of color, a bluish center and a grayish peripheral ring. Although all animals showed a clear discoloration of the iris, there was considerable variation of iris coloration between animals. The color of the central iridal parts ranged from silveryblue to gray-blue with darker and lighter parts. In the periphery, irises were light brown to gray with occasional light gray zones (Fig. 1). In some animals, the degree of discoloration differed within the peripheral iris and showed alternating darker and lighter regions. Partial brownish corneoscleral pigmentation was visible in a few animals. Determination of common ancestors To identify a potential founder of the eye color phenotype, a pedigree analysis was performed. Available pedigree data of HF cattle revealed that the 18 cases descended from 11 different sires whereas dams were not closely related. None of the dams had been reported to show the typical iris hypopigmentation. In total 10 male ancestors, born between 1954 and 1983, were identified being present in the dam and sire line of every case. The available pedigree data did not reveal a single common founder for HF cases, and HF cases were not closely related. Histological examinations Histological evaluation of irides revealed less melanin pigment deposition in the anterior border layer and the iridal stroma in the affected animals compared to the normal iris of the unaffected control animals (Fig. 2). This form of hypopigmentation was evident in all examined localizations of the irises in the affected animals. Only the degree of hypopigmentation varied between the different analysed regions and within irises at the same locations. In general, hypopigmentation seemed to be more pronounced in central iridal parts (pupillary zone and central parts of the ciliary zone) and in the dorsal as well as ventral iris. Iridal thickness, stromal density, and cellular composition were consistent in all examined animals. Differences in pigmentation of the posterior pigmented epithelium of the iris, other uveal structures (ciliary body, choroid), as well as of the retinal pigment epithelium were not detected. All other examined ocular structures were inconspicuous. Determination of associated chromosomal regions To identify associated chromosomal regions a genomewide association study was performed. The 18 cases were compared to 172 unrelated control cattle. Seven highly associated SNPs above the Bonferroni genome-wide significance level were identified on bovine chromosome 8 (BTA8) spanning from 57.3 to 65.3 Mb (Fig. 3; Additional file 1: Table S1). 
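As a point of reference for these results, the sketch below shows (i) a per-SNP additive-model scan with the Bonferroni threshold implied by the 44,952 post-QC SNPs, and (ii) the kind of 2×2 allelic odds-ratio comparison reported in the next paragraph. This is an assumed re-implementation, not the SVS/Excel pipeline the authors used; the genotype matrix, case/control labels, and allele counts are hypothetical placeholders.

```python
# Assumed workflow for illustration only; not the authors' SVS pipeline.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2_contingency

def additive_scan(dosages: np.ndarray, status: np.ndarray) -> np.ndarray:
    """-log10 p-values from a per-SNP logistic model on allele dosage (0/1/2)."""
    out = np.full(dosages.shape[1], np.nan)
    for j in range(dosages.shape[1]):
        X = sm.add_constant(dosages[:, j].astype(float))
        try:
            fit = sm.Logit(status, X).fit(disp=0)
            out[j] = -np.log10(fit.pvalues[1])
        except Exception:
            pass  # e.g. monomorphic SNPs or separation
    return out

# Bonferroni genome-wide threshold for 44,952 SNPs (~5.95 on the -log10 scale);
# the top SNP on BTA8 reached -log10(p) = 9.17.
bonferroni_threshold = -np.log10(0.05 / 44_952)

def allelic_odds_ratio(a_case, b_case, a_ctrl, b_ctrl, z=1.96):
    """Odds ratio for the risk allele with a Wald 95% CI and chi-square p-value."""
    chi2, p, _, _ = chi2_contingency(
        np.array([[a_case, b_case], [a_ctrl, b_ctrl]], float), correction=False)
    or_ = (a_case * b_ctrl) / (b_case * a_ctrl)
    se = np.sqrt(1 / a_case + 1 / b_case + 1 / a_ctrl + 1 / b_ctrl)
    ci = (np.exp(np.log(or_) - z * se), np.exp(np.log(or_) + z * se))
    return or_, ci, p

# Placeholder allele counts (36 case alleles, 632 control alleles) chosen only to
# roughly reproduce the reported OR of ~6.3 for allele A at BTB-00352779:
# print(allelic_odds_ratio(a_case=30, b_case=6, a_ctrl=280, b_ctrl=352))
```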
The SNP with the highest -log 10 (p) = 9.17 (BTB-00352779) was located at position 60,990,733 (NCBI UMD3.1.1). It is noteworthy that six genes, i.e. CLTA, GNE, RNF38, SHB, TRIM14, and NANS, located in the region from 57.3 to 65.5 Mb on BTA8 have been associated with earlobe color in chicken [24]. However, so far none of these genes have been reported to be directly involved in eye pigmentation. To validate the identified associations and to determine the genotypic and allelic dependences compared with an unrelated additional control cohort, 316 Holstein Friesian cattle were randomly selected from a previously generated data set and compared with the genotypes of the 18 cases. Table 1 summarizes the results and statistics of the comparison of the genotypes. Both groups (cases/controls) were in Hardy-Weinberg equilibrium at BTB-00352779 (cases: χ 2 = 5.8, controls: χ 2 = 0.72). However, as determined by using a 3x2 contingency table and Fisher's exact statistics genotypes were significantly not independent at BTB-00352779 with P = 6.19e-07. To evaluate which allele was associated with a higher risk of developing an iridal hypopigmentation, odds ratios were calculated ( Table 2). As shown in Table 2 alleles were significantly dependent using a 2x2 contingency table and χ 2 -statistics. The presence of the A-allele was leading to a 6.3-times higher chance to develop an iridal hypopigmentation (95% CI: 3.0172-13.1727). Using a moving window of five SNPs a haplotype association was calculated for the 18 cases. Highly significant associations were detected for five haplotypes including BTB-00352779 and flanking markers (Table 3). Haplotype AAAAA with BTB-00352779 as first marker showed the highest odds ratio (OR = 8.31; 95% CI: 3.62-19.08). Exclusion of RAB38 as candidate for bilateral iridal hypopigmention Although the analysed cases of iridal hypopigmentation did not show the typical phenotype described for oculocutaneous hypopigmentation (OH) in Angus and Simmental cattle, i.e. eyes with pale blue irises around the pupil, a tan periphery and slightly bleached black coats looking grey or red, bovine RAB38 was screened for mutations in the 20 cases and controls. In addition, RAB38 is located on BTA29 which did not show any associated SNP in the GWAS. However, to exclude RAB38 as causative for iridal hypopigmentation coding regions including splice sites were amplified and fragments sequenced. As expected no disease causing polymorphisms or alterations from the reference genome for cases and controls were detected. Discussion Iridal hypopigmentation in cattle is attributed to a reduced content of melanin pigment in the anterior border layer and iridal stroma. These findings clearly differ from previously reported histological observations in HI cases, showing a reduction in eye pigmentation in different uveal structures as different iris layers, retinal pigment epithelium (RPE) [8], and choroid [6,25]. Leipold and Huston observed a non-albinotic case of HI with reduced iris pigmentation in anterior border layer, iris stroma, and posterior pigment epithelium in Hereford cattle. Pigmentation in the remaining pigmented eye structures was also reduced and iridal stroma was hypoplastic [25]. Iris hypopigmentation in cases was exclusively located in the anterior border layer and the iridal stroma, and no structures were fully devoid of pigment. 
Pigmentation differences as seen in the present cases of iris hypopigmentation seemed to be rather comparable with naturally occuring eye color variances in humans than to HI. Human eye color variance is mainly due to differences in the amount of melanin pigment in the anterior border layer and iridal stroma. Pale colored eyes generally contain less melanin compared to brown eyes [2,5]. In cattle with iris hypopigmentation, gradual iridal brightening can be explained by a reduced melanin content of variable intensity affecting the anterior border layer and the iridal stroma, resulting in blue to graybrown instead of the normal black color. Differences in coloration between central and peripheral parts of the iris, namely the rather bluish color of the central iridal ring and the gray to light brown color of peripheral iridal parts, are probably attributed to the increasing thickness of the iris towards the periphery, associated with increased amounts of collagen in the peripheral iridal stroma. As the amount of collagen fibres within the iridal stroma is known to be another important determining factor of eye color, at least in humans [2], it may be assumed that variations in iris brightening between central and peripheral iridal parts in cattle with iris hypopigmentation result from differences in the amount of collagen and, thus, in the backscattering properties of the respective iridal regions. In this respect it is noteworthy that in the associated region on BTA8 collagen gene COL15A1 is located at position 64,437,276-64,540,739 which has been shown to be expressed in multiple ocular structures [26]. In contrast, the amount of collagen in the iridal stroma of normal dark brown to black-eyed cattle seems to have a minor influence on iris coloration as almost all light is probably absorbed by the extensive eumelanin deposits in the anterior border layer and the iridal stroma accounting for a homogenous dark eye color [27,28]. Variations of the degree of discoloration within the peripheral iris with alternating darker and lighter regions in affected animals of the present study are caused by regional differences in the melanin content of the anterior border layer and the iridal stroma, which were confirmed by histological analyses. Taking all clinical findings together, there were no other abnormalities found in conjunction with the iris hypopigmentation. Although no long-term effects were examined, iris hypopigmentation seems to be mainly of cosmetic character. Under physiological conditions melanin pigment protects from ultraviolet (UV) light, and humans with lighter eye color seem to be more susceptible to age related macula degeneration [29] and uveal melanoma [30,31]. This is comparable with blue-eyed horses that are at higher risk to develop ocular squamous cell carcinoma [32]. Conclusion The bilateral iridal hypopigmentation in HF cattle described here was due to a reduction of melanin pigment in the anterior border layer and iridal stroma. The phenotype was highly associated with a chromosomal region on BTA8. Haplotype association analysis showed that the presence of the A-alleles at the associated SNPs significantly increased the chance of developing an iridal hypopigmentation.
Early plasma proteomic biomarkers and prediction model of acute respiratory distress syndrome after cardiopulmonary bypass: a prospective nested cohort study Background: Early recognition of the risk of acute respiratory distress syndrome (ARDS) after cardiopulmonary bypass (CPB) may improve clinical outcomes. The main objective of this study was to identify proteomic biomarkers and develop an early prediction model for CPB-ARDS. Methods: The authors conducted three prospective nested cohort studies of all consecutive patients undergoing cardiac surgery with CPB at Union Hospital of Tongji Medical College Hospital. Plasma proteomic profiling was performed in ARDS patients and matched controls (Cohort 1, April 2021–July 2021) at multiple timepoints: before CPB (T1), at the end of CPB (T2), and 24 h after CPB (T3). Then, for Cohort 2 (August 2021–July 2022), biomarker expression was measured and verified in the plasma. Furthermore, lung ischemia/reperfusion injury (LIRI) models and sham-operation were established in 50 rats to explore the tissue-level expression of biomarkers identified in the aforementioned clinical cohort. Subsequently, a machine learning-based prediction model incorporating protein and clinical predictors from Cohort 2 for CPB-ARDS was developed and internally validated. Model performance was externally validated on Cohort 3 (January 2023–March 2023). Results: A total of 709 proteins were identified, with 9, 29, and 35 altered proteins between ARDS cases and controls at T1, T2, and T3, respectively, in Cohort 1. Following quantitative verification of several predictive proteins in Cohort 2, higher levels of thioredoxin domain containing 5 (TXNDC5), cathepsin L (CTSL), and NPC intracellular cholesterol transporter 2 (NPC2) at T2 were observed in CPB-ARDS patients. A dynamic online predictive nomogram was developed based on three proteins (TXNDC5, CTSL, and NPC2) and two clinical risk factors (CPB time and massive blood transfusion), with excellent performance (precision: 83.33%, sensitivity: 93.33%, specificity: 61.16%, and F1 score: 85.05%). The mean area under the receiver operating characteristics curve (AUC) of the model after 10-fold cross-validation was 0.839 (95% CI: 0.824–0.855). Model discrimination and calibration were maintained during external validation dataset testing, with an AUC of 0.820 (95% CI: 0.685–0.955) and a Brier Score of 0.177 (95% CI: 0.147–0.206). Moreover, the considerably overexpressed TXNDC5 and CTSL proteins identified in the plasma of patients with CPB-ARDS, exhibited a significant upregulation in the lung tissue of LIRI rats. Conclusions: This study identified several novel predictive biomarkers, developed and validated a practical prediction tool using biomarker and clinical factor combinations for individual prediction of CPB-ARDS risk. Assessing the plasma TXNDC5, CTSL, and NPC2 levels might identify patients who warrant closer follow-up and intensified therapy for ARDS prevention following major surgery. 
Introduction Innovations, such as the use of more biocompatible surfaces and microcircuits, as well as the increasing expertise of surgeons, anesthesiologists, and perfusionists, have transformed cardiac surgery and cardiopulmonary bypass (CPB) into relatively conventional procedures.Despite these refinements, postoperative organ injuries can present in patients with CPB surgery [1] .Acute respiratory distress syndrome (ARDS) after CPB surgery is associated with a widely variable incidence from 0.4 to 20% [2,3] and a high mortality rate of 80% [4] , and there is no specific treatment.No obvious protection can be achieved by ischemic preconditioning [5] or anti-inflammatory treatment [6] . Given that decades of research have failed to find effective therapies for ARDS, the National Heart Lung Blood Institute has recommended that future ARDS research be directed toward identifying patients at high risk [7] .In the past, the prediction model and diagnostic criteria of ARDS were only based on clinical data and physiological variables [8][9][10] .Methods containing only clinical information fail to meet the needs of ARDS prediction because of the relatively low positive predictive value.Biomarkers may augment prognosis and stratification strategies [11] .For ARDS in acute pancreatitis patients, interleukin-6, interleukin-8, and the systemic inflammatory response syndrome score can be used as comprehensive composite markers to predict the future development of ARDS [12] .Elevated mucin1 levels in plasma had a good predictive value for whether sepsis patients would develop ARDS [13] .Biomarker-directed (tumorigenicity-2 and interleukin-6) ventilator management may improve outcomes in patients with ARDS [14] .Therefore, a comprehensive understanding of the corresponding protein changes in CPB-ARDS may allow patients who are at risk to be identified before progression occurs.Nontargeted proteomics has the potential to identify multiple biomarkers and key pathways of ARDS [15] . In this study, we aimed to compare proteome expression in individuals undergoing cardiac surgery with CPB who do and do not develop ARDS using data-independent acquisition (DIA) proteomics technology and to create a machine learning-based model for the prediction of CPB-ARDS.Furthermore, we explored the dynamic changes in proteins in the lung tissues obtained from rat models of lung ischemia/reperfusion injury (LIRI). Methods This prospective observational study has been reported in line with the Strengthening The Reporting Of Cohort Studies in Surgery (STROCSS) Guidelines [16] , and the STROCSS checklist is provided in Table A.1 (Appendix A) (Supplemental Digital Content 1, http://links.lww.com/JS9/A835).This study was registered on clinicaltrials.gov(Registration No. NCT04696172) on 6 January 2021 and was approved by the Institutional Review Board of the Wuhan Union Hospital (Approval No. 20200518).The animal experiment protocol was in accordance with the ARRIVE guidelines [17] (Supplemental Digital Content 2, http:// links.lww.com/JS9/A836) and was approved by the Experimental Animal Center of Tongji Medical College (IACUC No. 2916). 
Study design A nested case-control study design was chosen, discovery-based biomarker screening was used in Cohort 1 (April 2021-July 2021, n = 150), and validation strategies were used in Cohort 2 (August 2021-July 2022, n = 375) [18] , at Union Hospital of Tongji Medical College Hospital, a tertiary teaching hospital.Twenty pairs of case-controls in Cohort 1 were selected for perioperative proteomics profile determination and screening of differentially expressed proteins (DEPs) at multiple timepoints.Quantitative analysis and validation of predictive proteins, as well as training and testing of a multimarker prediction model, were carried out within Cohort 2. To assess the external validity of the prediction model, prediction accuracy was determined on Cohort 3 (January 2023-March 2023, n = 124).Furthermore, LIRI rat models were established to explore the expression of protein of the lung tissue with the use of proteomics technology, western blotting, and immunofluorescence experiments (Fig. 1). Study populations and groups All adult subjects over 18 years old undergoing elective CPB for valves with or without coronary artery surgery were eligible for the study.The exclusion criteria included the following: patients who refuse to sign the informed consent form or the attending physician refuses to allow the patient to join the study; nonelective surgery (surgery at a nonelective time or emergency surgery); preoperative pulmonary insufficiency, pulmonary hypertension, and pulmonary inflammation; the absence of any specimen and clinical data; and patients who had a failed operation, needed extracorporeal membrane oxygenation support, or underwent CPB operation again within 3 days after operation. Patients who developed ARDS in the first 3 days after CPB were placed into the ARDS group.In Cohort 1 for the proteomics analyses, once the case was determined, an additional patient was matched as a control in the non-ARDS patient group based on the following criteria: same group of surgeons; same type of operation; same sex; and the differences of BMI and age were within 10% compared with the ARDS patient [19] .In Cohort 2, for the development of the prediction model and to fully consider the prediction contribution of clinical factors, the controls were only matched by the same group of surgeons and randomly selected from non-ARDS patients (case:control = 1:2).Multiple clinical variables and protein variables were included in the multivariable logistic regression. The sample size for the development of the final multivariable logistic regression should be more than 20 times the number of HIGHLIGHTS • Plasma thioredoxin domain containing 5 (TXNDC5), cathepsin L (CTSL), and NPC intracellular cholesterol transporter 2 (NPC2) may be predictive biomarkers for acute respiratory distress syndrome (ARDS) after cardiopulmonary bypass (CPB) surgery.• A parsimonious model based on proteomic and clinical features is developed and validated with excellent performance in predicting CPB-ARDS.• TXNDC5 and CTSL are upregulated in the lung tissues of lung ischemia-reperfusion injury rats. predictors to efficiently avoid over fitting.All participants underwent CPB under total intravenous general anesthesia and were admitted to the cardiac surgery intensive care unit (CICU).The standardized CPB operation, anesthesia and ventilation protocol, and postoperative treatment procedures of the patients are shown in Method A.1 (Appendix A) (Supplemental Digital Content 1, http://links.lww.com/JS9/A835). 
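For concreteness, a minimal sketch of the Cohort 1 matching rule described above is given below; the `Patient` record and its field names are hypothetical, and this is not the authors' matching code.

```python
# Toy illustration of the 1:1 matching rule for Cohort 1: same surgical team,
# same operation type and sex, and BMI and age each within 10% of the ARDS case.
from dataclasses import dataclass

@dataclass
class Patient:
    surgeon_team: str
    operation: str
    sex: str
    bmi: float
    age: float

def within_10_percent(case_value: float, candidate_value: float) -> bool:
    return abs(candidate_value - case_value) <= 0.10 * case_value

def is_eligible_control(case: Patient, candidate: Patient) -> bool:
    return (candidate.surgeon_team == case.surgeon_team
            and candidate.operation == case.operation
            and candidate.sex == case.sex
            and within_10_percent(case.bmi, candidate.bmi)
            and within_10_percent(case.age, candidate.age))
```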
Clinical data and outcome data collection Potential perioperative ARDS clinical risk factors that have been reported in previous studies were collected by trained investigators [8,9,[20][21][22] .The clinical variables were collected from preoperation to the end of the CPB operation, only representative indicators for prediction before the onset of ARDS were considered [23] .The following data were included (please refer to Method A. The primary outcome variable was the development of ARDS within 3 days after CPB.Based on the Berlin definition [10] , ARDS was independently confirmed by two physicians who reviewed the medical records (Method A.3, Supplemental Digital Content 1, http://links.lww.com/JS9/A835).Chest radiograph of these patients were performed daily or twice per day.Blood gases were examined every 4-6 h.Respiratory failure in patients with ARDS was not fully explained by cardiac failure or fluid overload and was evaluated twice from echocardiography, clinical symptoms, and other indicators. DIA proteomic analyses and validation Blood samples were obtained from all consecutive patients included in the cohort at three timepoints: prior to CPB after anesthesia induction (T1), immediately after CPB (T2), and 24 h after CPB (T3).All blood samples were collected from the artery and centrifuged at 3000g for 15 min, and the supernatants were aliquoted and stored at − 80°C.After the outcome of the patients in the cohort was determined through follow-up, plasma samples of the case and control groups were used for DIA proteomic techniques and enzyme-linked immunosorbent assay (ELISA) analyses within 6 months after collection.No more than two freeze-thaw cycles were permissible for each sample. The laboratory procedures were conducted blind to the casecontrol status.After liquid chromatography-mass spectrometry (LC-MS/MS) analysis, raw MS data were searched using MSFragger software (Method A.4, Supplemental Digital Content 1, http://links.lww.com/JS9/A835).The searched results were filtered with 1% FDR at both the protein and peptide levels.The missing values of protein abundance were filled with half of the global minimum and half of the peptide minimum according to random tail imputation [24] . Animal surgery and experimental protocol LIRI is the most common lung injury in CPB-associated lung injury during cardiac surgery [25] .The LIRI rat models were established by the left hilar clamp as previously published [26] .Fifty male Sprague Dawley (SD) rats ((Beijing Vital River Laboratory Animal Technology Co., Ltd. ; Certificate number SYXK2021-0011), weighing 300 20 g, were randomly divided into four groups: sham group (n = 16), ischemia-reperfusion for 2 h group (n = 9), ischemia-reperfusion for 6 h group (n = 16) and ischemiareperfusion for 24 h group (n = 9).The number of animals for different experimental procedures at different timepoints after reperfusion is summarized in Method A.6 (Supplemental Digital Content 1, http://links.lww.com/JS9/A835).After reperfusion, left lung tissue and serum were obtained.The severity of lung injury was scored using a semiquantitative scoring system as described previously [27,28] .Please refer to Method A.6 (Supplemental Digital Content 1, http://links.lww.com/JS9/A835) for further details. 
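Returning to the proteomic data handling described above, the missing-value rule ("half of the global minimum and half of the peptide minimum") is stated compactly; one plausible reading is sketched below, and the exact randomised-tail scheme of ref. [24] may differ.

```python
# Hedged sketch of left-tail missing-value imputation for protein abundances:
# missing values are replaced with half of the peptide minimum, falling back to
# half of the global minimum for peptides with no observed values.
import numpy as np
import pandas as pd

def impute_left_tail(abundance: pd.DataFrame) -> pd.DataFrame:
    """Rows = peptides/proteins, columns = samples, NaN = not quantified."""
    global_min = np.nanmin(abundance.values)      # smallest observed value overall
    filled = abundance.copy()
    for idx in filled.index:
        row = filled.loc[idx]
        row_min = row.min(skipna=True)            # per-peptide minimum (NaN if all missing)
        fill_value = 0.5 * (row_min if pd.notna(row_min) else global_min)
        filled.loc[idx] = row.fillna(fill_value)
    return filled

demo = pd.DataFrame({"s1": [10.0, np.nan], "s2": [12.0, 8.0]}, index=["P1", "P2"])
print(impute_left_tail(demo))
```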
Protein analyses of rat lung samples For the proteomic profile at the tissue level for certain LIRI groups, LC-MS/MS experiment-based proteomic analyses were conducted on the lung samples of the LIRI and sham rats.Upon identifying CPB-ARDS circulating predictive proteins in the clinical cohorts, we compared the tissue-level expression of these proteins in the LIRI and sham groups (1) by incubating frozen lung tissue homogenate with primary specific antibodies and performing Western blotting and (2) by comparing the protein expression in lung tissue with immunofluorescence staining and confocal microscopy (Nikon A1R). Screening of DEPs and predictive proteins in Cohort 1 For circulating proteomic data from Cohort 1, the fold change (FC) value was calculated based on the ratio of ARDS/non-ARDS.The statistical significance was calculated for proteins using the paired two-sided t-test (P < 0.05) [29] .DEPs were defined as those with a significant difference between ARDS and non-ARDS (P-value <0.05, and |log 2 (FC)| >0.263).To further investigate potential pathological and biological mechanisms relevant to CPB-ARDS, we performed Gene Ontology (GO)/ Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment and protein-protein interaction (PPI) analyses (Method A.7, Supplemental Digital Content 1, http://links.lww.com/JS9/A835) The Least Absolute Shrinkage and Selection Operator (LASSO) regression and the extreme gradient boosting (XGBoost) model were used to screen predictive protein features from the DEPs.The DEPs were sorted according to the importance score (gain percent).DEPs with gain percentages greater than 10% were included as candidate predictive proteins for subsequent modeling. Development and validation of a prediction model in Cohorts 2 and 3 The distribution differences of clinical features were compared using Student's/Welch's t-test or Wilcoxon rank sum exact test for continuous variables and Pearson's χ2-test or Fisher's exact test for categorical variables among the CPB-ARDS and non-ARDS groups from Cohort 2.Then, the clinical variables with P < 0.05 were entered into a stepwise logistic regression algorithm (SLR) to screen clinical predictors and develop the clinical prediction model. For the collection time of samples, the earlier collection of samples in the development of ARDS may deduce the biological signals of ARDS in a timely manner and enhance efforts to prevent and intervene [30] .Considering that time point T2 was immediately after the CPB injury and before the occurrence of ARDS, DEPs at T2 may be more conducive to the prediction of the early stages of ARDS.Then, quantitative analyses were conducted to validate the differential expression of the candidate predictive proteins at T2. 
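For illustration, the DEP screen defined above (paired two-sided t-test with P < 0.05 and |log2(FC)| > 0.263, with FC the ARDS/non-ARDS ratio) can be written as follows; variable names and the synthetic demo data are illustrative, not the authors' pipeline.

```python
# Hedged sketch of the DEP screen: paired t-test across case-control pairs plus a
# fold-change cut-off. `ards` and `non_ards` are (n_pairs x n_proteins) abundance arrays.
import numpy as np
from scipy.stats import ttest_rel

def screen_deps(ards: np.ndarray, non_ards: np.ndarray, protein_ids,
                p_cut: float = 0.05, log2fc_cut: float = 0.263):
    deps = []
    for j, pid in enumerate(protein_ids):
        fc = ards[:, j].mean() / non_ards[:, j].mean()    # ARDS / non-ARDS ratio
        _, p = ttest_rel(ards[:, j], non_ards[:, j])      # paired two-sided t-test
        if p < p_cut and abs(np.log2(fc)) > log2fc_cut:
            deps.append(pid)
    return deps

rng = np.random.default_rng(0)
ards_demo = rng.lognormal(mean=1.3, sigma=0.2, size=(20, 3))
ctrl_demo = rng.lognormal(mean=1.0, sigma=0.2, size=(20, 3))
print(screen_deps(ards_demo, ctrl_demo, ["TXNDC5", "CTSL", "NPC2"]))
```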
Finally, the protein predictors and the screened clinical predictors were combined to construct the combined CPB-ARDS prediction score (CAPS) model.An interactive web-based dynamic nomogram application was built with Shiny, version 0•13•2•26.In addition, the nomogram was subjected to 3-fold, 5-fold, and 10fold cross-validation for internal validation to assess its predictive accuracy.The CAPS model was evaluated from three aspects: model differentiation ability (receiver operating characteristic curve, ROC), consistency (calibration curve), and clinical utility (decision curve analysis, DCA; clinical impact curve, CIC).We further calculated the sensitivity, specificity, precision, and F1 score to evaluate the ability of the models.Specific calculation methods are shown in Method A.8 (Supplemental Digital Content 1, http:// links.lww.com/JS9/A835).To externally validate the results, the cumulative points of each patient in the validation cohort (Cohort 3) were computed based on the final model established using the training dataset (Cohort 2).Subsequently, a SLR was performed using the cumulative points as a factor, and finally, the ROC and calibration curve were derived and reported. All assessments of predictors were conducted blind to the casecontrol status.All computations were conducted in the R environment, version R version 4•2•0 (April 2022).The results with P ≤ 0•05 were considered statistically significant. Study design and patients For the proteomic profiling and protein biomarker discovery study, plasma samples of 20 CPB-ARDS and 20 matched non-ARDS patients within Cohort 1 (150 patients) underwent LC-MS/MS-based proteomic analyses.In Cohort 2 (375 patients), 50 CPB-ARDS and 100 randomly selected non-ARDS patients (case: control = 1:2) were included for biomarker validation and the establishment of the prediction model.In Cohort 3 (124 patients), 20 CPB-ARDS and 20 randomly selected non-ARDS patients were included for external validation of the prediction model (Fig. 1).The specific screening process and numbers that signified potentially eligible, examined for eligibility, and confirmed eligible, that were included in the study, completed during followup, and analyzed in Cohorts are all shown in Altered plasma proteomic profiling in the perioperative period of CPB-ARDS The clinical characteristics of the patients for proteomic profiling are recorded in Table A.2 (Supplemental Digital Content 1, http:// links.lww.com/JS9/A835).Compared with the CPB-ARDS group, the difference in clinical factors in the non-ARDS group after random matching was not statistically significant.For the 120 plasma samples from Cohort 1 at three timepoints, we obtained 6284 peptides, and 709 human proteins were quantified.All proteins are shown in Table A. The PPI network based on the String/Genemania database and enriched GO/KEGG pathways after CPB (at T2 and T3) were analyzed.Neutrophil degranulation (GO: 0043312) and neutrophil activation involved in the immune response (GO: 0002283) were the top enriched biological processes at both T2 and T3 (Fig. 2 D, G; the gene-ratio and the adjusted P-value of the top 5 terms are shown in Tables A.5 and 6).The key node proteins in the PPI network based on the String database after CPB (T2 and T3) were involved in biological processes related to neutrophil immunity (Fig. 
2 F, I) (combined score and degree are shown in Table A.7, Supplemental Digital Content 1, http://links.lww.com/JS9/A835).Moreover, these DEPs frequently interacted with one another in PPI networks from the Genemania database at T2 and T3 (Fig. 2 E, H).Coexpression, colocalization, and physical interactions were the major PPI modes among DEPs after CPB.The PPI network and enriched GO/KEGG pathways before CPB (at T1) are shown in Figure A.5 (Supplemental Digital Content 1, http://links.lww.com/JS9/A835). Machine learning-based inference of biomarkers For proteomic data and by using LASSO regressions, 8, 18, and 7 DEPs were identified as candidate features for XGBoost models at T1, T2, and T3, respectively (Fig. A.6, Supplemental Digital Content 1, http://links.lww.com/JS9/A835).According to the feature contributions calculated by the XGBoost model for proteins at the end of CPB (T2), the 11 most important protein features were obtained, which could distinguish CPB-ARDS and non-ARDS (Fig. 3).The top four protein features were considered candidate biomarkers for the prediction of CPB-ARDS, including NPC2, CD56, TXNDC5, and CTSL, with gain percentages exceeding 10%.XGBoost analyses for predictive protein features were also obtained at T1 and T3, which are shown in Validation of expressions of the biomarker changes The protein markers at the end of CPB (T2) may be more relevant to CPB-related lung injury and conducive to the prediction of the early stage of CPB-ARDS.Therefore, the top four protein biomarkers in the XGBoost model at T2 were selected for quantitative verification by ELISA in another independent cohort.Three proteins (TXNDC5, CTSL, and NPC2) showed statistically significant differences between CPB-ARDS and non-ARDS patients within Cohort 2, except CD56 (Fig. 4A-D).Univariable logistic regression showed that high concentrations of TXNDC5, CTSL, and NPC2 were associated with a high risk of ARDS (Table A.9, Supplemental Digital Content 1, http://links.lww.com/JS9/A835).We next determined whether altered plasma levels of TXNDC5, CTSL, and NPC2 were related to the reduction in PaO 2 /FiO 2 (PF ratio) and the extension of intubation retention time as a consequence of CPB surgery.The Spearman correlation coefficient results showed that the expression levels of NPC2, TXNDC5, and CTSL were positively correlated with the intubation retention time (Fig. 4E-G) and negatively correlated with the minimum PF ratio (Fig. 4H-J). Nomogram-based prediction model for CPB-ARDS The clinical characteristics of the 150 patients in Cohort For the development of the combined prediction model, three significantly altered proteins (TXNDC5, CTSL, and NPC2) and two significant clinical predictors (MRBC and CPB time) were reserved as a candidate pool.The number of protein and clinical predictors is 5, which is less than 1/20 of the sample size, and could efficiently avoid overfitting.After the SLR, the CAPS model was developed as a simple-to-use nomogram (Fig. 5A).In addition, we built an operation interface on a web page (https://wy-wuhanunion.shinyapps.The corresponding ROC curves showed that the AUC of the CAPS was higher than that of the clinical model (0.852 vs. 0.767, P-value = 0.024), suggesting a better performance in discrimination (Fig. 
5B).The internal validation of the nomogram was performed with 3-fold, 5-fold, and 10-fold cross-validations, and the AUC of the CAPS model was also more than 0.830 (Table 2).CAPS also exhibited reliable calibration performance, as evidenced by a Brier Score of 0.136 (95% CI: 0.102-0.170)(Fig. 5C).The CAPS showed good performance, with a precision of 83.33%, a sensitivity of 93.33%, a specificity of 61.16%, and an F1 score of 85.05% (Table 2).According to the DCA of the two prediction models, the net benefit for the CAPS model was larger than that for the traditional clinical model (Fig. 5D).CIC analysis of CAPS visually showed that the nomogram had a superior overall net benefit within the wide ranges of the threshold probabilities and impacted patient outcomes (Fig. 5E).External validation performance on Cohort 3 was maintained in model discrimination with an AUC of 0.820 (95% CI: 0.685-0.955)(Fig. 5F), and calibration with a Brier Score of 0.177 (95% CI: 0.147-0.206)(Fig. 5G).The data of predictors in the validation dataset are reported in Table A.12 (Supplemental Digital Content 1, http://links.lww.com/JS9/A835). Altered expression of proteins in lung tissues from LIRI rats To gain insight into the function of the three protein predictors in LIRI injury and the protein expression in lung tissue, molecular biology approaches were used in the LIRI rat model.First, male SD rats were randomly assigned to four groups to observe the temporal trend of lung injury after ischemia-reperfusion. H&E staining showed lung injury after 2, 6, and 24 h of reperfusion (Fig. 6A).In addition, the lung injury score and number of inflammatory cells after 6 h of reperfusion were higher (Fig. 6B, C).Moreover, we found an imbalance in oxidative stress mediators and inflammatory markers ( We therefore conducted an LC-MS/MS-based proteomic analyses for proteomic profiling at the tissue level with rat lung samples of LIRI after 6 h of reperfusion and sham groups.For the 3 protein predictors of the CAPS model in the clinical cohorts, CTSL and TXNDC5 were also significantly upregulated in the LIRI lung tissue proteomic results (Fig. 6F).Immunoblot and immunofluorescence analyses showed that the dysregulation of CTSL and TXNDC5 in the LIRI lung tissue was consistent with the proteomic data (Fig. 6I-L). Discussion Our integrative clinical and animal experimental study reveals the vital role and clinical relevance of CTSL, TXNDC5, and NPC2 during CPB-related lung injury, and a novel prediction model of CPB-ARDS is established and validated with protein biomarkers and clinical features. First, a nontargeted proteomics study identified DEPs at multiple timepoints; then, the targeted proteins at the end of CPB were validated in a new cohort, in which CTSL, TXNDC5, and NPC2 emerged as strong predictors of ARDS within 3 days after CPB operation.The CAPS model combining clinical and protein data was established for the prediction of ARDS after CPB surgery.The final model generally validated well in both internal and external validation.Furthermore, the LIRI rat model confirmed increased levels of CTSL and TXNDC5 protein in the lung tissue in the proteomic, immunoblot, and immunofluorescence analyses, suggesting that loss-of-function or gain-of-function experiments are needed to demonstrate that the protein predictors are potential regulators in CPB-related lung injury. 
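To make the evaluation quantities used above concrete (cross-validated AUC, Brier score, and the net-benefit quantity behind the decision curves), a self-contained sketch on synthetic data follows; it mimics the structure of the CAPS model (three protein and two clinical predictors) but uses none of the study data.

```python
# Illustrative CAPS-style evaluation: logistic regression on five predictors with
# cross-validated AUC, Brier score, and decision-curve net benefit,
# where net benefit = TP/n - FP/n * pt/(1 - pt) at threshold probability pt.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score, brier_score_loss

def net_benefit(y_true, y_prob, pt):
    treat = y_prob >= pt
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    n = len(y_true)
    return tp / n - (fp / n) * pt / (1.0 - pt)

rng = np.random.default_rng(42)
n = 150
X = rng.normal(size=(n, 5))   # TXNDC5, CTSL, NPC2, CPB time, MRBC (placeholders)
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ np.array([1.0, 0.8, 0.6, 0.7, 0.9]) - 0.7))))

aucs, briers, nbs = [], [], []
for train, test in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    prob = LogisticRegression().fit(X[train], y[train]).predict_proba(X[test])[:, 1]
    aucs.append(roc_auc_score(y[test], prob))
    briers.append(brier_score_loss(y[test], prob))
    nbs.append(net_benefit(y[test], prob, pt=0.2))
print(f"5-fold AUC {np.mean(aucs):.3f}, Brier {np.mean(briers):.3f}, net benefit@0.2 {np.mean(nbs):.3f}")
```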
Traditional studies in ARDS treatment typically identify and recruit patients well into the exudative phase of ARDS, a point at which the trajectory of lung injury may be either irreversible or at least extremely difficult to modify.Importantly, ARDS prevention measures must start earlierin the emergency or operating room for patients undergoing CPB who are at great risk of ARDS.The prediction system in our study was comprised of two parts: one focused on perioperative clinical risk factors, one on easily available biomarkers (plasma proteome).Previous studies on ARDS prediction only focused on clinical risk factors and physiologic markers, including the Lung Injury Prediction Score (LIPS) [23] and Surgical LIPS [31] .Moreover, due to different target populations, diagnostic criteria, and risk factors, the previous prediction score of ARDS (such as LIPS, surgical LIPS, etc.) may not be fully applicable to the prediction of ARDS in patients with special surgery, such as CPB-ARDS.We developed the CAPS model that combined proteins and the typical clinical factors of CPB-ARDS reported in previous studies, and compared this model to the model with typical clinical factors alone.Protein biomarkers could theoretically improve ARDS prediction, but a pragmatic approach incorporating both clinical and protein variables has yet to be developed, especially for postoperative ARDS.Combining angiopoetin-2 with LIPS resulted in a modest improvement in predicting subsequent ARDS compared to either method alone (AUC 0.84 versus 0.74) [32] .The combination of serum Parkinson's disease 7 and clinical prediction scores ameliorated the prediction accuracy for ARDS in populations with severe sepsis/septic shock [33] .Compared with the prediction model based on the typical clinical risk factors alone that was established in our study, the CAPS model also showed an improvement in discrimination performance (AUC: 0.852 vs. 0.767, P-value = 0.024) and clinical benefit from the DCA results.Furthermore, the CAPS model exhibited good external validity by reproducing its predictive performance in an independent external dataset.The discriminative ability of the model as measured by the AUC was fair and very close to that reported in the development study.The small decrease in the AUC was expected as we applied an existing prediction model in a new population.Brier Scores, which are under 0.25, indicated reasonable calibration in both the development and the validation cohort.These results suggest that the model performed well in the development dataset may also be applicable to other populations, and be generalizable to a range of postoperative care settings following CPB surgery. On the other hand, the unique and novel prediction ability of TXNDC5, CTSL, and NPC2 may enrich our understanding of the pathogenesis of CPB-ARDS.Recent studies have reported the dual role of TXNDC5 as a novel biomarker and mediator in multiorgan fibrosis, including pulmonary fibrosis [34] and cardiac fibroblasts [35] .TXNDC5 is highly upregulated in both human and mouse fibrotic lungs and promotes pulmonary fibrosis by enhancing TGFβ signaling activity [34] .However, evidence on the molecular role of TXNDC5 in ARDS is sparse.In the current study, high levels of TXNDC5 were found in the plasma of ARDS patients after CPB, and in the lung tissue of rats after ischemia/ reperfusion. 
The circulating and tissue levels of CTSL were elevated after SARS-CoV-2 infection [36] , suggesting that CTSL is likely to be a potential therapeutic target for blocking viral entry [36,37] .CTSL has been linked to IR injury, including renal IR injury [38] and myocardial IR injury [39] .Consistent with previous studies, we also reported the crucial role of CTSL in the plasma of CPB-ARDS patients and the lung tissue of LIRI rats. NPC2 is a small glycoprotein resulting from mutations in the NPC2 gene that lead to an abnormal increase in intracellular cholesterol [40] .A recent quantitative proteomic analysis showed a significant increase in the level of plasma NPC2 in pneumonia patients, sepsis patients, and patients who subsequently developed septic shock or died within 30 days [41] .Our data showed a significant accumulation of NPC2 after CPB in the plasma of CPB-ARDS patients.The possible causes could be increased synthesis and secretion by the liver [42] and reduced renal degradation or clearance [41] . Strengths and limitations Our study has numerous strengths.First, the study originality resides in its design (going from supervised bedside-omics screening to the target organ and bench analysis), involving two different species (humans and rats).Second, the cases and controls in the nested cohort study were selected from the same cohort.The population is therefore homogeneous and comparable, which can better control the selection bias.In addition, nested cohort studies are more economical and time-saving than cohort studies, as not all biological samples in the cohort need to be detected.Third, we collected both clinical and protein data and developed a machine learningbased prediction system with better performance than traditional clinical factor models.Finally, these DEPs at both preoperative and postoperative timepoints indicated early dynamic changes in protein markers. There are also some limitations.The first limitation of this study derives from its single-center design.The fact that biomarkers from the proteomics results were further validated by ELISA using a larger cohort contributed to the confidence in the results.Second, the number of DEPs in the plasma proteomics of this study was limited.For the purpose of early prediction, we detected proteomic differences before the onset of ARDS, in which the changes in protein expression may be at the early stage.The third limitation comes from the absence of lung tissues from patients, which precluded us from assessing the differential expression of biomarkers in the lung.Although, we established rat LIRI models and detected the expression of related proteins in rat lung tissue, we could not completely restore the clinical background of patients undergoing CPB. Conclusion In conclusion, our study is one of the few proteomics studies to examine predictive biomarkers in ARDS, establish and externally validate a prediction model for the prediction of ARDS after CPB surgery.The use of a composite of clinical and protein predictors may play an important role in early therapeutics or preventative approaches for ARDS.Additionally, the early increase in TXNDC5 and CTSL may be related to endothelial and vascular injury and provides potential therapeutic targets for this syndrome.Further explorations are needed to elucidate the implicated mechanisms of predictive biomarker interactions underlying CPB-ARDS. 
Ethical approval For the clinical study protocol, it was set in compliance with Helsinki Declaration and was approved by the Institutional Review Board of the Wuhan Union Hospital (Approval No. 20200518) on 13 January 2021. 3 (Supplemental Digital Content 1, http://links.lww.com/JS9/A835) with normalized expression values.The total sample protein, peptide overview, and more details on the quality control and analysis of the proteomics results are presented in Figure A.2 (Supplemental Digital Content 1, http://links.lww.com/JS9/A835).Furthermore, principal component analyses showed the spatial distribution of the quantitative information of two groups of proteins, indicating the protein alterations in CPB-ARDS patients against non-ARDS (Fig. A.3, Supplemental Digital Content 1, http://links.lww.com/JS9/A835).The volcano plots showed the molecular alterations of 9, 29, and 35 proteins in CPB-ARDS versus non-ARDS patients at T1, T2, and T3, respectively (Fig. 2 A, B, C;F, C and P-values are shown in Table A.4, Supplemental Digital Content 1, http://links.lww.com/JS9/A835).The quantitative differences in DEPs among each sample from the CPB-ARDS and non-ARDS groups are shown in the heatmap (Fig. A.4, Supplemental Digital Content 1, http://links.lww.com/JS9/A835). io/ CAPS_DynNomapp_WY/) to calculate the probability of CPB-ARDS (Fig. A.10, Supplemental Digital Content 1, http://links.lww.com/JS9/A835).By entering a patient's clinical characteristics and concentration of target proteins, the user can obtain the predictive probability of CPB-ARDS.The adjusted parameters of the proteins and clinical predictors in the CAPS model are presented in Table A.11 (Supplemental Digital Content 1, http://links.lww.com/JS9/A835). Figure 2 . Figure 2. The DEPs among CPB-ARDS and Non-ARDS patients in Cohort 1.The volcano plot of protein expression in CPB-ARDS patients compared to Non-ARDS patients at T1 (A), T2 (B), and T3 (C); The top three GO and KEGG enriched items at T2 (D), and T3 (G); The PPI networks from Genemania database at T2 (E), and T3 (H); The PPI networks from String database at T2 (F), and T3 (I). Figure 3 . Figure 3.The machine learning-based inference of biomarkers strongly altered at the end of CPB in Cohort 1.The XGBoost results for protein features at T2 (A); Receiver operating characteristic curve (B); Confusion matrix (C); The relative intensity of 11 protein features among Non-ARDS and CPB-ARDS groups at T2 (D-N).ROC, receiver operating characteristic curve. Figure 4 . Figure 4. ELISA analyses for the targeted proteins and the correlation of the protein concentration with values of a panel of clinical parameters in Cohort 2.The targeted protein concentration among CPB-ARDS and Non-ARDS patients for TXNDC5 (A), CTSL (B), NPC2 (C), and CD56 (D); The results of spearman correlation analyses of concentration of TXNDC5 (E), CTSL (F), NPC2 (G) and intubation retention time; The results of spearman correlation analyses of concentration of TXNDC5 (H), CTSL (I), NPC2 (J) and minimum PF ratio.PF ratio, PaO 2 /FiO 2. Figure 5 . Figure 5. 
Nomogram-based CAPS prediction model for CPB-ARDS.The nomogram of CAPS composed of clinical and protein factors at T2 in Cohort 2 (A).Draw a vertical line from the corresponding axis of each factor to the points axis to acquire the point of this factor.Make a summation of the points for each factor to yield a total score, and the probability of CPB-ARDS could be estimated by projecting the total score to the lower probability axis.The blue wavy line on each axis represented the data distribution of this factor; Receiver operating characteristic curve for the CAPS model and clinical factor model in Cohort 2 (B); Calibration plot of the CAPS model with a 1000 repetition bootstrap in Cohort 2 (C); Decision curve of the two prediction models showed the net benefit for the CAPS model was larger than that for the traditional clinical model (D); Clinical impact analysis curve of CAPS visually showed the number of people classified as positive (high-risk) by the model and the number of true positive people under each threshold probability (E); Receiver operating characteristic curve for the external validation of CAPS model in Cohort 3 (F); Calibration plot for the external validation of CAPS model in Cohort 3 (G).CPB time: cardiopulmonary bypass time; MRBC: massive transfusion of red blood cells; CAPS: the combined CPB-ARDS prediction score; ROC: receiver operating characteristic curve; 95% CI: 95% confidence interval. Figure 6 . Figure 6.The expression and function of the targeted protein predictors in LIRI rat model.The degree of lung injury after 2, 6, and 24 h of reperfusion, including the hematoxylin-eosin staining (A), lung injury score (B), the amount of inflammatory cells (C), and the expression of MPO (D); The immunoblot of Occludin and immunofluorescence of ZO-1 (E); The volcano plot of altered protein expression in lung tissue of LIRI rats (F); The GO and KEGG enriched items (G, H); The Immunoblot (I) and quantification of the signal of the protein expression of TXNDC5 (J), and CTSL (K) in the lung tissues; The immunofluorescence of the protein expression of TXNDC5 and CTSL (L) in the lung tissues.Each bar represents the Mean ± SD; ns: P ≥ 0.05; *P < 0.05; **P < 0.01; ***P < 0.001. 2 are reported in Table 1.After preliminarily testing the differences among the CPB-ARDS and non-ARDS groups within Cohort 2, the clinical factors with P-values <0.05 included age, anesthesia time, CPB time, ultrafiltration volume, cardioprotective fluid volume, volume of plasma transfusion, and MRBC.After multivariable SLR, MRBC, CPB time, age, and ultrafiltration volume constituted a clinical prediction model (Fig. A.9, Supplemental Digital Content 1, http://links.lww.com/JS9/A835), with an AUC (area under the receiver operating characteristic curve) of 0.767.The SLR coefficient results showed that MRBC and CPB time were statistically significant clinical predictors (Table A.10, Supplemental Digital Content 1, http://links.lww.com/JS9/A835). Table 1 Clinical factors among CPB-ARDS and non-ARDS patients within Cohort 2. 
a Descriptive statistics are reported as mean ± SD or median (lower quartile, upper quartile) for continuous variables, and as frequency (percentage) for categorical variables. b Comparison of the two groups was performed using the Student's/Welch's t-test or Wilcoxon rank sum exact test for continuous data, and Pearson's χ2-test or Fisher's exact test for categorical data. c Complex surgery includes multivalvular surgery, or valvular surgery combined with coronary artery surgery. ALT, alanine aminotransferase; AST, aspartate aminotransferase; CPB, cardiopulmonary bypass; MRBC, massive transfusion of red blood cells; PF, PaO2/FiO2; RBC, red blood cells. Table 2 The results of F1 score, precision, sensitivity, specificity, and AUC (95% CI) after N-fold cross-validation of the clinical factor prediction models and the CAPS models. AUC, area under the receiver operating characteristic curve; CAPS, the combined CPB-ARDS prediction score; CPB time, time of cardiopulmonary bypass; MRBC, massive transfusion of red blood cells.
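For reference, the quantities reported in Table 2 are standard functions of a binary confusion matrix; a generic sketch follows (the counts are arbitrary placeholders, not the study results).

```python
# Generic definitions of the Table 2 metrics from a binary confusion matrix.
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    sensitivity = tp / (tp + fn)            # recall / true positive rate
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)              # positive predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "F1": f1}

print(classification_metrics(tp=45, fp=9, tn=74, fn=5))
```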
2023-08-03T06:17:11.415Z
2023-08-01T00:00:00.000
{ "year": 2023, "sha1": "00aabf45c2aa953012cdf579209a2dffff30158c", "oa_license": "CCBYNC", "oa_url": "https://journals.lww.com/international-journal-of-surgery/Abstract/9900/Early_plasma_proteomic_biomarkers_and_prediction.529.aspx", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "fa523a829e87c4f4417fa6c392085eb23d7fde08", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
195769130
pes2o/s2orc
v3-fos-license
Quantifying Biogenic Versus Detrital Carbonates on Marine Shelf: An Isotopic Approach The terrigenous sedimentary budget of passive margins, records variations in past sedimentary fluxes, and thus can be used to infer past variations of Earth surface deformation processes or climate change. Accurate estimates of sediment fluxes over various times and spatial scales are therefore crucial. Traditionally, offshore sediment volume determination only considers siliciclastic accumulation, the carbonate fraction (i.e., CaCO 3 ) being considered only as in situ production. Here we propose a new geochemical methodology to decipher and quantify the number of detrital carbonates in comparison to in situ produced biogenic carbonates. This isotopic approach enables considering the export of detrital carbonates and investigating its effect on sediment budgets. This study, located in the Gulf of Lion, is based on a 300 m long sediment borehole located near the shelf break and covering the last 500 000 years (i.e., five glacial-interglacial periods). Strontium isotope ( 87 Sr/ 86 Sr) of carbonate fractions (0.70809 to 0.70858) are significantly less radiogenic than modern seawater (i.e., 0.7092) and show fluctuations in agreement with stratigraphic and climatic variations. These results suggest an unsuspected high export of detrital carbonates from the catchment area during both glacial (between 55 and 85% of the sedimentary carbonate fraction) and interglacial (between 30 and 50%) conditions.
Thus, not only do detrital carbonate fluxes need to be factored into sediment flux calculations, but these results also suggest that detrital carbonate components could potentially have a strong influence on bulk carbonate 87 Sr/ 86 Sr ratios when not obtained from micro drilled biogenic carbonates, such as the entirety of the Precambrian Sr chemostratigraphic record. INTRODUCTION An extensive dataset has been collected over the last decade on marine carbonates and fossils, to document past variations in the strontium isotopic composition record ( 87 Sr/ 86 Sr), as a tool to reconstruct changes in the seawater composition through Earth's history. These past variations are of interest for two reasons: first, 87 Sr/ 86 Sr ratios measured on marine carbonate offer a widely used chronostratigraphic tool (SIS -Strontium Isotope Stratigraphy; for reviews of SIS the readers should referred to Elderfield, 1986;McArthur, 1994;Veizer et al., 1997Veizer et al., , 1999McArthur et al., 2012); second, secular changes in Sr isotope composition provide information about the geochemical cycling of strontium in the ocean, weathering processes, hydrothermal circulation, and carbonate dissolution at the sea-floor (Burke et al., 1982;De Paolo and Ingram, 1984;Veizer, 1989;Prokoph et al., 2008;Allègre et al., 2010 among others). Since the rapid increase of calcifying organisms (i.e., Mesozoic), the SIS method relies on two assumptions: (i) strontium isotopes composition measurements from well preserved, non-altered, shell material (i.e., bivalves, rudist, belemnites, planktonic foraminifera) are assumed to reflect the seawater Sr isotope composition from which they precipitated (Veizer, 1989;McArthur et al., 2012); (ii) the world's ocean is homogenous with respect to 87 Sr/ 86 Sr, and always has been. Such uniformity is expected because Sr residence time in the ocean (4 Myr; Broeker, 1963;Goldberg, 1963;Hodell et al., 1990) is far longer than their mixing time (∼10 3 years). Since the 1980s, the SIS tool relies on the extensive compilation of data, cross calibration between carbonate producers (ammonites, foraminifera, calcareous nannofossils) and/or other geochemical proxies (δ 13 C, δ 18 O). Extensive compilations of measured 87 Sr/ 86 Sr in marine material yield to the development of numerical age determination tools (McArthur et al., 2012 for a thorough review). The degree to which SIS numerical dating is correct mostly depends on the slope and the accuracy of the age model during the given time interval, as well as the quality and state of preservation of the studied material. Given the difficulty of dating sedimentary material, especially during the Paleozoic and Precambrian era, it could be tempting to use a "bulk" SIS approach by measuring the 87 Sr/ 86 Sr ratio of the entire carbonate fraction preserved in the sediment. However, such a carbonate component may not necessarily entirely be produced in situ and could include a significant detrital proportion. If so, not only would the SIS-derived ages be biased, but the inferred flux of detrital material to total sediment would also be underestimated. 
This is of great importance when calculating sedimentation rates in stratigraphic simulations studies (Allen, 1974;Castelltort and Van Den Driessche, 2003;Allen, 2008;Armitage et al., 2011Armitage et al., , 2013Simpson and Castelltort, 2012;Romans et al., 2016); but also when reconstructing temporal variations in the marine 87 Sr/ 86 Sr or to understand the relative Sr fluxes related to continental weathering versus hydrothermal inputs over time (e.g., Burke et al., 1982;Veizer, 1989;Veizer et al., 1999;Halverson et al., 2007;Prokoph et al., 2008). In this study, we examined the Sr isotope compositions preserved in the sediment carbonate fractions from the Gulf of Lion which were deposited during one glacial advance and the following retreat between 160 to 120 ka, i.e., over the MIS 6 to MIS 5 transition. The goals of this exploratory project were first to test whether detrital carbonates could be preserved in shelf accumulation and developed an isotopic approach that could help in their quantification. We secondly explore their potential effect on sedimentary fluxes calculations. GEOLOGICAL BACKGROUND AND ANALYTICAL RESULTS The Gulf of Lion (GoL), located in the North-Western Mediterranean, is characterized by a wide continental shelf (70 km) that was sub-aerially exposed during glacial periods over the Late Quaternary period (Rabineau et al., 2005). The sedimentation is mainly dominated by the Rhône River inputs which currently provide about 80% of the total sediment flux (Aloisi et al., 1977;de Madron et al., 2000;Molliex et al., 2016). Rivers from the Pyrenees and Languedoc (Herault, Orb, Aude, Agly, Tech and Têt; Figure 1) supply the remaining fraction. The GoL catchment is composed of (i) crystalline rocks, located in mountainous areas (Inner Alps, Massif Central, Pyrenees), and (ii) a large part of carbonated rocks, mostly marl and limestone from the Mesozoic era (Jurassic and Cretaceous) located in Alpine foreland. Some Cenozoic carbonates (Eocene and Miocene), mostly bioclastic and continental sandy limestone, are also present in the downstream part of the catchment and in the foreland of the Alps and Pyrenees and (iii) Pliocene-Quaternary formations which consist of fluvial deposits . This study is based on sediment samples collected from borehole PRGL1-4 (Figure 1), drilled in the framework of the EU PROMESS project 1 , which sampled a 300 m long continuous record spanning the last five glacial-interglacial cycles. The sedimentary succession consists of five progradational units related to the 100-kyr glacio-eustatic cyclicity (Rabineau et al., 2005(Rabineau et al., , 2006. In addition to the moving (regressiontransgression) of the shoreline and associated sedimentary environments, the sea-level strongly controls the connection of the riverine inputs with the upper slope setting. As a consequence, the sediment column at PRGL1-4 is essentially composed of fine siliciclastic grains detrital sediment with several interbedded cm-thick sandy-size layers, made mostly of foraminifera shells accumulation, marking the periods of shelf maximum flooding during interglacial periods (Figure 1; Sierro et al., 2009;Frigola et al., 2012). The stacking of 100kyr sequences is favored by a high subsidence rate in that area (Rabineau et al., 2014). Sediment provenance studies indicate a predominance of Rhône river sediment (Revillon et al., 2011). 
These studies however, only considered siliciclastic material and, no reliable information on carbonate export is available, although carbonated rocks constitute more than 50% of the Gulf of Lion catchment area, and 40% of the eroded volumes . A total of 12 sediment samples from PRGL1-4 were selected along the glacial retreat from MIS 6 (i.e., 160 ka) toward the MIS 5 climatic optimum (i.e., 120 ka), together with 10 samples from modern Rhône river tributaries riverbeds (see Figure 1 for location). In the present study we analyzed the Sr isotope compositions, trace elements, and CaCO 3 contents of bulk carbonate samples extracted (i.e., leached) from both marine (i.e., PRGL1-4) and river sediments (i.e., from very coarse sand to clay; Supplementary Table S1), using 5% acid acetic digestion. Strontium was isolated from the matrix by column chromatography using a Sr-Spec resin (Eichrom R ) prior to analysis by TIMS (Thermo Fisher Scientific TRITON) at the Pôle Spectrometrie Océan (Brest, France). Purified Sr were loaded on single W and measured on static mode. All measured Sr ratios were normalized to 86 Sr / 88 Sr = 0.1194. During the course of analysis, Sr isotope compositions of standard solution NBS987 gave 87 Sr/ 86 Sr = 0.710259 ± 7 (2σ, n = 9, recommended value 0.710250). Total procedural blanks were <200 pg of Sr and therefore negligible in all cases (see Supplementary Material for a complete description of the methodology). Throughout the studied interval, the Sr isotopic compositions of the carbonate fractions show significant variations from 0.70809 to 0.70858, whereas carbonate contents vary from 29.5 to 42.5 wt.%, (Figure 2 EVIDENCE FOR A DETRITAL CARBONATE INPUT Seawater 87 Sr/ 86 Sr variations are often used to infer changes in the global strontium geochemical cycles, long-term variations of carbonate rocks erosion, or variations in the marine strontium reservoir through time. Most recently, 87 Sr/ 86 Sr variations in carbonates have been also used to establish continuous highresolution seawater curves for the last 500 Myr (Howarth and McArthur, 1997;McArthur et al., 2001 among others). In this case-study, sediment was deposited offshore between 120 and 160 ka, a time-span shorter than the residence time of strontium in the ocean, i.e., 4 Myr (Broeker, 1963;Goldberg, 1963;Hodell et al., 1990) and during which the Mediterranean basin remained well connected to the open world ocean via the Strait of Gibraltar (Hernández-Molina et al., 2014;Rohling et al., 2014). Therefore, variations in the carbonate 87 Sr/ 86 Sr ratios analyzed here cannot reflect changes in seawater Sr isotope composition and the observed deviation between our 87 Sr/ 86 Sr results from FIGURE 2 | (A) 87 Sr/ 86 Sr measured on pure biogenic foraminifera calcite (yellow stars, this study -PRGL 1-4), (B) δ 18 O G . bulloides (black line, Sierro et al., 2009) and 87 Sr/ 86 Sr measured on bulk carbonate (red line, this study -PRGL 1-4), (C) Red Sea sea level record (coreKL09, black curve, Grant et al., 2014) and %CaCO 3 measured on bulk carbonate (orange line, this study -PRGL 1-4). The black arrow indicates evidence of extensive melting of the Alpine ice-cap and local glaciers (Toucanne et al., 2009;Bickel et al., 2015). the present-day seawater Sr isotope composition has to be related to another process. 
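As a brief technical aside on the normalisation step mentioned above ("normalized to 86 Sr / 88 Sr = 0.1194"): this internal normalisation is commonly implemented with an exponential mass-fractionation law. The sketch below shows that common form with standard atomic masses; it is an assumption about typical practice, not necessarily the exact routine used at the Pôle Spectrometrie Océan.

```python
# Hedged sketch of exponential-law mass-bias correction for TIMS Sr data:
# beta is derived from the measured 86Sr/88Sr against the canonical 0.1194,
# then applied to the measured 87Sr/86Sr.
import math

M86, M87, M88 = 85.9092602, 86.9088775, 87.9056125  # atomic masses (u)

def correct_87_86(meas_87_86: float, meas_86_88: float, true_86_88: float = 0.1194) -> float:
    beta = math.log(true_86_88 / meas_86_88) / math.log(M86 / M88)
    return meas_87_86 * (M87 / M86) ** beta

# Example: a run fractionated so that the raw 86/88 reads 0.1190 instead of 0.1194.
print(round(correct_87_86(0.70980, 0.1190), 6))
```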
In other words, if the carbonate components preserved in bulk marine samples were purely biogenic then their isotopic composition should be similar to the seawater: 0.709170 (±4.10 −6 , Mokedem et al., 2015;Meknassi et al., 2018 and references therein). In order to further confirm the invariance of the sea-water isotopic composition we performed laserablation -MC-ICPMS Sr isotope composition analyses on pure foraminifera and bivalves calcites on the most radiogenic samples (i.e., 120 ka, S.90-21/22) and the least radiogenic samples (i.e., 142 ka, S.110-59/60). Both samples are statistically indistinguishable from modern sea water composition with isotopic values of 0.70919 ± 0.0001 (2σ) and 0.70921 ± 0.0002 (2σ) respectively. This clearly demonstrates the continuous connectivity with the global ocean and the fact that short timescale variability recorded in our bulk 87 Sr/ 86 Sr cannot be explained by changes in the isotopic composition of the parent fluid (i.e., seawater). How then can such variability be explained? In the Gulf of Lion context, the most likely mechanism to explain the observed data is through the export of detrital carbonates from the catchment area. Indeed, carbonates represent about 50% of the drainage area (relics of the Tethys Ocean, see Figure 1). Recent studies show that long-term denudation of carbonate rocks within the GoL catchment is significant and slope-dependent (Godard et al., 2016;Thomas et al., 2017), enhancing the transfer of carbonates trough rivers' suspended material (i.e., 30 to 60% of the coarse fraction; Pont et al., 2002). Our river samples contain between 2.0 and 40.3 wt.% of carbonates (Supplementary Table S1 and Supplementary Figure S1) which strongly support these conclusions. Moreover, the riverine bedloads are characterized by Sr isotope compositions ranging between 0.70741 and 0.70855 (Supplementary Table S1 and Supplementary Figure S1) which is close to the 87 Sr/ 86 Sr ratios of the different carbonates (i.e., Miocene, Eocene, Cretaceous, Jurassic) present in the catchment area (ranging from 0.70685 to 0.70895; Figure 3 and Supplementary Table S2). Thus, our river ratios likely represent a mixture of the different carbonates end-members (of different age) exposed in the catchment areas (Figure 3). The observed inter-river isotope variability may result from different contributions of mechanical and chemical weathering processes in each watershed and/or reflect the relative proportion of each end-member carbonate unit. In this context, the export of detrital carbonates into marine sediment appears as the best mechanism to explain the observed dataset. VARIATION THROUGH TIME: COMPOSITION OR PROPORTION? A clear distinction in 87 Sr/ 86 Sr of leached carbonate fractions is observed between glacial and interglacial intervals (Figure 2) as deduced from the oxygen isotopic curve obtained in planktonic foraminifera (Globigerina bulloides) and associated age-model from the same core (Sierro et al., 2009;Pasquier et al., 2017). The two samples from interglacial MIS 5 are characterized by 87 Sr/ 86 Sr ratios of 0.70838 and 0.70858 (n = 2) in contrast with less radiogenic composition in glacial sediment (MIS 6) which range from 0.70809 to 0.70832 (n = 11, Figure 2). The lowest 87 Sr/ 86 Sr ratios (i.e., 0.70810 ± 0.00001, n = 3) correspond to maximal ice extension and sea-level lowstand. 
Contrary to 87 Sr/ 86 Sr ratios, the %CaCO 3 does not track climatic conditions, and is not modulated by depositional conditions across the termination II (T.II ∼130 ka). Instead, %CaCO 3 shows a gradual rise from approx. 30 wt.% at the onset of MIS 6 (i.e., 160 ka) to 42 wt.% at the end of the penultimate glacial maxima (i.e., 140 ka); then it slightly decreases down to 39 wt.% during the beginning of T.II, before finally rising and reaching a constant value of ∼41 wt.% during the MIS 5 (Figure 2). During glacial times, we observe a greater CaCO 3 proportion of what is unexpected considering the low carbonate productivity observed in the western Mediterranean Sea (Hoogakker et al., 2004;Toucanne et al., 2015). Interestingly, this increase in carbonate content is concomitant with a higher detrital flux (Cortina et al., 2013(Cortina et al., , 2016Pasquier et al., 2017Pasquier et al., , 2018; as the rise around 155 ka is synchronous with an extensive melting episode of Alpine ice-caps and local glaciers (Toucanne et al., 2009;Bickel et al., 2015 and reference therein), (Figure 2). This suggests a significant increase of detrital carbonates exported during glacial conditions. This increase might be due to enhanced mechanical glacial and peri-glacial processes such as frost cracking or ablation by glaciers. The temporal variations in the 87 Sr/ 86 Sr ratios of the carbonate fraction can be explained either by changes in the relative FIGURE 3 | 87 Sr/ 86 Sr evolution of sea-water over the last 210 Ma (black line; Howarth and McArthur, 1997;McArthur et al., 2001), pure biogenic calcite measured on foraminifera shells reflecting the actual sea-water composition. The range of PRGL1-4 leachate deposited over the last 500 kyr is illustrated by the red band and range of rivers sediment in the present-day GoL watershed by a brown band. The main carbonated stratigraphic units and their corresponding Sr isotope compositions in the GoL watershed are also highlighted with horizontal bars corresponding to their mean isotopic composition as defined in Supplementary Table S2. proportions of biogenic versus detrital carbonates, changes in the isotope composition of detrital carbonates exported into GoL sediment or a combination of both processes. In the former, the resulting isotope composition of biogenic plus the detrital carbonate mixing processes is typically related to their respective end-member proportion and compositions. Changes in the 87 Sr/ 86 Sr of the carbonate assemblage could be the natural result of an increase or decrease of biogenic carbonate production related to modification of in situ primary production through time. In shelf environments, the biogenic carbonate content is generally controlled by the surface water carbonate productivity, the total amount of carbonate being further controlled by terrigenous sediment dilution effects (Cremer et al., 1992;Hoogakker et al., 2004;Toucanne et al., 2015), with low biogenic CaCO 3 production during glacial periods and high biogenic CaCO 3 content during interglacial ones. These changes in carbonate production should affect the in situbiogenic versus detrital carbonate proportions, leading to more (less) radiogenic 87 Sr/ 86 Sr during interglacial (glacial) times, as observed in our dataset. However, downcore fluctuations of CaCO 3 do not follow the δ 18 O records. Instead, we observed the most important increase in %CaCO 3 during the glacial sensu stricto period (Figure 2). 
This reveals that, at our site, the %CaCO 3 is not strictly controlled by primary production. Thus, changes in the relative proportion of biogenic carbonates (within the carbonate mixing) cannot on their own explain the observed variation in 87 Sr/ 86 Sr. Interestingly, this increase in %CaCO 3 happened during the period of lower sea level (i.e., glacial maxima), and the less radiogenic values are observed during the lowest sea level conditions (i.e., glacial sensu stricto). At that time, due to sea level fall, PRGL1-4 borehole site is closer to shore (∼10 km) and is therefore more prone to receive and preserve detrital materials. During interglacial times, the sea-level rise and the biogenic carbonate production increase leading to a relative decrease in the proportion of detrital carbonates preserved in PRGL1-4 sediment. This is also observed at finer timescales during the entire penultimate glaciation where variability of 87 Sr/ 86 Sr closely mimics planktonic oxygen isotopes and sea-level reconstruction (Figure 2 and Supplementary Figure S3). Therefore, 87 Sr/ 86 Sr fluctuations may be related to variation in the relative proportion of exported carbonates. Mixing calculations between biogenic and detrital carbonates are used to test this hypothesis. Where ( 87 Sr/ 86 Sr) m represent the Sr isotopic composition measured in PRGL1-4 carbonate fractions; ( 87 Sr/ 86 Sr) sw corresponds to seawater composition and ( 87 Sr/ 86 Sr) d refers to the isotopic composition of the detrital carbonates. In this equation X represents the percentage of in situ biogenic carbonates, and (1-X) the required proportion of detrital carbonates in order to satisfy the isotopic mass balance. Calculation results are shown in Figure 4 where the shaded area illustrates the range of possible 87 Sr/ 86 Sr ratios for the detrital assemblage when considering mixing processes between biogenic carbonates (i.e., 87 Sr/ 86 Sr = 0.70917) and the minimum and maximum 87 Sr/ 86 Sr ratios recorded in the PRGL1-4 sediment carbonate fraction (bottom and top black line, respectively). In this space, the percentage of detrital carbonate can be predicted in order to satisfy the mass balance for a given detrital Sr isotope composition. We also investigated the impact of detrital assemblage isotope composition on the percentage of detrital carbonate required to fulfill the mixing mass balance equation (Figure 4). This detrital assemblage FIGURE 4 | Modeled mixing of a two components system with end-members representing marine biogenic carbonates ( 87 Sr/ 86 Sr = 0.7092) and an unknown detrital source. The black lines are the predicted 87 Sr/ 86 Sr of the detrital end-member that satisfy the isotopic mass balanced, using the 87 Sr/ 86 Sr of the less (bottom boundary) and more (upper boundary) radiogenic PRGL1-4 samples. The gray shaded area shows the range of possible solutions according to the weight percentage of detrital carbonate in the sediment mixing. Colored horizontal bars represent possible detrital isotopic composition based on mean river discharges (dark blue line), mean 87 Sr/ 86 Sr riverbeds composition (dark red line), exposed surface area in the catchment area (gray line), 87 Sr/ 86 Sr of Cretaceous rocks (cyan line), 87 Sr/ 86 Sr of Jurassic (green line), and equal mixing ratio between carbonated rocks within the catchment area (light blue). 
Circles represent % of detrital carbonates in order to obtain the observed 87 Sr/ 86 Sr composition in PRGL1-4 sediment during glacial sensu stricto and sensu lato (i.e., dark and light blue circles, respectively), and interglacial sensu stricto (i.e., red circles). The detrital end-member composition was considered either as fixed, i.e., corresponding to individual carbonate units exposed in the catchment (Figure 4), or as resulting from mixing in rivers. In the latter, we used several multi-component mixing models, using individual river 87 Sr/ 86 Sr compositions and assuming different relative proportions as a function of: (1) river discharges (i.e., 87 Sr/ 86 Sr = 0.70793; dark blue), (2) percentage of the exposed carbonated surface in the catchment area (i.e., 87 Sr/ 86 Sr = 0.70756; gray line), and (3) assuming an equal mixing between all carbonated rocks in the catchment (i.e., 25% of each carbonate unit exposed; 87 Sr/ 86 Sr = 0.7077; light blue), see Figure 4 and Supplementary Table S2. Our results show variation of the relative contribution of detrital carbonates (i.e., %CaCO 3 detrital) over glacial-interglacial cycles (Figure 4). Using the different scenarios presented above, we also suggest that a higher proportion of detrital carbonates is preserved during glacial sensu stricto (55 to 87%) compared to interglacial (31 to 49%) samples, respectively dark blue and red points in Figure 4. We also note that this observation is independent of the isotopic composition of the exported detrital material: whatever the Sr isotope composition of the detrital carbonate fraction is, the required amount of detrital carbonate is always higher during glacial periods. Therefore, 87 Sr/ 86 Sr fluctuations may result from sea-level variability, with sea-level lowstands allowing a better deposition/preservation of detrital carbonates on the GoL's upper slope. Alternatively, if we assume that detrital carbonates are exported at a constant rate, variation in their strontium isotopic composition could also explain our data. As we showed previously, the GoL catchment area is mainly composed of Mesozoic and Cenozoic marine limestones that are characterized by a large range of 87 Sr/ 86 Sr ratios, where Miocene carbonate 87 Sr/ 86 Sr > Eocene carbonate 87 Sr/ 86 Sr > Cretaceous carbonate 87 Sr/ 86 Sr > Jurassic carbonate 87 Sr/ 86 Sr (Figure 3). Changes in the Sr isotope composition of the PRGL1-4 carbonate fraction can thus result from variations in the Sr isotope composition of the detrital end-member. Our data could then indicate sedimentary mixing of biogenic carbonate with less radiogenic detrital carbonates during glacial times, possibly enhanced by greater inputs of Jurassic or Cretaceous units. Indeed, at the time of minimum 87 Sr/ 86 Sr ratio (glacial) our mixing model predicts less radiogenic detrital carbonates, as indicated by the lower boundary of the shaded area (Figure 4). By contrast, more radiogenic detrital carbonates are required during interglacial stages, as indicated by the upper boundary (Figure 4). For instance, if we consider that 90% of the carbonates result from detrital export, then the isotopic composition of the detrital source should have evolved from 0.7080 during the glacial sensu stricto to 0.7085 during the interglacial sensu stricto. Such variability in the detrital 87 Sr/ 86 Sr ratio should be interpreted as the natural response to various mechanical and chemical weathering processes and/or spatial changes in sediment provenance in the catchment.
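Written out with the end-member definitions given in the text (measured, seawater, and detrital 87 Sr/ 86 Sr), the mixing relation and the detrital fraction it implies are as follows; the numerical check uses only values quoted in this study and assumes comparable Sr contents in the biogenic and detrital carbonate end-members.

```latex
% Two end-member Sr mass balance as defined in the text
% (X = biogenic fraction of the carbonate, 1 - X = detrital fraction):
\begin{align}
\left(\tfrac{^{87}\mathrm{Sr}}{^{86}\mathrm{Sr}}\right)_{m}
  &= X\left(\tfrac{^{87}\mathrm{Sr}}{^{86}\mathrm{Sr}}\right)_{sw}
   + (1-X)\left(\tfrac{^{87}\mathrm{Sr}}{^{86}\mathrm{Sr}}\right)_{d}\\[4pt]
1 - X &= \frac{\left(\tfrac{^{87}\mathrm{Sr}}{^{86}\mathrm{Sr}}\right)_{sw}
              -\left(\tfrac{^{87}\mathrm{Sr}}{^{86}\mathrm{Sr}}\right)_{m}}
             {\left(\tfrac{^{87}\mathrm{Sr}}{^{86}\mathrm{Sr}}\right)_{sw}
              -\left(\tfrac{^{87}\mathrm{Sr}}{^{86}\mathrm{Sr}}\right)_{d}}
\end{align}
% Numerical check with the discharge-weighted river end-member (0.70793) and
% seawater (0.70917): the least radiogenic glacial sample (0.70810) gives
% (0.70917 - 0.70810)/(0.70917 - 0.70793) = 0.86, i.e. ~86% detrital carbonate,
% while the most radiogenic interglacial sample (0.70858) gives 0.48 (~48%),
% consistent with the glacial (55-87%) and interglacial (31-49%) ranges above.
```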
The directionality of the relation between sea-level change and 87Sr/86Sr carb observed here, and the timing between the increase in %CaCO3 and the Alpine ice cap collapse around 150 ka, argue for the export and preservation of detrital carbonates from the catchment to the GoL shelf. This provides a powerful way to reconstruct past detrital carbonate exports in Source-to-Sink systems. IMPLICATIONS FOR FLUX RECONSTRUCTIONS Stratigraphic knowledge of the area relies on previous studies based on seismic and PROMESS drilling data in the Gulf of Lion (Rabineau, 2001; Rabineau et al., 2005, 2006; Bassetti et al., 2008; Leroux et al., 2016, among others; see detailed methodology in the Section "Materials and Methods"). We focused our sediment budget on the so-called Sequence 3, in which enough high-resolution seismic data allowed identification of two sedimentary units, U75 and U80, that respectively correspond to an interglacial and a glacial period (Marine Isotope Stages 9 and 8, respectively) (Supplementary Figure S4). Previously reported sediment budgets obtained for these units were first corrected for porosity to calculate "deposited" terrigenous solid volumes. Considering that %CaCO3 reflects in situ biogenic primary production, "deposited" sediment volumes are corrected to obtain the "detrital" sediment budget. In order to investigate the impact of the detrital/biogenic ratio on sediment flux calculation, we then applied the detrital carbonate estimations established for MIS 5 (i.e., 30-50%) and MIS 6 (i.e., 55-85%), assuming they are also suitable for MIS 8 and MIS 9, in order to calculate "true" detrital sediment budgets (Supplementary Figure S5 and Supplementary Table S3). Considering only the uncertainties about detrital carbonate content, we observe that "true" detrital sediment estimates within a single unit yield a systematic increase whatever the climatic conditions. The additional detrital sediment represents an addition of 78 to 130 km3/Myr and 128 to 199 km3/Myr, respectively for MIS 9 and MIS 8, corresponding to +20 to +33% and +23 to +36% detrital fluxes, respectively. Our conclusions imply an overall under-estimate of (detrital) sediment supply in Source-to-Sink studies when carbonate content is not examined. This observation is valid for both glacial and interglacial periods, with variable magnitude, but is high enough to be considered. Consequently, correction for in situ carbonate production appears critical for future quantitative studies in Source-to-Sink routing systems. As the terrigenous sediment budget of passive margin basins records variations in the continental relief, triggered by either deformation or climate, it becomes a major challenge to determine sediment accumulation histories in a large number of basins found in various geodynamic contexts (Guillocheau et al., 2012). Usually, Source-to-Sink studies try to relate significant changes in sediment flux to significant changes in terms of climate or deformation, through the geological history of the studied area. But the concept of "significance" is largely dependent on the uncertainties and the time-resolution of the study. And, as previous authors underlined, "the assessment of the associated uncertainties are as important as the accumulation values themselves." Indeed, if uncertainties are underestimated, some "apparent" changes in sediment fluxes may not be significant in terms of climate or tectonic changes and may lead to misinterpretations. 
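The flux correction described above can be summarised in a short sketch. The deposited volume, %CaCO3 and detrital share used below are illustrative placeholders, not the actual values for units U75/U80; only the arithmetic of the correction follows the text.

```python
# Minimal sketch of the "true" detrital budget correction described above, with illustrative
# numbers only (the actual deposited volumes and %CaCO3 of units U75/U80 are not given here).
# Conventional budgets treat all CaCO3 as in situ biogenic production; the correction adds back
# the share of that CaCO3 that is in fact detrital.

def detrital_budgets(deposited_volume_km3: float,
                     caco3_fraction: float,
                     detrital_share_of_caco3: float) -> tuple[float, float, float]:
    """Return (conventional detrital, corrected detrital, relative increase)."""
    conventional = deposited_volume_km3 * (1.0 - caco3_fraction)
    corrected = conventional + deposited_volume_km3 * caco3_fraction * detrital_share_of_caco3
    return conventional, corrected, corrected / conventional - 1.0

# Hypothetical glacial unit: porosity-corrected volume of 600 km3 with 30% CaCO3,
# of which 55-85% is assumed detrital (the MIS 6 range applied to MIS 8 in the text).
for share in (0.55, 0.85):
    conv, corr, gain = detrital_budgets(600.0, 0.30, share)
    print(f"detrital share {share:.0%}: {conv:.0f} -> {corr:.0f} km3 (+{gain:.0%})")
```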
However, assessing these uncertainties remains difficult. To do so, we need to consider and quantify to what extent each factor (i.e., autogenic and allogenic) impacts the sediment budget measurement, in addition to the uncertainties strictly related to the method itself (e.g., seismic resolution and borehole age uncertainties). Further work is therefore needed to refine our method, particularly to better define the characteristics of the detrital end-member and thereby more accurately estimate its magnitude. Moreover, Source-to-Sink sedimentary systems are important settings of carbon cycling, serving as sites of carbon transfer between terrestrial and marine reservoirs, and as the primary locations for organic carbon burial on Earth (Leithold et al., 2016). Whereas the order of magnitude of the in situ carbonate correction measured in this study appears lower than many other uncertainties in some cases, it would nevertheless be interesting to analyze Sr isotopes in Source-to-Sink studies where in situ carbonate production (an autogenic factor) can "introduce noise, lags and/or completely mask signals of external forcings" (Romans et al., 2016). This may be particularly true for basins with low sediment accumulation rates, where small variations of detrital sediment fluxes, including detrital carbonate, can be significant in terms of climate or tectonic change. The same applies to Source-to-Sink systems (i) exhibiting small sediment supplies, (ii) including few carbonate rocks in the catchment area (i.e., a small detrital carbonate flux toward the sink), or (iii) showing poor preservation of detrital sediment (sediment by-pass). Conversely, Sr isotope analyses may also be relevant for areas where biogenic production (and carbonate preservation) is particularly high or where productivity (and the associated export to the surrounding basin) is very unstable through time, such as a carbonate/mixed platform. We can now wonder (i) how much this under-estimate changes from one basin to another, and (ii) how far this error could have led to previous misinterpretations in terms of "true" changes in sediment history and the processes at their origin, especially for studies exploring glacial and interglacial cycles. Overall, as already postulated by Helland-Hansen et al. (2016), this study highlights the usefulness of developing isotopic tools and using combined approaches to better refine our understanding of Source-to-Sink systems and to unravel past source terrains. IMPLICATIONS FOR DEEP-TIME SR ISOTOPE CHEMOSTRATIGRAPHY Temporal variations in the marine 87Sr/86Sr record globally reflect variations in the relative age-weighted fluxes of continental weathering relative to hydrothermal inputs over time, although other sources such as oceanic islands have to be considered (Revillon et al., 2007). Such variations are linked to supercontinent breakup and assembly and sea-level changes (e.g., Burke et al., 1982; De Paolo and Ingram, 1984; Haq et al., 1988; Veizer, 1989; Veizer et al., 1999; Prokoph et al., 2008) and are used extensively in the Precambrian era. Moreover, in the absence of robust biostratigraphic records and absolute chronological constraints, 87Sr/86Sr records are a critical tool for inter- and intra-basinal correlations in the Precambrian era. However, the absence of calcifying organisms means that all Precambrian (and many early Paleozoic) 87Sr/86Sr values come from microdrilled bulk carbonate samples (e.g., Halverson et al., 2007). 
Most of the available geologic records of marine sedimentary rocks predominantly preserve sediment deposited on the continental paleo-shelf (Peters and Husson, 2017), where carbonate facies are abundant (Grotzinger, 1990; Grotzinger and Knoll, 1999; Higgins et al., 2009). Consequently, bulk samples can include detrital carbonates. Still, if detrital carbonates have been able to reach marine sediments since the emergence of the continents, we may wonder how many carbonate outcrops were likely to be weathered (and their products preserved on the paleo-shelf). Currently, there is no reconstruction of such a flux, but a simple geochemical argument can be made: CO2 has been emitted into the ocean-atmosphere system for as long as volcanic activity has existed, i.e., since the very beginning (Taylor and McLennan, 1985; Jacobsen, 1988; Ying et al., 2011). As CO2 cannot accumulate indefinitely in the atmosphere, it has been removed as carbonate minerals and organic matter for at least the last 3.8 billion years (Schidlowski, 2001). Carbon isotope compilations over the geological record (e.g., Schidlowski, 2001) suggest that approximately 80% of the CO2 source has been removed as carbonates over that entire time. This is a first-order approximation, and there are many details and caveats, but it reveals that carbonates have formed over much of Earth's history. Today most carbonates form biologically, while in the Precambrian and early Proterozoic they formed abiotically (Higgins et al., 2009). In addition, and in contrast with the Phanerozoic, past ocean chemistry (i.e., high silica content) promoted silicification of most Precambrian and early Proterozoic depositional environments (Siever, 1992; Treguer et al., 1995; Hofmann and Wilson, 2007, among others). This means that deep-time carbonates were probably more competent upon uplift and exposure, and therefore were even more likely to be transported to the ocean and "contaminate" the bulk Sr samples. The above considerations suggest that variations in the resulting 87Sr/86Sr values within and between basins could potentially also reflect differential detrital carbonate components, impacting both stratigraphic correlations and our understanding of tectonic evolution over time. Therefore, it seems critical to consider the potential impact that detrital carbonate may have on these records. DATA AVAILABILITY Publicly available datasets were analyzed in this study. These data can be found here: https://www.pangaea.de/?q=PROMESS1. AUTHOR CONTRIBUTIONS VP, SR, and MR conceived the work. VP, SR, LM, and SM organized the sampling. VP and SR carried out the geochemical analyses. VP, SR, EL, SM, and MR wrote the manuscript and the Supplementary Information. All authors discussed the interpretation of the results and contributed to the manuscript. FUNDING This work was supported by the "Laboratoires d'Excellence" LabexMER (ANR-10-LABX-19) that became ISblue (Interdisciplinary graduate School for the Blue Planet, ANR-17-EURE-0015), co-funded by a grant from the French government under the program "Investissements d'Avenir," and by a grant from the Regional Council of Brittany. It was further supported by the CNRS (INSU-Tellus-SYSTER for the Carboflux project), with additional support from the French Actions Marges program. The drilling operation was conducted within the European Commission Project PROMESS (contract EVR1-CT-2002-40024). 
ACKNOWLEDGMENTS The European PROMESS Scientific committee and colleagues at Ifremer are thanked for previous contributions to data acquisition, processing and interpretation, and for permitting resampling of the borehole. The authors warmly acknowledge C. Liorzou and P. Nonnotte, who kindly helped during the analytical preparation of samples and assisted on the ICP-OES and TIMS, respectively. D. Fike is thanked for fruitful discussions and his feedback throughout the entire study, and Itay Halevy and Peter Crockford for providing comments and suggestions regarding the importance of such processes on deep-time reconstructions.
2019-07-03T13:03:57.509Z
2019-07-03T00:00:00.000
{ "year": 2019, "sha1": "b2b7691fe226b2075f568623779131194e73056a", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/feart.2019.00164/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "cd295ea3ae8b16b21a0fec45e196dd2b7afa3278", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Geology" ] }
11519996
pes2o/s2orc
v3-fos-license
Impaired ADAMTS9 secretion: A potential mechanism for eye defects in Peters Plus Syndrome Peters Plus syndrome (PPS), a congenital disorder of glycosylation, results from recessive mutations affecting the glucosyltransferase B3GLCT, leading to congenital corneal opacity and diverse extra-ocular manifestations. Together with the fucosyltransferase POFUT2, B3GLCT adds Glucoseβ1-3Fucose disaccharide to a consensus sequence in thrombospondin type 1 repeats (TSRs) of several proteins. Which of these target proteins is functionally compromised in PPS is unknown. We report here that haploinsufficiency of murine Adamts9, encoding a secreted metalloproteinase with 15 TSRs, leads to congenital corneal opacity and Peters anomaly (persistent lens-cornea adhesion), which is a hallmark of PPS. Mass spectrometry of recombinant ADAMTS9 showed that 9 of 12 TSRs with the O-fucosylation consensus sequence carried the Glucoseβ1-3Fucose disaccharide and B3GLCT knockdown reduced ADAMTS9 secretion in HEK293F cells. Together, the genetic and biochemical findings imply a dosage-dependent role for ADAMTS9 in ocular morphogenesis. Reduced secretion of ADAMTS9 in the absence of B3GLCT is proposed as a mechanism of Peters anomaly in PPS. The functional link between ADAMTS9 and B3GLCT established here also provides credence to their recently reported association with age-related macular degeneration. POFUT2 and B3GLCT act in the endoplasmic reticulum and exclusively modify correctly folded TSRs, providing a quality control mechanism regulating the secretion of TSR-containing proteins [21][22][23] . The specificity of B3GLCT is pre-determined by POFUT2, which modifies Ser/Thr residues (underlined) within the defined consensus sequence CXX(S/T)CXXG in TSRs, which is found in 49 proteins encoded by the human and mouse genomes 24 . Although POFUT2 is related to POFUT1, which adds fucose to epidermal growth-factor-like repeats and is involved in Notch signaling, it has exquisite specificity for TSRs and is not involved in Notch signaling. ASD in PPS likely results from impaired secretion, and thus, a functional loss of one or more TSR-containing proteins indispensable for proper ocular morphogenesis, but which of these proteins are essential in this context is presently undetermined. Furthermore, modification by B3GLCT has only been investigated in a few TSR-containing proteins 21 . Of the 49 TSR-containing proteins having the requisite consensus sequence, ADAMTS (a disintegrin-like and metalloproteinase domain with thrombospondin type 1 motif) proteins are the most numerous, with 26 predicted targets, including 19 secreted proteinases and 7 ADAMTS-like (ADAMTSL) proteins that are not proteinases 25 . ADAMTS proteins have been established to have diverse and crucial roles in development and human disease via analysis of human and animal genetic disorders and engineered mutations 26 . ADAMTS9 and ADAMTS20 are highly homologous and evolutionarily conserved, reflecting duplication of an ancestral gene 27 . Adamts9 LacZ/LacZ embryos and embryos arising from germline Cre deletion of an Adamts9 floxed allele (Adamts9 del/del ) die prior to eye development 28,29 . Adamts20 Belted (bt/bt) mutant mice are normal but for a white spotting defect and delayed closure of the secondary palate that leads to a low incidence of cleft palate 28,30 . Adamts9 haploinsufficient (Adamts9 del/+ or Adamts9 LacZ/+ ) mice survive and are fertile, although they have cardiovascular abnormalities 31 . 
Adamts20 bt/bt ; Adamts9 LacZ/+ and Adamts20 bt/bt ; Adamts9 del/+ mice die at birth as a result of cleft palate 28 . Here, we demonstrate using Adamts9 del/+ mice, that the corneal opacity originally noted postnatally in Adamts9 LacZ/+ eyes 32 is of developmental origin, a result of ASD that includes Peters anomaly and lens abnormalities. Therefore, we investigated O-fucosylation of ADAMTS9 to ask whether its deficiency could be a possible mechanism for ASD in PPS patients. We show that ADAMTS9 modification by B3GLCT is required for its secretion, which together with Peters anomaly observed in Adamts9 +/− mice, and its previously defined roles in cardiac and palate development elicited in combination with ADAMTS20, potentially link ADAMTS9 mechanistically to PPS. Results Adamts9 haploinsufficiency leads to anterior segment dysgenesis (ASD). An Adamts9 germline mutant (Adamts9 del ) was generated using a previously described floxed Adamts9 allele and maintained in the C57BL/6 background 29 . As previously seen in Adamts9 LacZ/+ mice 32 , Adamts9 del/+ mice had corneal opacities with high penetrance (Fig. 1a and Supplementary Fig. S1a), which were discernible as soon as their eyelids opened (~2 weeks of age). Opacities were bilateral in about half the affected mice, and unilateral in the other half with preferential involvement of the right eye ( Supplementary Fig. S1a). The corneal opacities were of variable severity, ranging from faint central clouding to opacity of the entire cornea (Fig. 1a). A central corneal contour anomaly was visible externally in some eyes with corneal opacity (Fig. 1a, central panels). Moreover, Adamts9 del/+ eyes were significantly smaller than Adamts9 +/+ eyes (Fig. 1b). Optical coherence tomography (OCT) at 3 weeks of age showed a flattened cornea and a shallow anterior chamber in Adamts9 del/+ mice compared to the wild-type littermates, comprising little more than a potential space in the most severely affected eyes (Fig. 1c,d). Although the anterior chamber volume was reduced in most Adamts9 del/+ eyes, comparison with wild-type chamber volumes did not reach statistical significance owing to the considerable variability of ASD and the presence of buphthalmos (enlarged eye) in at least one Adamts9 del/+ eye (Fig. 1e). Peters anomaly, never seen in the wild-type eyes, was identified by OCT in 3-week-old Adamts9 del/+ eyes having the central corneal contour anomaly and confirmed histologically by observation of lens-cornea adhesion with disrupted posterior corneal layers, and iridocorneal adhesions (anterior synechiae) (Fig. 1c,f, Table 1). Adamts9 del/+ eyes had a smaller lens (Fig. 1g) and some eyes showed ciliary body hypoplasia/dysplasia, signs of vacuolar cataract and persistence of lens fiber nuclei in the posterior lens, none of which were seen in wild-type littermates ( Table 1, Supplementary Fig. S1b-d). No histological anomalies of the retina, RPE and choroid were evident in Adamts9 del/+ eyes. Adamts9 is expressed at specific sites during mouse eye development. To determine the role of Adamts9 in these anomalies, we defined its spatio-temporal expression pattern in the eye during development (Fig. 2a). At gestational age 10.5 days (E10.5), i.e., shortly after lens vesicle formation, Adamts9 (red signal) was expressed strongly in the optic cup, preponderantly at its anterior pole, with weak Adamts9 expression in the invaginating lens vesicle. 
E11.5 eyes showed the strongest Adamts9 expression of all developmental stages analyzed, localizing mRNA mostly in the anterior pole of the developing retina/optic cup and, more faintly, in the hyaloid vascular plexus and the lens vesicle. At E12.5 and E13.5, strong Adamts9 expression persisted in the anterior pole of the retina. After E12.5, Adamts9 mRNA was no longer detected in the lens, but was present in endothelial cells of the tunica vasculosa lentis and vasa hyaloidea propria, and in the choroid and sclera. In newborn eyes, Adamts9 mRNA was most strongly expressed in the ciliary margin zone and prospective ciliary body, sclera, corneal keratocytes and choroidal vasculature (Fig. 2a). This mRNA distribution pattern was similar to β -galactosidase staining elicited from the intragenic lacZ reporter in Adamts9 LacZ/+ mice (Supplementary Fig. S2). Adamts9 haploinsufficiency affects lens capsule integrity, its composition, and lens growth. To determine the pathogenic sequence of ASD, and because of continuous expression of Adamts9 during eye development, Adamts9 del/+ and Adamts9 +/+ eyes were compared histologically at different developmental stages. H&E staining of E14.5 and older Adamts9 del/+ eyes revealed eosinophilic globules adjacent to the posterior aspect of the lens (Fig. 2b, Supplementary Fig. S1e). In contrast, no differences in lens morphology were evident at earlier stages (Supplementary Fig. S1f). From E15.5, Adamts9 del/+ eyes were smaller than wild-type eyes with a disproportionately smaller lens (Fig. 2b,c and Supplementary Fig. S3a,b). Figure 1. Adamts9 del/+ mice have a highly penetrant congenital corneal opacity resulting from ASD. (a-g) Corneal opacity in Adamts9 del/+ eyes is associated with Peters anomaly (a), a smaller eye (b), shallow anterior chamber (c-e), iridocorneal and iridolenticular adhesions (f) and a smaller lens (g). 3 week-old Adamts9 +/+ and Adamts9 del/+ enucleated eyes were analyzed by stereomicroscopy (a), OCT (c,d) and H&E staining of paraffin sections (f). The arrowheads and arrow indicate, respectively, the corneal opacity and the Peters anomaly. Adamts9 del/+ eyes had a significantly smaller diameter than Adamts9 +/+ eyes (b). Anterior chamber volume was determined from OCT data using the Amira segmentation tool (d). Anterior chamber volumes were highly variable in Adamts9 del/+ eyes while quite constant in Adamts9 +/+ eyes (e). Lens area was measured from H&E stained sections and was significantly smaller in Adamts9 del/+ eyes as compared to Adamts9 +/+ eyes (g). The images are representative of 6 Adamts9 +/+ and 12 Adamts9 del/+ eyes analyzed by OCT. Scale bar = 1 mm. Significance was determined using a 2-tailed Student's t test (*p < 0.05). Figure 2 (b,c). Sections from Adamts9 +/+ and Adamts9 del/+ eyes from E14.5 to P0 were stained by H&E and the lens area was measured. At E14.5 Adamts9 +/+ and Adamts9 del/+ eyes were of comparable size. Posterior to the lens, abnormal material (indicated by an arrow) was observed in the E16.5 and P0 Adamts9 del/+ eyes shown here (b) but never in Adamts9 +/+ eyes. At E16.5 and P0, the Adamts9 del/+ lens was significantly smaller than the Adamts9 +/+ lens (c). Images are representative of at least 3 eyes analyzed at each time point. L = lens. Scale bar = 100 μ m. Significance was determined using a 2-tailed Student's t test (*p < 0.05). 
The retro-lenticular globules noted from E14.5 were identified as arising from extruded lens fibers using anti-γ -crystallin antibody, suggesting impaired lens capsule integrity (Fig. 3a). Therefore, we characterized the composition and structure of the lens capsule. Periodic acid-Schiff (PAS) staining, an indicator of lens capsule glycoprotein content, was subtly reduced and diffuse in the E16.5 Adamts9 del/+ lens capsule (Fig. 3b, insets in left-hand column), although relatively normal in newborn (post-natal (P) 0) and 3-week old (P21) Adamts9 del/+ eyes (Fig. 3b, center and right-hand columns). At E16.5, collagen IV immunostaining (Fig. 3c) and laminin immunostaining (Fig. 3d) were conspicuously reduced throughout the circumference of the lens capsule in Adamts9 del/+ eyes. At P0, the reduction in collagen IV and laminin immunostaining in the Adamts9 del/+ lens capsule was less pronounced than at E16.5 (Fig. 3c,d, center panels). At P21, the lens capsule showed regions where collagen IV and laminin immunostaining of the lens capsule were interrupted (Fig. 3c,d, right-hand panels). In contrast, collagen IV and laminin staining in capillaries comprising the tunica vasculosa lentis, the vascular network adjacent to the embryonic lens, were not altered in Adamts9 del/+ eyes (Fig. 3c,d). Despite the marked differences in lens capsule immunostaining between Adamts9 del/+ and Adamts9 +/+ eyes at E16.5, transmission electron microscopy (TEM) at this age showed comparable lens capsule appearance, including thickness, layering and electron density over almost the entire extent of the lens capsule in Adamts9 del/+ and Adamts9 +/+ eyes ( Fig. 4a,b). Notwithstanding this apparent normalcy, discontinuities, i.e., fenestrations in the lens capsule were also detected by TEM in Adamts9 del/+ eyes (Fig. 4c,d). Posteriorly, Adamts9 del/+ eyes contained extruded material continuous with lens fibers (Fig. 4c) and anteriorly, TEM revealed that lens epithelium cells had translocated through lens capsule fenestrae into the anterior chamber (Fig. 4d). The translocated cells were surrounded by an ECM with a similar ultrastructural appearance as the lens capsule, and a duplicated lens capsule was observed anterior to the ectopic cell nests (Fig. 4d). In one eye analyzed by TEM, we observed continuity of the lens fibers with corneal stroma, along with an interrupted corneal endothelium and Descemet's membrane, together satisfying identification of Peters anomaly (Fig. 4e). Adamts9 del/+ lens epithelium undergoes epithelium to mesenchyme transition. The lens epithelium normally comprises a single layer of cells arranged in a columnar format reflecting strict apical-basal polarity, as was seen in Adamts9 +/+ embryos (Fig. 5a,b). In contrast, up to 3 layers of lens epithelial cells were present in E16.5 and P0 Adamts9 del/+ eyes (Fig. 5a,b). In P21 Adamts9 del/+ eyes, lens epithelium cells were embedded in the corneal stroma ( Fig. 5a,b). Combined with staining for laminin, DAPI stained nuclei indicated that the cells adopted aberrant orientations in mutant E16.5 eyes, indicating disruption of cell polarity (Fig. 5b, upper panel). At birth, we observed aberrant nests of cells anterior to the lens capsule and within the cornea that were morphologically distinct from corneal keratocytes, which have a flattened nucleus and an abundant cytoplasm (Fig. 5b, center panels). ECM surrounding these ectopic cells stained positive for laminin and collagen IV, indicative of ectopic basement membrane (Fig. 
5b, center panel), consistent with the TEM observations of a multilayered lens epithelium and duplicated anterior lens capsule (Fig. 4d). At 21 days of age, the cell nests were more prominent than at birth, and were embedded in abundant collagen IV and laminin (Fig. 5b, lower panel, inset). We concluded that the aberrant cell nests in the anterior chamber and corneal stroma were ectopic lens epithelium cells that continued to deposit lens capsule after E16.5. In addition, the keratocyte density was greater in the P21 Adamts9 del/+ corneal stroma than in Adamts9 +/+ littermates (Supplementary Fig. S5). However, cell density in the corneal stroma and corneal thickness were not significantly altered in embryonic or newborn Adamts9 del/+ eyes as compared to Adamts9 +/+ eyes (Supplementary Fig. S5), indicative of post-natal corneal changes. Corneal invasion by lens epithelium suggested occurrence of epithelium to mesenchyme transition (EMT) of lens epithelium. During EMT, epithelial cells lose or reduce expression of epithelial markers, such as E-cadherin, and acquire mesenchymal markers, such as N-cadherin and α -smooth muscle actin (α -SMA), as well as altered polarity. In Adamts9 del/+ newborn eyes, the anteriorly extruded lens epithelium cells expressed the mesenchymal markers α -SMA and N-cadherin, as well as the epithelial marker, E-cadherin (Fig. 5c). In Adamts9 +/+ eyes, α -SMA staining was absent in corneal stroma and lens epithelium, whereas E-cadherin was expressed only by the lens and the corneal epithelium, and N-cadherin by the lens epithelium and lens fibers (Fig. 5c). At P21, ectopic lens epithelium in Adamts9 del/+ eyes still expressed the three markers, although α -SMA immunostaining was stronger than at P0, and E-cadherin expression was reduced (Fig. 5c). In P21 Adamts9 +/+ eyes, α -SMA staining was not detectable in the cornea or the lens, and E-cadherin and N-cadherin were only expressed, as expected, by lens epithelium and corneal endothelium, respectively. These changes are indicative of age-related progression of EMT of Adamts9 del/+ ectopic lens epithelial cells, while retaining some characteristics of the lens epithelium. Since lens epithelium does not express Adamts9 (Fig. 2a, Supplementary Fig. S2), EMT may be secondary to lens capsule alteration and loss of appropriate lens epithelium basal contacts with the lens capsule. To further establish the origin of the ectopic cell nests, Adamts9 del/+ and Adamts9 +/+ mice were crossed with Wnt1-Cre mice carrying a dual fluorescent reporter (mT/mG). In Adamts9 +/+ ; mT/mG; Wnt1-Cre eyes, neural crest-derived cells (e.g., keratocytes in the cornea and corneal endothelium) were marked by green fluorescence, reflecting excision of the Td Tomato reporter (red) by Cre and a switch to GFP expression, whereas corneal epithelium (Fig. 5d, top panel) and lens epithelium displayed constitutive red fluorescence. In Adamts9 del/+ ; mT/mG; Wnt1-Cre eyes, the aberrant cell nests in the anterior chamber exhibited red fluorescence, demonstrating that in contrast to corneal stroma (green), these cells were not neural crest-derived (Fig. 5d, lower panel). Taken together with the presence of lens capsule components detected by immunostaining around these cells, and the lens epithelium-like morphology seen by TEM (Fig. 4), we suggest that the ectopic cells in the mutant cornea arise from lens epithelium by EMT. Figure 3. (a) E12.5 to E16.5 Adamts9 del/+ and Adamts9 +/+ eyes were stained using an antibody against γ -crystallin, which identified the extrusions lying posterior to the lens as extruded lens fibers (arrows). Images are representative of at least 3 eyes analyzed for each time point. L = lens. Scale bar = 25 μ m. (b-d) E16.5, P0 and P21 Adamts9 del/+ and Adamts9 +/+ eyes were stained by periodic acid-Schiff (b, PAS stain, magenta, indicative of glycoproteins) or using antibodies directed against collagen IV (c, red) or laminin (d, red). Collagen IV and laminin immunostaining of the lens capsule (arrow) had reduced intensity in Adamts9 del/+ eyes as compared to Adamts9 +/+ eyes, most evident in E16.5 embryos. However, the capillary basement membrane of vessels comprising the tunica vasculosa lentis (asterisk) had similar staining in both genotypes. At higher magnification (framed images), segmentally weaker or discontinuous lens capsule immunostaining (arrowheads), suggestive of fenestrae, was observed in mutant eyes. The images are representative of at least 3 eyes analyzed at each time point. L = lens. Scale bar = 50 μ m. Figure 5. (a,b) E16.5, P0 and P21 Adamts9 +/+ and Adamts9 del/+ eyes were stained with H&E (a) or an anti-collagen IV antibody (red) and the nuclei were counterstained with DAPI (white) (b). At E16.5, a disorganized lens epithelium (arrowhead) was associated with weaker collagen IV staining in Adamts9 del/+ eyes. At P0 and P21, aberrant nests of cells surrounded by collagen IV-positive ECM were observed only in the Adamts9 del/+ eyes (arrow). Higher magnification of these cells in P21 Adamts9 del/+ eyes is shown (inset). These images represent at least 3 eyes analyzed at each time point. L = lens, C = cornea. Scale bar = 25 μ m. (c) P0 and P21 Adamts9 del/+ and Adamts9 +/+ eyes were stained using antibodies directed against α -smooth muscle actin (α -SMA) (red), E-cadherin (red) or N-cadherin (red) and nuclei were stained by DAPI (nucleus: white). At P0 and P21, the ectopic cells in Adamts9 del/+ corneas stained positive for α -SMA, E-cadherin and N-cadherin (arrows), whereas no α -SMA expression was observed in the lens or the cornea of Adamts9 +/+ mice, and E-cadherin was distinctly expressed by the lens and corneal epithelium. The corneal endothelium of Adamts9 +/+ mice strongly expressed N-cadherin, but also low levels of E-cadherin. C = cornea, E = corneal endothelium, L = lens. (d) Adamts9 del/+ and Adamts9 +/+ mice were bred with mT/mG; Wnt1-Cre mice for lineage tracing of neural crest-derived cells (green). In adult Adamts9 del/+ ; mT/mG; Wnt1-Cre eyes (lower panel), ectopic cells are red (arrow) and thus, unlike corneal keratocytes and corneal endothelium (green, see mT/mG; Wnt1-Cre cornea in upper panel), they do not arise from the neural crest. Scale bar = 25 μ m. POFUT2 and B3GLCT modify ADAMTS9 to regulate its secretion. ADAMTS9 contains 15 TSRs, of which 12 contain the O-fucosylation consensus motif, CXX(S/T)CXXG (Fig. 6a). Full-length ADAMTS9 is secreted into the medium of transfected cells at levels too low to permit efficient recombinant protein purification, and natural sources of this molecule are unavailable. Therefore, to determine whether ADAMTS9 TSRs are modified by POFUT2 and B3GLCT, we generated two recombinant human ADAMTS9 constructs (Fig. 6a) that, between them, include all of its TSRs, for analysis by mass spectral glycoproteomic methods 22,23 . 
These constructs, containing the first 8 TSRs (hADAMTS9-N-L2) or TSR9-15 (hADAMTS9 TSR9-15) (Fig. 6a), were purified from the conditioned medium of stably transfected HEK cells and were subjected to tryptic digestion. Mass spectral analysis of the peptides confirmed modification of TSR5-9, TSR11-13 and TSR15 by POFUT2 and B3GLCT (Fig. 6b, Supplementary Table S1 and Supplementary Fig. S6). Semi-quantitative analysis by extracted ion chromatography (EIC) showed the fully modified, Glucoseβ 1-3Fucose disaccharide forms of the peptides to be the most abundant (Fig. 6b). Since POFUT2 and B3GLCT are the only known enzymes capable of adding these two sugars to TSRs 33-37 , these data strongly support the conclusion that ADAMTS9 is modified by both POFUT2 and B3GLCT. Whether or not O-fucosylation governed quality control of ADAMTS9 for secretion was determined by knockdown of POFUT2 or B3GLCT mRNA using specific siRNAs and a scrambled siRNA as a control in HEK cells. The secreted ADAMTS9-N-L2 levels in the medium of POFUT2, B3GLCT or control siRNA transfected cells were compared relative to the impact on secretion of a co-transfected plasmid expressing IgG (Fig. 7a; the complete gel is shown in Supplementary Fig. S7). As previously seen, ADAMTS9-N-L2 migrated more rapidly in the medium of transfected cells than in the cell lysate (Fig. 7a, Supplementary Fig. S7) because of furin-mediated excision of the N-terminal propeptide (Fig. 7a), which reduces molecular mass by ∼ 25kDa 38 . Knockdown of POFUT2 led to a nearly complete loss of ADAMTS9-N-L2 secretion compared to the scrambled siRNA (Fig. 7a,b, Supplementary Fig. S7). Knockdown of B3GLCT also caused a statistically significant reduction of ADAMTS9 secretion (Fig. 7b). In contrast, knockdown of POFUT2 or B3GLCT had no effect on secretion of the transfected IgG control. Figure 6 legend (excerpt): modified sites are listed in Supplementary Table S1. Note that lack of data for TSRs 1, 4, and 14 does not indicate that they are unmodified, but that the ions corresponding to those sites were not detected (* = contaminating peptide). Although the present studies clearly demonstrated ADAMTS9 post-translational modification by POFUT2 and B3GLCT, and B3GLCT has long been known as the causative gene in PPS, little is known about the spatial and temporal regulation of expression of these transferases during eye development. Because this information is crucial for providing a physiological context for the action of these enzymes on ADAMTS9, we determined their mRNA expression pattern using ISH. As shown in Supplementary Fig. S8, both Pofut2 and B3glct were expressed in the developing eye from E11.5 to birth. Their mRNAs were widely expressed, being evident in the optic cup/retina, lens epithelium, cornea and peri-ocular mesenchyme. They not only overlap entirely with the Adamts9 mRNA distribution (Fig. 2a, Supplementary Fig. S2), but show a broader expression than Adamts9 mRNA, consistent with their having 48 other potential targets, which may each have very different expression patterns. We conclude that Pofut2 and B3glct mRNAs are expressed in the very same cells that express Adamts9 and are therefore physiologically relevant to ADAMTS9 post-translational modification and secretion in the eye. Discussion We demonstrate here that congenital corneal opacity and Peters anomaly result from Adamts9 haploinsufficiency in mice. Because enzyme deficiencies are typically recessive in their manifestation, the hemizygous impact of Adamts9 is remarkable and implies a major role for this gene in ocular morphogenesis. 
Corneal opacity occurs in two distinct Adamts9 alleles, Adamts9 del/+ and Adamts9 LacZ/+ , but not in mice hemizygous for a hypomorphic gene trap allele, Adamts9 GT , which has ADAMTS9 function intermediate between the wild-type and haploinsufficient alleles 39 . Together, these observations indicate that a critical dosage threshold of ADAMTS9 is involved in eye development. The Adamts9 LacZ/+ and Adamts9 del/+ alleles differed in the timing of detection of the ocular defect. In Adamts9 LacZ/+ mice, corneal opacity was evident in only 20% of mice after 5 weeks of age and in 80% after 25 weeks of age 32 , whereas in Adamts9 del/+ mice, the corneal opacity was obvious as early as 2 weeks of age and present in nearly all mice by 3 weeks. Although Peters anomaly was not observed in Adamts9 LacZ/+ mice, anterior synechiae (lens-iris-cornea adhesions) and lens extrusions similar to those detected in Adamts9 del/+ mice were present. These differences could result from the different targeting strategies employed in the two alleles or slightly different genetic backgrounds of the two strains. The Adamts9 LacZ/+ allele was generated in a hybrid genetic background (129Sv X C57Bl/6) by targeting exon 3 (encoding the propeptide), and was backcrossed for 10 generations into the C57Bl/6 background 32 , whereas the Adamts9 del/+ allele was generated and subsequently maintained in the C57Bl/6 background, and it was engineered to lack exons 5-8 (encoding the catalytic domain) 29 . Each targeted allele results in a frame-shift and the Adamts9 mRNA, if stable, would generate only the N-terminal propeptide in the mutants, to which no innate activity has been ascribed in any ADAMTS protease. Consistent with a possible role for the genetic background, the ocular phenotype was not initially apparent in 129Sv X C57Bl/6 Adamts9 LacZ/+ mice, and was only noted after six to eight crosses into the C57Bl/6 strain 32 . Although it was previously reported that eye abnormalities occur spontaneously at low frequency in C57Bl/6 mice, and have a strong predilection for the right side 40,41 , we did not notice eye anomalies in wild-type littermates, and the preponderance of anomalies in the right eye of Adamts9 del/+ mice is presently unexplained. Although Adamts9 del/+ eyes do not exhibit morphologic anomalies prior to E12.5, the presence of Peters anomaly indicates that lens separation, which occurs at E11, is compromised in a significant proportion of eyes. After the lens separation period, Adamts9 del/+ eyes consistently showed impaired lens growth and loss of structural integrity of the lens capsule, resulting in lens fiber extrusion posteriorly and migration of the lens epithelium into the cornea anteriorly through an EMT-like process. The lens capsule is a specialized and exceptionally thick basement membrane composed mainly of collagen IV, laminin, entactin/nidogen and perlecan 42 . It is deposited by lens epithelium anteriorly and lens fibers posteriorly as successive basement membrane layers during development. PAS staining and immunostaining demonstrated that the lens capsule composition was transiently altered during development of Adamts9 del/+ eyes. Other mouse mutants reported to have lens extrusions and/or outgrowth of lens epithelium had defined defects in lens capsule components, i.e., perlecan 43 and laminin 44 . Col4a1 mutant mice develop ASD, although lens fiber extrusions through the lens capsule were not described 13,45 . 
We conclude that ADAMTS9 may participate in stabilization of the basement membrane or in lens capsule remodeling during lens growth. Such a role, if compromised, could also have an adverse effect on "pinching off" of the lens and result in Peters anomaly. Interestingly, mice deficient in peroxidasin, an enzyme responsible for collagen IV crosslinking through formation of sulfilimine bonds 46 , developed ASD associated with a small lens, loss of lens capsule integrity and posterior and anterior lens extrusions 47 . These mice also have white spotting closely resembling the pigmentation defect in Adamts20 bt/bt mice 48 , which is exacerbated in Adamts20 bt/bt ;Adamts9 del/+ newborns 28 . We speculate that these findings could suggest that ADAMTS9 and peroxidasin work in the same developmental pathway, a potential future direction for this work. Adamts9 haploinsufficiency may affect assembly of collagen IV or laminin subunits expressed in the embryonic period, such as laminin α 1, which is essential for embryonic lens capsule development, and collagen IV assemblies with chain composition α 1α 1α 2:α 1α 1α 2 and α 1α 1α 2:α 5α 5α 6, which are expressed in the embryonic lens capsule and replaced at birth by collagen IV with the chain composition α 3α 4α 5:α 3α 4α 5 42 . The ADAMTS9 substrate versican is present in the hyaloid space from E10-E14, but its staining in the mutant eyes was not consistently different from the wild-type. A proper interaction of the lens epithelial cells with the lens capsule at their basal aspect maintains their polarity, as demonstrated in integrin and integrin-linked kinase knockout mice 49-51 . Our findings indicate that lens epithelium cells transitioned to an intermediate EMT state between P0 and P21, since they deposited lens capsule-like basement membrane, identified by immunostaining and TEM analysis, and retained E-cadherin expression, yet lost their polarity and acquired α -SMA and N-cadherin expression. In addition to Peters anomaly, corneal invasion by lens epithelium, lens capsule deposition in the corneal stroma, as well as increased keratocyte density are together likely to be significant contributors to corneal opacity. We propose that the consecutive changes we observed comprise a pathogenic sequence arising from a flawed lens capsule. Of 49 known B3GLCT targets, ADAMTS9 is the first to be associated with Peters anomaly. The similarities between the ocular phenotype of Adamts9 del/+ mice and the ocular phenotype of PPS in humans suggest ADAMTS9 as the first B3GLCT substrate whose impairment potentially explains the ocular anomalies of PPS. In strong support of this possibility, we have further shown: (1) that several ADAMTS9 TSRs are O-fucosylated; (2) that knockdown of B3GLCT or POFUT2, i.e., defective O-fucosylation, impairs ADAMTS9 secretion; and (3) that Adamts9, B3glct and Pofut2 have overlapping expression in the eye throughout development. POFUT2 and B3GLCT likely affect ADAMTS9 secretion by a quality control process occurring in the ER that was previously established by analysis of ADAMTSL1, ADAMTSL2 and ADAMTS13, which showed that the modifying enzymes recognized properly folded TSRs and stabilized them by glycosylation 21 . Because TSR modification by B3GLCT is dependent on prior attachment of O-linked fucose, POFUT2 knockdown inevitably affects both monosaccharide and disaccharide forms of O-fucosylation. 
Consistent with the severe reduction of ADAMTS9 secretion upon POFUT2 knockdown, Pofut2 null embryos, like Adamts9 nulls embryos, die in early development with essentially identical phenotypes 37,52 , although Pofut2 +/− mice do not have eye anomalies. In contrast, knockdown of B3GLCT reduced, but did not eliminate ADAMTS9 secretion in vitro, predicting a milder outcome than loss of POFUT2. Although B3glct deficient mice are currently unavailable for comparison with Pofut2 and Adamts9-deficient mice, survival of humans with PPS supports the milder-than-expected outcome of B3GLCT than POFUT2 deficiency. ADAMTS9, in addition to the ASD reported here, has additional relevance to PPS. Notably, Adamts9 haploinsufficiency resulted in cardiovascular defects and, in combination with Adamts20 deficiency, cleft palate, which are anomalies seen frequently in PPS 28 . Since Adamts9 del/+ mice were of normal size and limb-specific Adamts9 conditional deletion did not affect limb length 29 , impairment of ADAMTS9 does not explain short stature in PPS. PPS is predicted to be a composite phenotype resulting from reduced secretion (and therefore functional loss) of a subset of developmentally crucial B3GLCT substrates from among 49 predicted targets. Of these, not all are likely to be subjected to quality control by B3GLCT, so the number of functionally impaired targets in PPS is likely to be even smaller. Which are some of the other likely participants in PPS from among the 26 target ADAMTS proteins? Geleophysic dysplasia, which is caused by recessive ADAMTSL2 mutations, results in short stature, and brachydactyly, which are two major defining characteristics of PPS, and affected individuals have similar facial features to PPS 17,53 . Appropriately, B3GLCT knockdown abolished ADAMTSL2 secretion 21 . ADAMTS10 mutations lead to short stature and brachydactyly in Weill-Marchesani syndrome 54 , whereas ADAMTS17 mutations lead to short stature 55 . Thus, ADAMTS10 and ADAMTS17 are potentially relevant to PPS, but the effect of B3GLCT on their secretion is presently unknown. On the other hand, some potential targets could be excluded. ADAMTS2 and ADAMTS13 are required for skin integrity and hemostasis respectively, but neither function is impaired in PPS, and, indeed, ADAMTS13 is not subject to B3GLCT quality control 21 . From these observations, it seems reasonable to conclude that impairment of ADAMTS9 (in the eye, palate and heart) and ADAMTSL2 (in skeletal growth) contributes to the PPS phenotype, but that other relevant targets remain to be evaluated. Intriguingly, both ADAMTS9 and B3GLCT were recently identified in genome-wide association studies as loci linked to age-related macular degeneration [56][57][58] . The direct functional link between their protein products demonstrated here for the first time, expression of Adamts9 in the microvasculature 32 , and its role in eye development strengthens this bi-allelic association. Future functional analysis of ADAMTS9 and B3GLCT in age-related macular degeneration is thus warranted. In addition, the mouse eye phenotype reported here suggests consideration of ADAMTS9 itself as a candidate gene for ASD. Although Adamts9 mRNA was strongly expressed in the anterior pole of the optic cup, especially in E11.5 eyes, the current study demonstrated severe ASD, and not retinal anomalies. 
Because the eye develops from crosstalk between the optic cup and lens ectoderm, this suggest that ADAMTS9, a cell-surface and secreted protease produced mainly by the optic cup and hyaloid plexus, may well act in trans on the lens vesicle. Future experiments using Adamts9 conditional mutagenesis in specific ocular structures or embryonic lineages contributing to the eye will be insightful in this regard. Gross Morphology and Optical Coherence Tomography. We evaluated adult mouse eyes for the presence of leukoma (corneal opacity) using a Leica MZ6 stereomicroscope coupled to an Insight Spot2 FireWire camera (Diagnostic Instruments, Inc, Sterling Heights, MI) immediately after dissection and immersion in PBS. E12.5 and E15.5 mouse embryos were fixed in 4% paraformaldehyde (PFA) and their eyes were photographed. The eye surface and eye diameter was measured using ImageJ ® software (NIH, Bethesda, MD). At E15.5, the lens surface area, rendered opaque by 4% PFA fixation, was also measured using ImageJ ® software. Optical Coherence Tomography (OCT) was performed on enucleated P21 mouse eyes fixed overnight in 4% PFA. The eyes were imaged using a custom-built Fourier domain OCT system with a quasi-telecentric scanner, linear-in-wavenumber spectrometer and a line-scan camera with a line rate of 47 kHz 40,41 . The axial resolution as well as the lateral resolution is approximately 10 μ m in tissue. Custom MATLAB programs (MathWorks; Natick, MA) were utilized to create OCT images from the raw data. Amira software (FEI Visualization Sciences Group; Burlingtong, MA) was used to visualize OCT data and quantify anterior chamber volume. All mouse work was performed under a protocol approved by the Cleveland Clinic Institutional Animal Care and Use Committee. Animal husbandry and euthanasia was done in accordance with guidelines established by the American Veterinary Medical Association. Histology, immunohistochemistry (IHC) and electron microscopy. Mouse embryos or enucleated eyes were fixed overnight at 4 °C in 4% PFA prior to paraffin embedding, and 5 μ m thick sections were taken for hematoxylin and eosin (H&E) stain, periodic acid-Schiff stain, oxytalan fiber staining, or immunohistochemistry/immunofluorescence. Cornea cell density and thickness were quantified on DAPI stained sections using ImageJ ® software. For electron microscopy, eyes were fixed with 2.5% glutaraldehyde plus 4% PFA in 0.2 M sodium cacodylate buffer, pH 7.4 prior to processing and embedding in epoxy resin. Thin sections (85 nm) were stained with osmium tetroxide and viewed with a Phillips CM12/STEM transmission electron microscope (FEI Company, Delmont, PA, USA) equipped with a digital 11-megapixel CCD camera (Gatan, Pleasanton, CA, USA). Lens capsule thickness was measured using Image J software. β-galactosidase (β-gal) histochemistry and RNA in situ hybridization (ISH). Whole mount β -gal staining of Adamts9 lacZ/+ eyes 32 was followed by paraffin embedding and 5 μ m sections were counterstained with eosin as previously described 32,61 . For RNA in situ hybridization, (ISH), mouse embryo heads or enucleated eyes were obtained fresh at various ages. Tissues were fixed overnight in 4% PFA/PBS, embedded in paraffin and sectioned (6 μ m). ISH was carried out using the RNAscope ® technique and custom-designed Adamts9, Pofut2 and B3glct probes and a HybEZ ™ Oven (RNAscope ® 2.0; Advanced Cell Diagnostics, Hayward, CA) according to the manufacturer's instructions. 
Target probe binding was disclosed by alkaline phosphatase-staining with Fast Red as substrate and Gill's hematoxylin as a counterstain. As a negative control, some sections were hybridized with target probe against DapB, a bacterial gene encoding dihydrodipicolinate reductase. A target probe directed against ubiquitously expressed Polr2a served as a positive control. Expression plasmids, cell culture, western blotting and protein purification. hADAMTS9-N-L2 expression plasmid was previously described 38 . hADAMTS9 TSR9-15 was generated by PCR amplification of the encoding cDNA and cloning into pSecTagA (Life Technologies) for in-frame expression with an N-terminal signal peptide and C-terminal myc-His 6 tag in mammalian cells. PCR was performed using full-length ADAMTS9 cDNA 27 as the template and primers 5′ -AAGGCGCGCCCTCGGTGGAAACCAGTGGAGAACT-3′ and 5′ -AACTCGAGTGCAATTCTGGGGTAACTCACAGTT-3′ (restriction enzyme recognition sequences employed for cloning are underlined). Plasmids were transfected into CHO cells (ATCC, Manassas, VA) using Fugene ® 6 (Promega, Madison, WI) and stably transfected clones were selected using 500 μ g/ml Zeocin (hADAMTS9 TSR9-15, Life Technologies) or 300 μ g/ml G418 (hADAMTS9-N-L2). Clones were expanded in DMEM supplemented with 10% fetal bovine serum; conditioned media were tested by western blotting using anti-myc mouse monoclonal antibody (clone 9E10, Life Technologies). At confluence, medium was removed and replaced with serum-free medium. After 48 h, the conditioned media were collected and affinity-purified by Ni-NTA chromatography (Qiagen, Germantown, MD) using an AKTA FPLC instrument (GE Healthcare). Mass spectral glycoproteomic analysis. The hADAMTS9 constructs purified as described above were reduced, alkylated, and subjected to digestion with Trypsin or Chymotrypsin (Promega). The resulting peptides were analyzed on an Agilent 6340 HPLC-Chip Cube nano LC-Ion trap mass spectrometer as described previously 62 . siRNA knockdown experiments. HEK293T cells were purchased (ATCC, Manassas, VA) and co-transfected with negative control (non-targeting siRNA), POFUT2 or B3GLCT siRNA 21 and plasmid encoding hADAMTS9-N-L2 using Lipofectamine 2000 (Life Technologies) according to manufacturer's protocol. A plasmid encoding the human IgG heavy chain was co-transfected with siRNAs as a control and IgG analysis in medium and cells was used for normalization of ADAMTS9 levels. 48 hours post-transfection, the cell and media fractions were collected and analyzed by quantitative western blotting using mouse anti-myc (9E10) or anti-human IgG (Rockland Immunochemicals, Limerick, PA) on an Odyssey 9120 infrared imaging system (LI-COR, Lincoln, NE). Statistics. All data are reported as the mean ± SD. Statistical differences between two groups were analyzed with a 2-tailed Student's t test, assuming a normal distribution. A p value of less than 0.05 was considered statistically significant.
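As a rough illustration of the quantification workflow described in the siRNA knockdown and statistics paragraphs above (normalisation of secreted ADAMTS9-N-L2 to the co-transfected IgG control, followed by a two-tailed Student's t test), the following is a minimal sketch; the band intensities, replicate counts and group names are hypothetical and do not come from the study.

```python
# Minimal sketch of the quantification described above: secreted ADAMTS9-N-L2 signal is
# normalized to the co-transfected IgG control, and groups are compared with a two-tailed
# Student's t test. The intensity values below are invented for illustration only.
from statistics import mean, stdev
from scipy import stats

# hypothetical quantitative western blot intensities (arbitrary units), one value per replicate
adamts9_medium = {"scrambled": [1.00, 0.92, 1.08], "siB3GLCT": [0.55, 0.61, 0.48]}
igg_medium     = {"scrambled": [1.00, 0.97, 1.05], "siB3GLCT": [0.98, 1.02, 0.95]}

def normalized(group: str) -> list[float]:
    """ADAMTS9 signal divided by the IgG control, replicate by replicate."""
    return [a / g for a, g in zip(adamts9_medium[group], igg_medium[group])]

ctrl, kd = normalized("scrambled"), normalized("siB3GLCT")
t_stat, p_value = stats.ttest_ind(ctrl, kd)          # 2-tailed Student's t test
print(f"control: {mean(ctrl):.2f} ± {stdev(ctrl):.2f}")
print(f"B3GLCT knockdown: {mean(kd):.2f} ± {stdev(kd):.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f} (significant if p < 0.05)")
```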
2018-04-03T05:32:57.038Z
2016-09-30T00:00:00.000
{ "year": 2016, "sha1": "5cd3ef66b962281b4489c2f13d600f9c87466381", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1038/srep33974", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5cd3ef66b962281b4489c2f13d600f9c87466381", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
221221766
pes2o/s2orc
v3-fos-license
SAFety, Effectiveness of care and Resource use among Australian Hospitals (SAFER Hospitals): a protocol for a population-wide cohort study of outcomes of hospital care Introduction Despite global concerns about the safety and quality of health care, population-wide studies of hospital outcomes are uncommon. The SAFety, Effectiveness of care and Resource use among Australian Hospitals (SAFER Hospitals) study seeks to estimate the incidence of serious adverse events, mortality, unplanned rehospitalisations and direct costs following hospital encounters using nationwide data, and to assess the variation and trends in these outcomes. Methods and analysis SAFER Hospitals is a cohort study with retrospective and prospective components. The retrospective component uses data from 2012 to 2018 on all hospitalised patients aged ≥18 years included in each state and territory's Admitted Patient Collections. These routinely collected datasets record every hospital encounter from all public and most private hospitals using a standardised set of variables including patient demographics, primary and secondary diagnoses, procedures and patient status at discharge. The study outcomes are deaths, adverse events, readmissions and emergency care visits. Hospitalisation data will be linked to subsequent hospitalisations and each region's Emergency Department Data Collections and Death Registries to assess readmissions, emergency care encounters and deaths after discharge. Direct hospital costs associated with adverse outcomes will be estimated using data from the National Cost Data Collection. Variation in these outcomes among hospitals will be assessed adjusting for differences in hospitals' case-mix. The prospective component of the study will evaluate the temporal change in outcomes every 4 years from 2019 until 2030. Ethics and dissemination Human Research Ethics Committees of the respective Australian states and territories provided ethical approval to conduct this study. A waiver of informed consent was granted for the use of de-identified patient data. Study findings will be disseminated via presentations at conferences and publications in peer-reviewed journals. Strengths and limitations of this study ► SAFety, Effectiveness of care and Resource use among Australian Hospitals is a population-wide study that uses routinely collected administrative data from all public and most (86%) private hospitals in Australia to estimate the national incidence of serious adverse events, deaths and unplanned hospitalisations, and measure how these outcomes vary among hospitals. ► It will also use publicly available summary cost data to estimate the avoidable direct healthcare costs associated with adverse outcomes of care. ► It has retrospective and prospective components that will enable monitoring of temporal change in patient characteristics and outcomes over time. ► The major strength of this study is comprehensive data linkage enabling reporting of outcomes at a national scale including patient outcomes after discharge. ► The main limitation of the study is unmeasured confounding from the use of routinely collected administrative data, which is less granular than data collected solely for research, although a population-wide study on this scale would not be feasible without using administrative data. INTRODUCTION Modern hospital care is fast-paced, complex and expensive. While this care has undoubtedly led to advances in care for many patients, the Institute of Medicine report 'To Err is Human' highlighted an uncomfortable truth: modern care also leads to considerable harm, with ~98 000 patient deaths attributed to medical error. 1 In addition to deaths, patients experience high rates of adverse events, with a systematic review of pivotal studies 2 from the USA, UK, Australia, New Zealand and Canada suggesting 9.2% of hospitalised patients suffer an adverse event, with nearly half (43.5%) of these events deemed preventable, despite these countries having advanced healthcare systems. Many of these adverse events occur after hospital discharge; for example, about 20% of patients experience an adverse event in the 1st month post-discharge, 3 a rate that is almost double the reported rate of in-hospital adverse events. Large-scale population-wide studies, predominantly from North America, also suggest high rates of death and unplanned readmissions following hospitalisations, 4 with these adverse outcomes being a major cause of avoidable healthcare costs. 5 6 Moreover, these studies also show that patient outcomes vary twofold to threefold among hospitals for many common conditions, 4 7 8 suggesting variations in quality of care. 4 7 Healthcare outcomes in Australia In Australia, far less is known about these harms because post-discharge outcomes of care are not systematically captured or analysed. More than 10 million hospitalisations occur annually in Australia for the treatment of a range of conditions across 1322 structurally diverse, and geographically dispersed, public and private hospitals (Figure 1) at a cost of >$A60 billion/year. 9 The Quality in Australian Health Care Study, performed nearly two decades ago, found 16.6% of patients experienced an adverse event leading to disability or a longer hospital stay, with half (51%) deemed preventable. 10 The disability was permanent in 13.7% of patients, and 4.9% of the patients died from a potentially preventable cause. Multiple subsequent studies 9-11 show persisting rates of adverse events of 6.9%-16.6% among Australian hospitals, 11 suggesting suboptimal patient safety. However, there are no large-scale national studies of post-discharge adverse events, deaths or unplanned readmissions in Australia. The lack of post-discharge outcome data in Australia means that adverse outcomes of hospital care, and the full impact of these events on patients and the health system, are substantially underestimated. The main barriers have been accessing hospitalisation data and the inability to link individual patient data across hospitals and over time to measure post-discharge outcomes on a national scale. Direct healthcare costs associated with these adverse outcomes are also uncertain. In 2016-2017, healthcare cost Australia over $A112.0 billion, with 47.8% ($A63.8 billion) of the total cost spent on public hospitals, making hospitals the largest area of health expenditure. 12 Preventable in-hospital complications alone are estimated to contribute $A1.5 billion to this cost. 13 However, little or no information currently exists about the resource impact of post-discharge outcomes on the Australian health system, meaning that the impact of adverse events and related avoidable healthcare costs are likely to be underestimated. For example, the cost of unplanned readmissions within 30 days in the US Medicare population has been estimated at US$17 billion per year, 6 much of which is avoidable expenditure as many of these readmissions are preventable. 
Figure 1 Geographical distribution of hospitals throughout Australia. Approximately 70% of the Australian population are in densely populated major cities, with the remainder distributed through sparsely populated regional and remote areas. There are more than 1322 public and private hospitals distributed throughout eight states and territories. Public hospitals that predominately provide acute care are indicated in the figure. Geographical region is based on the Australian Standard Geographical Classification.

Cost and resource considerations are major drivers of decision-making for clinicians, health services and government. Comprehensively quantifying avoidable healthcare expenditure in the Australian setting may, therefore, provide a strong catalyst for change, and may assist the development of cost-effective interventions to improve patient outcomes. Public concerns also exist about unwarranted variation in processes of care among Australian hospitals, although there are only a few studies that have examined variation in outcomes. 11 15 16 While these studies focus on a limited number of conditions, they consistently show marked variation in use of treatments or procedures among hospitals which is not explained by variations in patient demographics, suggesting disparities in care. The Australian Productivity Commission, 17 an independent policy advisory body on a range of issues affecting the welfare of Australians, recommended reporting on basic outcome measures such as mortality and readmissions to inform patients and build greater transparency and accountability, citing significant opportunities to improve healthcare efficiency. Similarly, the Australian Commission on Safety and Quality in Health Care (ACSQHC) has produced the Atlas of Healthcare Variation, which has shown substantial geographical variation in use of healthcare across Australia. 18 The commission has further proposed to report outcomes of care across public and private hospitals and make this information available to the public. Profiling hospital variations in outcomes allows rapid detection of 'outlier' hospitals with high adverse event rates, thereby providing a mechanism to rapidly intervene and minimise potential safety and quality problems. These methods can facilitate local quality improvement efforts by feedback of data to stimulate hospitals to critically examine their outcomes and, when necessary, invest in infrastructure, protocols and other strategies to reduce adverse outcome rates over time. Lastly, profiling variation in outcomes promotes knowledge translation from positive deviance, that is, learning from facilities with low adverse outcome rates to identify innovative strategies to improve care.

SAFER Hospitals: a national data linkage study
SAFety, Effectiveness of care and Resource use among Australian Hospitals (SAFER Hospitals) is a nationwide cohort study that, for the first time in Australia, brings together linked hospitalisation and outcome data from all Australian states and territories in addressing these knowledge gaps. Every Australian hospital collects data on all hospital encounters using a nationally standardised set of data definitions. Individual records can be linked within the datasets, and with others such as death registries, making it feasible to track important outcomes after hospitalisations.
The SAFER Hospitals study will create a national data collection to estimate hospital-wide incidence of outcomes of hospital-based care across all conditions, separating hospital-wide effects on patient outcomes from conditionspecific effects. It will also assess the variation in these outcomes among hospitals, quantify the consequences of adverse outcomes on patients in the short-term and longterm, and assess their impact on healthcare systems with a focus on potentially avoidable health system costs and resources used. The specific study aims are to 1. Assess the safety of hospital care by estimating the incidence of serious adverse events and consequences of these adverse events on short-term and long-term patient outcomes. 2. Assess the effectiveness of hospital care by (a) estimating the incidence of rehospitalisation (inpatient readmissions and emergency care encounters) and allcause mortality post-hospitalisation; and by (b) quantifying the proportion of these events that result from potentially preventable causes. 3. Assess healthcare costs and resource use associated with adverse outcomes defined in aims 1 and 2, and to estimate the proportion of costs and resources that may be avoidable. 4. Compare variation in serious adverse events, deaths and unplanned readmissions among hospitals using standardised methods that account for differences in hospitals' case-mix and volume. 5. Develop and test the application of advanced computational methods such as machine learning in facilitating more accurate detection and prediction of adverse outcomes. 6. Prospectively evaluate the temporal change in specific outcomes of hospital care by periodically re-evaluating these outcomes until 2030. METHODS AND ANALYSIS Study design A population-wide cohort study consisting of retrospective and prospective components, using routinely collected administrative data from all public and most (86%) private hospitals in Australia. Private hospital data are not available to researchers from South Australia, Northern Territory and Tasmania which collectively contain about 14% of all private hospitals in Australia. 19 Figure 2 outlines the study schema. Study cohort We include all hospitalised patients aged ≥18 years from each state and territory's Admitted Patient Collections (APC) from 1 January 2012 to 31 December 2018. Each APC records all inpatient and same-day admissions from all public and most private sector hospitals and day procedure centres. Patient data are collected using a standardised set of variables defined by the National Hospital Minimum Data set for admitted patient care. 20 The information collected in each APC includes patient sociodemographic characteristics, geographical region, source of referral to the service, acute and elective status of the encounter and service referred to on separation, primary and up to 50 secondary diagnoses and procedures, shown relatively good (>85%) accuracy of coded data. 21 From the APCs, we include all hospitalised patients irrespective of condition, as the primary objective is to study hospital-wide outcomes (ie, across all conditions). Adverse outcomes of hospital care are frequently driven by broad hospital-wide phenomenon such as hospital quality control systems and discharge practices in addition to model of care factors related to the underlying condition. Consistent with prior population studies, 11 22 this approach allows evaluation of hospital-wide phenomenon separate from condition specific effects. 
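Returning to the cohort definition above, a minimal selection sketch follows (Python with pandas; the field names are hypothetical stand-ins for the jurisdiction-specific APC variables, and the records are invented for illustration).

```python
import pandas as pd

# Hypothetical APC-style extract; real field names differ by jurisdiction.
apc = pd.DataFrame({
    "person_id":  [1, 2, 3, 4],
    "age":        [17, 45, 67, 82],
    "admit_date": pd.to_datetime(["2015-06-01", "2011-12-30", "2016-02-14", "2018-11-03"]),
})

start, end = pd.Timestamp("2012-01-01"), pd.Timestamp("2018-12-31")

# Retrospective cohort: all hospitalised adults (>=18 years), 2012-2018,
# irrespective of condition, as described above.
cohort = apc[(apc.age >= 18) & apc.admit_date.between(start, end)]
print(cohort.person_id.tolist())   # [3, 4]
```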
Examining hospital-wide outcomes at the onset also enables a 'topdown' approach to subsequently examining conditionspecific outcomes. To facilitate this, we will examine outcomes by the 23 Major Diagnostic Categories (MDCs) into which all patient diagnoses fall, with MDCs generally corresponding to the major organ systems of the body. Patient comorbidities Patient comorbidities are identified using the Condition Category (CC) clinical classification that groups ICD codes into clinically meaningful conditions using diagnosis and procedure codes from the index admission and from any hospitalisations in the preceding 12 months. We use the CC classification because it is widely used to derive comorbidities from routinely collected hospital data. 23 While the CC model uses ICD-9-CM coding, we have developed an equivalent model based on ICD-10-AM coding for use with Australian data. 24 Study outcomes The primary study outcomes are adverse events, allcause mortality, all-cause unplanned rehospitalisations (including unplanned inpatient readmissions and emergency care encounters) and measures of healthcare costs occurring in hospital and after discharge. The study design enables short-term and long-term outcome assessment. Adverse events We define adverse events using the Classification of Hospital Acquired Diagnoses (CHADx) developed by ACSQHC using ICD-10-AM diagnosis codes. Adverse events are identified from the APC using these diagnosis codes and the condition onset flag to indicate that the adverse event occurred in hospital and was not a preexisting condition prior to hospitalisation. The CHADx taxonomy can be further condensed to form a list of 16 hospital-acquired conditions which identify the more common and serious adverse events that can be accurately measured using routinely collected data. This was developed through a comprehensive process that included literature review, clinical engagement and validation using clinical data. All-cause mortality All deaths are determined by linking APC cohort records with each jurisdiction's Registry of Deaths which record the cause and date of death including out-of-hospital deaths. Death occurring in hospital and within the Open access emergency department is recorded within the APC and the Emergency Department Data Collection (EDDC, see below), respectively. All-cause unplanned rehospitalisations All-cause rehospitalisations are defined as the composite outcome of inpatient readmissions and emergency department presentations. 6 14 Readmissions to any hospital after the index hospitalisation are measured by linking the study cohort records to subsequent records within each region's APC. Emergency care visits will be assessed by linking hospitalisation records with each region's EDDC. EDDC records all emergency department (ED) encounters using a standardised set of variables from all public and selected private hospitals with ED facilities, including source of referral, mode of presentation, triage category, primary ED diagnosis and the service referred to on separation. We only count unplanned readmissions in the outcome of rehospitalisation as planned (elective) admissions for scheduled care are a part of normal care and are less likely to be related to care quality. We removed planned admissions from the outcome using the 'Care type' and 'Acute/elective' admission status variables that identify acute care from non-acute and subacute encounters. 
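A minimal sketch of how these two outcome definitions can be operationalised on coded episode data is shown below (Python with pandas; the variable names are hypothetical stand-ins for the jurisdiction-specific APC fields, and the example codes are illustrative only).

```python
import pandas as pd

# Hypothetical variable names; each APC uses its own field names for the
# condition onset flag, care type and admission status.
episodes = pd.DataFrame({
    "encounter_id":     [10, 11, 12, 13],
    "diagnosis_code":   ["T81.4", "E11", "I26", "Z50.9"],   # ICD-10-AM
    "condition_onset":  [1, 2, 1, 2],     # 1 = condition arose during the stay
    "care_type":        ["acute", "acute", "acute", "rehabilitation"],
    "admission_status": ["emergency", "elective", "emergency", "elective"],
})

# Hospital-acquired (adverse-event) diagnoses: the onset flag shows the
# condition was not present on admission.
adverse_events = episodes[episodes.condition_onset == 1]

# Unplanned rehospitalisations: acute, non-elective encounters only; planned
# (elective) admissions and non-acute/subacute care are excluded.
unplanned = episodes[(episodes.care_type == "acute")
                     & (episodes.admission_status == "emergency")]

print(adverse_events.encounter_id.tolist())  # [10, 12]
print(unplanned.encounter_id.tolist())       # [10, 12]
```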
Since emergency care is reserved for acute presentations, all emergency care visits are regarded as unplanned. Direct costs and healthcare resource use Direct costs associated with hospital care is estimated using publicly available summary cost data from the National Hospital Cost Data Collection (NHCDC). 25 The NHCDC collates patient-level cost data from >400 public and private hospitals in Australia using standardised definitions and contains the average cost associated with hospitalisation for various conditions as defined by the Australian refined diagnostic-related group (DRG, derived from the principal diagnosis and procedures), the proportion of costs that are direct and indirect (overheads) and the components of the cost such as the average amount spent on beds, pharmacy, allied health and pathology. NHCDC also contains the average cost associated with ED encounters as defined by the urgency-related group (URG) for ED presentations. As NHCDC does not publish average DRG/URG costs from private hospitals, averages costs for these hospitals will be calculated using the average cost per DRG or URG of the public hospitals' data in the same year and the ratio of the cost weight per DRG (or URG) between private and public hospitals. Linkage of patient records within and across datasets is performed within each state or territory except where cross jurisdictional (across state) linkage is readily possible such as between New South Wales and the Australian Capital Territory, and South Australia and Northern Territory. All data linkages will be performed by designated Data Linkage Units within each jurisdiction using probabilistic matching using multiple patient identifiers (such as age, sex, date of birth and Medicare number) with reported accuracy exceeding 99%. 26 27 Once linked, de-identified patient data are released to researchers. Data from each jurisdiction are then aggregated to compose a national dataset consisting of equivalent variables from each region. All datasets and linkage components are listed in the online supplemental appendix. Analysis plan Aim 1: Estimate the national incidence of serious adverse events, deaths and unplanned rehospitalisations occurring in hospital and post-discharge. The incidence of major outcomes of hospital care (adverse events, deaths, unplanned rehospitalisations) is separately estimated by calculating the proportion of all hospitalisations that experienced at least one outcome within 90 days of their initial hospitalisation expressed as a percentage of all hospitalisations. To describe the timing of these outcomes, we will use time-to-event analysis (Kaplan-Meier method and Cox Regression) to generate unadjusted and adjusted event-free survival curves at each time point. In survival models for adverse events and deaths, time will be measured as the number of days from admission until the first occurrence of the outcome. For rehospitalisations, time is measured from discharge in patients who are discharged alive. Patients will also be censored if they do not experience the outcome of interest or reach the end of the 90 day follow-up period. Outcomes will be assessed by geographical region and adjusted for temporal change in baseline characteristics to account for changes in underlying population characteristics over time. Aim 2: Assess the direct healthcare costs associated with these adverse outcomes and determine the proportion that may be avoidable. 
The total cost associated with adverse events is defined as the composite of (1) the incremental cost associated with adverse events occurring in hospital and (2) the cost associated with one or more hospital encounters for adverse events occurring within the 90 days of the index admission. To estimate incremental cost of adverse events occurring in hospital, a generalised linear model with a log-link function and a gamma distribution 28 will be used, adjusting for differences in baseline characteristics between patients with and without an adverse event. The direct cost of post-discharge adverse events will be estimated by matching the DRG or URG associated with the episode of care to the average cost associated with the DRG or URG published in the NHCDC. For patients with more than one episode during the outcome time frame, costs will be summed to provide a total cost. All costs will be converted to current Australian dollars using health index deflators. Direct costs associated with unplanned rehospitalisations are estimated by matching the DRG or URG associated with the rehospitalisations to the average cost associated with the DRG published in the NHCDC in a similar manner to estimating costs associated with post-discharge adverse events. Costs will be assessed and compared by geographical region. Open access To estimate the proportion of readmissions that may be avoidable, we will use a classification system previously used to identify readmissions that are potentially related to hospital care from coded data. While it is challenging to capture the exact reasons for readmissions from coded or clinical data, this classification system provides a reasonable approach in determining the extent to which a readmission is preventable or avoidable. We will classify each readmission into four groups: (1) due to adverse event; (2) for the same diagnosis as the index admission; (3) potentially related to the index diagnosis and (4) other (unrelated) conditions. Group 1 identifies readmissions for hospital-acquired (iatrogenic) conditions such as pulmonary embolus and adverse drug events. This provides a conservative estimate of costs that are most avoidable. To a lesser extent, groups 2 and 3 (readmissions for same or a related condition) also indicate opportunities to reduce cost by optimising patient care, although recognising that it is difficult to distinguish from re-admissions due to disease progression. Group 4 represents cost due to unrelated readmissions, which are the least avoidable. All emergency care visits are deemed unplanned, and therefore avoidable, when estimating costs. We will then sum the total costs in each group representing the proportion of avoidable healthcare costs. Variation in costs will be assessed by geographical region. We will also explore the possibility of combining the four categories of hospital readmissions to create a robust overall index with the weighing of categories to be determined based on empirical testing, available literature and expert clinical opinion. Aim 3: To detect unwarranted institutional variation in these outcomes among hospitals accounting for case-mix differences which may suggest variation in care quality. To profile variation in patient outcomes among hospitals, adjusting for differences in case-mix, we estimate each hospital's risk-standardised outcome rate (RSOR) using a hierarchical generalised linear model (HGLM) that accounts for differences in the hospital case-mix, sample-size and clustering of patients within hospitals. 
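As a numerical sketch of this risk-standardisation step, the fragment below (Python with NumPy; all numbers are hypothetical) assumes that per-patient predicted and expected event probabilities have already been obtained from the hierarchical model, in the sense described in the next paragraphs, and computes one hospital's RSOR together with a percentile-bootstrap interval.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-patient probabilities for one hospital, taken from a fitted
# hierarchical model: 'predicted' uses the hospital-specific intercept,
# 'expected' the national-average intercept.
predicted = rng.uniform(0.05, 0.20, size=300)
expected  = rng.uniform(0.05, 0.15, size=300)
national_rate = 0.10   # crude national outcome rate

def rsor(pred, exp, rate):
    """Risk-standardised outcome rate: (sum predicted / sum expected) * crude national rate."""
    return pred.sum() / exp.sum() * rate

point = rsor(predicted, expected, national_rate)

# Percentile bootstrap (1000 replications) over the hospital's patients.
replicates = []
for _ in range(1000):
    idx = rng.integers(0, predicted.size, predicted.size)
    replicates.append(rsor(predicted[idx], expected[idx], national_rate))
low, high = np.percentile(replicates, [2.5, 97.5])

# The hospital is flagged as a statistical outlier if the whole interval lies
# above or below the national average rate.
print(round(point, 3), (round(low, 3), round(high, 3)))
```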
For hospital-wide analyses, there are published riskadjustment models for different outcomes (eg, deaths, readmissions) for different clinical conditions that consider the complexity and diversity of procedures undertaken among hospitals. Such models do not typically estimate risk-standardised rates for one hospitalwide cohort as one model including all admissions would not account for differences in risk variables across different conditions. Instead, the risk models are based on a composite risk adjustment model that includes several subgroups of patients. For example, Horwitz et al 7 described the development and use of methods to profile hospital-wide 30-day unplanned readmissions. Here, all admissions are grouped into five cohorts made up of conditions or procedures with relatively similar readmission and post-discharge mortality rates, that are likely to be cared for by similar teams of clinicians and that would generate an adequate sample size for most hospitals. Risk adjustment models are developed within each cohort and a weighted score, based on the distribution among cohorts, is used to estimate an overall model. Similar composite risk-adjustment models are currently being developed for 30-day mortality. 29 Our goal is to develop similar approaches to risk-adjustment for each outcome using Australian data. The hospital specific RSOR is calculated as the ratio of predicted hospital outcomes over expected hospital outcome, multiplied by the crude national average outcome rate. The predicted number of outcomes is calculated based on the hospital's case-mix and the estimated hospital-specific intercept term. The expected number of outcomes is calculated based on the hospital's case-mix and national average intercept. The ratio is then multiplied for each hospital by the overall crude outcome rate for ease of interpretation. Bootstrapping with 1000 replications will be used to empirically construct a 95% CI estimate for each hospital's RSOR using the percentile method. A hospital is deemed a statistical outlier if the hospital's entire 95% interval estimate is above or below the national average. This approach for estimation of the RSOR ensures that the observed variation among hospitals is not due to underlying differences in case-mix or procedure-mix and is consistent with methods we have previously used for profiling variation in outcomes among hospitals. 15 30 All hospital-level analyses will be limited to unique hospitals with at least 25 hospitalisations during the study period to enable a robust estimate of the hospital outcome rate. Aim 4: Test machine learning methods to better predict adverse outcomes of hospital care to inform service improvement initiatives. This aim will test the accuracy of machine learning methods, compared with conventional regression models, for predicting adverse outcomes such as death and readmissions which may be useful in developing automated methods that identify high-risk individuals and facilitate service improvement. Machine learning, a subfield of artificial intelligence, refers to an array of data-driven automated analytical techniques that rely on sophisticated pattern-recognition methods, which can 'learn' from large-scale datasets. Machine learning is widely used in industry for rapid automated analysis of massive volumes of routinely collected data such as that collected in hospitals in the process of delivering care. 
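A minimal sketch of the planned comparison, five-fold cross-validation of a machine-learning classifier against a conventional regression model, is shown below (Python with scikit-learn on synthetic data; the library, the models and the synthetic features are assumptions of this illustration, not part of the protocol).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

# Synthetic stand-in for coded admission features and a binary outcome such as
# 30-day readmission; the study would use the linked APC variables instead.
X, y = make_classification(n_samples=2000, n_features=30, random_state=0)

# Five folds: in each fold 80% of the samples train the model and the held-out
# 20% are used for testing, mirroring the cross-validation design described here.
cv = KFold(n_splits=5, shuffle=True, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("gradient boosting", GradientBoostingClassifier(random_state=0))]:
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f}")
```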
While machine learning methods initially failed to suggest superiority over conventional statistical methods, more recent analyses suggest significantly better ability to predict outcomes from administrative datasets. 31 32 To achieve this goal, we will develop machine learning methods including deep learning that can automatically extract relevant features from data collected from hospitals. We will train models using a five-fold cross validation experiment, where for each of the folds, a random set containing 80% of the dataset samples will be used for training and the remaining 20% will then be used for testing the model. All models will be compared with Aim 5: To prospectively evaluate the temporal change in each of the outcomes of hospital care by periodically re-evaluating these outcomes until 2030. This prospective evaluation will assess how outcomes change with time at both the patient and hospital level. This aim will be achieved by prospectively extending the cohort every 4 years from 2019 until 2030 and comparing the longitudinal trend in each outcome using standardised methods. The Cochran-Armitage trend test or HGLM will be used to assess for statistically significant trends over time. Finally, the study results will be reported in accordance with the Strengthening the Reporting of Observational Studies in Epidemiology statement. 33 Supplementary analyses In most regions, de-identified data are provided for analysis including de-identified hospital labels where hospital names and unique identifiers are replaced with a dummy identifier so that hospital-level analyses are possible, but the hospital cannot be identified. In New South Wales, Queensland and Tasmania, actual hospital identifiers are provided. In these regions, we will seek to further examine the association of hospital-specific characteristics with the outcomes using publicly available data from the National Public Hospital Establishments Database (NPHED) 34 and the MyHospitals 35 reporting portal. Using standardised data definitions, NPHED collects data on each hospital's revenue, staffing levels, expenditure, the number of available beds for admitted patients, geographical location, specialised service indicators and the type of non-admitted patient care. 34 NPHED data are only available for public hospitals. The MyHospitals national reporting platform allows users to explore information about hospitals and publicly reports a selected number of hospital-specific performance process and outcome indicators such as healthcare-associated infection rates, predominately from public hospitals. 35 New South Wales, Queensland and Tasmania include 54% of the Australian population and New South Wales and Queensland encompass 52.8% of all Australian hospital beds. Hospital bed numbers in Tasmania are not publicly released to protect the confidentiality of the small number of private hospitals in Tasmania. All HRECs granted a waiver of informed consent for the use of de-identified patient data. Patient and public involvement Privacy and data security De-identified study data will be securely stored at the Basil Hetzel Institute for Translational Research. A copy of the study data will be stored at the Secure Unified Research Environment (SURE). SURE is a highly secure, remote access data storage and analytic facility specifically designed for storage and analysis of large volumes of healthcare data. 
36 It uses extensive measures to ensure data security including data encryption, secure firewalls, password protection, logging of access to files and a 'curated gateway' preventing data from being copied or moved out of the secure environment. Use of SURE facilitates secure access to study data for study investigators located throughout Australia. Dissemination and translation The study team involves stakeholder representatives and the study investigators are in advisory roles in safety and quality organisations such as the Quality and Safety Committee of the Royal Australasian College of Physicians, Australian Patient Safety Foundation and ACSQHC. As the study progresses, the investigators will seek further engagement with state and federal stakeholders and professional bodies to facilitate dissemination and translation. We will also establish an independent Study Advisory Panel which will include stakeholders' involvement. This panel will meet two times a year and provide independent study advice and guidance. This provides a format for continued engagement of stakeholders as the study progresses, gains buy-in and creates a mechanism for the investigators to address stakeholder priorities and concerns. The findings will also be disseminated through scientific publications and briefing documents. Plain language summaries and infographics will also be used to promote research outcomes to the wider community via media releases and social media. DISCUSSION In Australia, like other countries with comparable highly developed healthcare systems, there are concerns about the safety and quality of, and potential variations in, hospital care. Harms associated with hospital care are poorly reported, partly because of the inability to capture adverse outcomes after discharge. Consequently, there are no national studies of post-discharge adverse events, deaths or readmissions. The avoidable adverse outcomes and associated healthcare costs are uncertain and Open access whether these vary among hospitals, suggesting variation in care quality, is unknown. The SAFER Hospitals study purports to answer these questions and benefit the Australian community by (1) informing and prioritising target conditions for large-scale quality improvement efforts; (2) developing and supporting the implementation of standardised methods for hospitals to routinely measure these outcomes and (3) facilitating policy changes such as public reporting efforts and innovative funding models to incentivise safer and more effective care. National studies of adverse outcomes are largely limited to North America and there is considerable interest in the generalisability of these outcomes to other health systems. Therefore, the outcomes of this study are likely to have broad relevance to the international literature. This study has important limitations that need consideration. The study data are linked only within each state and territory, except for cross-jurisdictional linkage between the state of New South Wales and Australian Capital Territory, and between South Australia and Northern Territory. While this may limit the ability to track outcomes for patients who are transferred across state borders, prior studies show that such transfers involve <3% of all patients and hence the likely impact on our outcome measurements will be small. 37 Moreover, we limit our analyses to patients residents within each jurisdiction and exclude out-of-state residents to further minimise potential error. 
Our study is based on routinely collected administrative data which is less granular and potentially less accurate than data collected specifically for clinical research or quality registries. While well-developed standards for coding of clinical diagnoses exist, retrospective coding of diagnoses depends on the diagnoses being recorded in the patient's medical record, rather than based on the prospective application of external criteria, as would occur in a clinical trial. Routinely collected hospitalisation data also do not capture in-hospital medication use or the results of investigations such as pathology results. Nevertheless, validation studies have shown relatively good accuracy of diagnoses and procedures coding within administrative data compared with medical records. 21 Linking routinely collected data is also the most accurate method for tracking post-discharge outcomes of care. It is not feasible to collect detailed individual patient data at a national scale and thus an observational study using routinely collected data is the suitable data source for this purpose. South Australia, Tasmania and the Northern Territory do not release private hospital data to researchers although the impact on the study is likely to be small as most acute care in Australia is provided by public hospitals. These states only account for about 10% of the overall population. Furthermore, private hospitals in these states encompass a small proportion of overall hospital beds. For example, those in South Australia form 2% of all hospital beds in Australia and the number of private hospital beds in Northern Territory and Tasmania is not publicly released to protect the confidentiality of the small number of private hospitals in these regions. 19 Socioeconomic status may influence patient outcomes although socioeconomic characteristics are not routinely collected in the hospital Admitted Patient Data Collections.
On the twisted factorization of the $T$-transform

The amalgamated $T$-transform of a non-commutative distribution was introduced by K.~Dykema. It provides a fundamental tool for computing distributions of random variables in Voiculescu's free probability theory. The $T$-transform factorizes in a rather non-trivial way over a product of free random variables. In this article, we present a simple graphical proof of this property, followed by a more conceptual one, using the abstract setting of an operad with multiplication.

Introduction

In reference [4], K. Dykema introduced and studied two central objects in free probability theory, i.e., the operator-valued $R$-transform, more precisely, the unsymmetrised $R$-transform, as well as the (interrelated) operator-valued unsymmetrised $S$- and $T$-transforms. Those transforms play a fundamental role in both scalar- as well as operator-valued free probability theory [11,12], as they allow for the effective (algorithmic) calculation of the distribution of a sum, respectively a product, of free random variables. In the scalar-valued case, they can be traced back to the seminal works by Voiculescu [15,16]. Here, the $R$- and $T$-transforms with respect to a random variable $a$ in a non-commutative probability space $(A, \varphi)$ are formal power series in one variable, $R_a(z), T_a(z) \in K[[z]]$. If $a$ and $b$ are two free random variables in $A$, with $\varphi(a) = \varphi(b) = 1$, then these transforms are linear
(1) $R_{a+b}(z) = R_a(z) + R_b(z)$
respectively multiplicative
(2) $T_{ab}(z) = T_a(z) \cdot T_b(z)$.
The product on the right-hand side of (2) is the Cauchy product, defined for two series $f(z) = \sum_{n \geq 0} f_n z^n$ and $g(z) = \sum_{n \geq 0} g_n z^n$ in $K[[z]]$ by
$(f \cdot g)(z) = \sum_{n \geq 0} \Big( \sum_{k=0}^{n} f_k\, g_{n-k} \Big) z^n.$
The $S$-transform is defined as the inverse (with respect to the Cauchy product) of the $T$-transform, $S_a(z) = T_a(z)^{-1}$. All three transforms are related through the distribution series
(3) $\Phi_a(z) = \sum_{n \geq 1} \varphi(a^n)\, z^{n-1} \in K[[z]]$, $a \in A$,
namely by the relations (4) given in [4]. The inverses on the left-hand sides of these relations are computed with respect to composition of formal power series, here denoted by $\circ$ and defined on $zK[[z]]$ by the well-known formula
$(f \circ g)(z) = \sum_{n \geq 1} \Big( \sum_{\substack{k,\, n_1, \ldots, n_k \geq 1 \\ n_1 + \cdots + n_k = n}} f_k\, g_{n_1} \cdots g_{n_k} \Big) z^n.$
The so-called free additive and multiplicative convolution problems have been shown by Voiculescu to admit solutions by constructing canonical random variables. Dykema verified in [4] that in the operator-valued case these results admit counterparts, where so-called multilinear function series play the role of formal power series. More specifically, let $(A, \varphi, B)$ be an operator-valued probability space, that is, $A$ is a unital algebra with unital subalgebra $B \subset A$ and conditional expectation $\varphi : A \to B$, $\varphi(b) = b$ and $\varphi(b_1 a b_2) = b_1 \varphi(a) b_2$ for all $b, b_1, b_2 \in B$ and $a \in A$ [11]. Note that further below (Definition 1.2) we will work with a slightly more general notion of operator-valued probability space. Dykema introduced the notion of multilinear function series, that is, the data of a sequence $(\alpha_n : B^{\otimes n} \to B)_{n \geq 0} \in \mathrm{Mult}[[B]]$ of multilinear maps on the algebra $B$. We adopt the convention that $B^{\otimes 0} = \mathbb{C}$. This implies that $\alpha_0$ can be considered as an element of $B$. Let $a \in A$ be a random variable with $\varphi(a) = 1_B\ (= 1_A)$. The multilinear function series $\Phi_a = (\Phi_{a,0}, \Phi_{a,1}, \ldots) \in \mathrm{Mult}[[B]]$ that replaces the scalar-valued distribution series (3) is defined through $\Phi_{a,0} = \varphi(a)$ and, for $n > 0$,
(6) $\Phi_{a,n}(b_1, \ldots, b_n) = \varphi(a b_1 a b_2 \cdots a b_n a)$.
Given two multilinear function series $\alpha, \beta \in \mathrm{Mult}[[B]]$, their formal product (7) is defined componentwise by a sum indexed by tuples $(p_1, \ldots, p_k)$ with partial sums $q_j := p_1 + \cdots + p_{j-1}$; we refer to [4] for the explicit expression.
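Both scalar-valued operations recalled above are purely coefficientwise, so a small computational sketch may help fix ideas. The plain-Python fragment below (illustrative only, with an arbitrarily chosen truncation order) implements the Cauchy product and the composition of truncated power series and checks associativity of composition up to the truncation order.

```python
# Coefficients f = [f0, f1, f2, ...] represent f(z) = sum_n f_n z^n, truncated
# at a fixed order N. Composition assumes g0 = 0, i.e. g lies in zK[[z]].

N = 6  # truncation order

def cauchy(f, g):
    """Cauchy product: (f*g)_n = sum_{k=0}^{n} f_k g_{n-k}."""
    return [sum(f[k] * g[n - k] for k in range(n + 1)) for n in range(N)]

def compose(f, g):
    """(f o g)_n = sum over k >= 1 and n_1+...+n_k = n of f_k g_{n_1}...g_{n_k},
    computed by accumulating powers of g."""
    assert g[0] == 0
    result = [f[0]] + [0] * (N - 1)
    power = [1] + [0] * (N - 1)          # g^0
    for k in range(1, N):
        power = cauchy(power, g)          # g^k
        for n in range(N):
            result[n] += f[k] * power[n]
    return result

f = [0, 1, 2, 3, 4, 5]
g = [0, 1, 1, 0, 0, 0]
h = [0, 2, 0, 1, 0, 0]
# Associativity of composition, coefficientwise up to order N-1.
print(compose(compose(f, g), h) == compose(f, compose(g, h)))  # True
```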
Notice that composition is linear only in the left argument. The unit for this product is the multilinear function series (8) I = (0, id B , 0, . . .). Following an analogue approach as in (4) using Φ a defined by (6), one obtains operator-valued counterparts of the S-, R-and T -transforms that are now multilinear function series. By constructing random variables with prescribed T -transform, Dykema showed in [4] that the multiplicativity of the T -transform (2) in the scalar-valued case generalises to the following so-called twisted factorization in the operator-valued case: Note the appearance of the product defined in (7). R. Speicher showed in [14] that non-crossing set partitions underlie the combinatorics of moment-cumulant relations. To relate free cumulants to the coefficients of the T -transform, the same role is played by so-called non-crossing linked set partitions. In particular, Möbius inversion on a certain poset NCL (1) D of connected non-crossing linked set partitions relates coefficients of the T -transform to free cumulants, (10) κ n (ab 1 , . . . , ab n−1 , a) = π∈NCL (1) For small values of n, one has Here, t a (1 n ) = t a (n). We refer to [4] for the precise definition of the multilinear map t a (π). Let us just mention that t a (π) factorises in a certain way over the blocks of a non-crossing linked partition. In this article, this factorization is expressed in terms of a certain morphism, whose range comprises the values t a (π), π ∈ NCL (1) D , and which is compatible with an operadic composition on non-crossing linked partitions. The above relations (one for each integer n ≥ 1) can be cast into a fixed point equation on multilinear function series: The relations displayed in (10) are equivalent to where I is the identity for composition (8) In the first part of this work we will give a different (and shorter) proof of equation (9) which is operadic in nature. It does not involve the construction of canonical random variables and therefore yields to a more conceptual understanding of equation (9) as resulting from distributivity of the composition product • over the product . This distributivity together with the specific form of the composition product • are constitutive to the notion of Gerstenhaber algebra. Examples of such type of algebras are obtained by considering operads equipped with a distinguished operator of arity two that we call multiplication (for reasons explained below). This multiplication endows the set of operators with a monoidal law, over which the operadic product distributes. The operad with multiplication used in our work is the endomorphism operad of B spanned by multilinear maps over the algebra B. The multiplication coincides with the algebra product on B. Remark 1.1. It is worth noticing that these algebras admit a homotopical version [7]. In this context, the operadic multiplication can be used to define a differential on the underlying collection of the operad. This construction yields for example the Hochschild cohomology of B, if one starts with the endomorphism operad of B. In the second part of this work, we take a leap in abstraction and introduce in the setting of an operad P with multiplication m a free product on the set G inv of formal series of operators -with non-zero constant coefficient. The latter is supporting a monoidal structure stemming from the multiplication m. We define in this context the T -transform. 
This permits us to give another, in some sense more fundamental proof of the twisted factorization of the abstract T -transform. In particular, we highlight the role of left and right translations by the identity of the operad. These maps provide two different injections of the set G inv into the diffeomorphism group G of the operad P (certain series of operators on P) denoted ρ and λ. These translations are not group morphisms. However, they are injective and their inverses are cocycles with respect to a left action of the diffeomorphism group G on G inv . This algebraic setup is well understood. See for example [6]. We consider in addition a right action which is the conjugation action of the group (G inv , ) restricted to G. This permits to relate the translations ρ to λ. Ultimately, the twisted factorization of the abstract T -transform is implied by (1) Distributivity of the left action of the group G over the product , (2) The cocycle property of ρ with respect to the right action , (3) Compatibility of the right action with the product of G. Outline. After the introduction, the article is divided into two parts. In the first part (Sections 2 and 3) we provide the reader with the necessary background on operads and brace algebras. We then define the two main operads for the present work, which are the operad of non-crossing partitions (see Definition 2.5) and the operad of non-crossing linked partitions (see Definition 2.9). In Subsection 2.3, we define operads with multiplication as well as brace algebras. In Section 3, we first address the problem of computing operator-valued free cumulants with products as entries and give a fixed point equation for computing the free cumulants of the product of two operator-valued free random variables, see Subsection 2.3. To the extend of our knowledge, this fixed point equation is new. Given this fixed point equation, we deduce a short proof of Theorem 3.3, pointing out the relation in Proposition 3.2 as the key property of the concatenation and composition products for the twisted factorization to hold. In Subsection 3.3, we define the free product and the T -transform in this abstract setting. Basic notions and notations. We recall the definition of an operator-valued noncommutative probability space [11]. 4. The state, φ : A → B, is a B-B bimodule morphism 2 which is positive: In the present work, functionals of random variables are restricted to polynomial ones. As a consequence, we can drop all topological assumptions in the above definition. In particular, A and B are only assumed to be involutive algebras. Operads and brace algebras In this section, we recall the definitions of operads and brace algebras. The reader is directed to the monograph [10] for a detailed introduction. 2.1. Algebraic planar operads. Operads are models for composing operators with multiple inputs and a single output. They have been introduced by Boardman and Vogt in the 1970's. There exist multiple equivalent definitions of an operad. In this section, we adopt a rather algebraic point of view by defining an operad first as a monoid in a certain monoidal category that we introduce. It should be understood that what we call an operad in this article is also called a planar operad; the set of operators we consider are vector spaces and are not assumed to be endowed with an action of the symmetric group. Operators are organized in a collection C, that is a sequence of vector spaces (C(n)) n≥1 . The vector space C(n) comprises all operators with n inputs. 
We assume this collection to be reduced which means that all operators have at least one input. The number of inputs of an operator p is denoted |p| and is called its arity. A morphism between two collections C and D is a sequence of linear morphisms (φ(n)) n≥1 with φ(n) : C(n) → D(n), n ≥ 1. We denote by Coll the category of all collections. We remark that it is an abelian category in an obvious way, the sum C ⊕ D of two collections is the collection defined by (11) (C ⊕ D)(n) = C(n) ⊕ D(n). The category Coll can be endowed with a monoidal structure, where the tensor product, here denoted , is the 2-functor from Coll × Coll to Coll defined by: The unit element for this product on Coll is the collection denoted by C such that C (n) = δ n=1 C, this means (12) C C C and C C C. Given two collections C and D, the set Hom Coll (C, D) of collection homomorphisms from C to D is a vector space. However, the tensor product of two collection morphisms is linear only on its left argument, in particular f λg = λf g, for any f, g ∈ Hom Coll (C, D). It is common to use the symbol • to denote the composition γ C , γ C p ⊗ q 1 ⊗ · · · ⊗ q |p| = p • (q 1 ⊗· · ·⊗q |p| ). In the remaining part of this article we follow this convention. More practically, one can exploit the unital and associativity constraints to associate partial compositions to any operad (C, γ C , η C ), which compose an operator p in C with another operator q at a certain input of p: Associativity of γ C translates as follows for the partial compositions: In the next sections, we define two operads on non-crossing set partitions and on non-crossing linked set partitions. These two operads are in fact set-operads that we regard as operads (in the category of vector spaces) with preferred basis that behave well under the operadic products. . Recall that an internal vertex of a tree is a vertex with at least one input. The degree of a decoration of an internal vertex matches the number of its inputs. In a linear setting, that is if C is a collection of vector spaces, we identify a tree τ having a vertex decorated by a sum a + b of elements a, b ∈ C with the sum of two trees, each obtained by replacing the decoration a + b with a or b. A leaf is a vertex of a tree with no inputs. The number of leaves of a tree is the number of its inputs. The collection C corresponds to decorated corollas, trees with only one internal vertex. Composition of a tree τ with another tree τ at the i th input of τ (the leaves of τ are ordered from the leftmost to the rightmost leaf) is obtained by grafting the root of τ to the leaf of τ . The identity is the tree with a single vertex. 2. Given a vector space V , we denote by End V the operad whose underlying collection consists of all multilinear maps on V , is an unital algebra for the concatenation product, denoted ·, and with unit ∅. The degree |w| of an element w = w 1 · · · w n ∈ A ⊗n is |w| = n + 1 and |∅| = 1. We turnT (A) into the operad W by defining the operadic composition γ W for a word w = w 1 · · · w n : The definition of a planar operad uses the monoidal structure of the category Vect C of all vector spaces. Replacing the monoidal category (Vect C ) by another yields the notion of an operad in a symmetric monoidal category. For example, one can replace Vect C by the category Set of sets with bijections. It is a monoidal category for the cartesian product of sets. A monoid in the category of all set collections is called a set operad. 
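Before turning to the operads of partitions used in this article, here is a toy sketch (Python, illustrative only) in the spirit of the endomorphism operad $\mathrm{End}_V$ mentioned above: operators with several inputs are modelled as callables with a recorded arity, and a helper implements the partial composition $\circ_i$ recalled in the previous paragraphs.

```python
# Multilinear maps on V are modelled as Python callables with a stored arity.

def make_op(fn, arity):
    fn.arity = arity
    return fn

def partial_compose(p, q, i):
    """Return p o_i q: plug the output of q into the i-th input of p (1-indexed),
    so the result has arity |p| + |q| - 1."""
    def composed(*args):
        left = args[: i - 1]
        middle = q(*args[i - 1: i - 1 + q.arity])
        right = args[i - 1 + q.arity:]
        return p(*left, middle, *right)
    return make_op(composed, p.arity + q.arity - 1)

# Example on V = R with the bilinear multiplication m(x, y) = x * y.
m = make_op(lambda x, y: x * y, 2)
right = partial_compose(m, m, 2)   # x, y, z -> x * (y * z)
left  = partial_compose(m, m, 1)   # x, y, z -> (x * y) * z
print(right(2, 3, 4), left(2, 3, 4))           # 24 24
print(right.arity)                             # 3
# m o_1 m = m o_2 m here, the defining property of a multiplication (arity-2
# operator) in an operad, cf. Definition 2.17 below.
```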
We recall here two different tensor products on the category of operads, that will be briefly used in the forthcoming sections. Definition 2.3 (Hadamard product). Let P = (P, γ P ) and Q = (Q, γ Q ) be two operads. The Hadamard product of P and Q is the operad P ⊗ H Q = (P ⊗ H Q, γ P⊗ H Q ) defined by Definition 2.4 (Free product). Let P = (P, γ P ) and Q = (Q, γ Q ) be two operads. The free product of P and Q is the operad P Q obtained by quotiening the free operad on the collection P ⊕ Q by relations in the operad P as well as relations in the operad Q, but no other. It may happen that operators which we want to compose have different input and output ranges. A model for composing such operators is called a coloured operad. Collections are replaced by coloured collections, each vector space C(n) of operators with n inputs is split into a direct sum of spaces C c c 1 ,...,cn , c 1 , . . . , c n , c ∈ C comprising all operators with source spaces labeled c 1 , . . . , c n and target labeled c for some set of colors C. Formal composition of collections (the monoidal product •) admits a coloured version, for which operators are formally composed provided that colorations of inputs and outputs match. Operads of non-crossing partitions. 2.2.1. The gap insertion operad. In this section we recall the definition of the gap insertion operad on non-crossing partitions first introduced in [5]. We define a new operad on noncrossing linked partitions. Note that we adopt a more general definition of the latter, which are in fact coverings (blocks may intersect). To fix notations, we denote by NC(n) the set of all non-crossing partitions of the interval 1, . . . , n (with its natural order). Recall that π ∈ NC(n) if it is a partition of 1, . . . , n and no blocks of π cross, which means that for any sequence of integers a < b < c < d in 1, . . . , n , one has for two blocks V, W ∈ π that The operadic view on a partition π ∈ NC(n) is that of an operator with n + 1 inputs (|π| = n + 1). An input corresponds to a gap between two consecutive elements of the partitioned set, including the front gap before the element 1 and the back gap after the element n. Hence, we may therefore insert n + 1 non-crossing partitions into a non-crossing partition by stuffing them into the gaps of the latter. In particular, we have N C(1) = C{{∅}}. The unique partition of the empty set acts as the operad unit. Let π be a non-crossing partition and (α 1 , . . . , α |π| ) a sequence of non-crossing partitions, we define whereπ is the non-crossing partition of {|π 1 |, |α 1 | + |α 2 |, . . . , |α 1 | + · · · + |α |π| |} induced by π. The gap-insertion operad of non-crossing partitions admits the following presentation in terms of generators and relations. [5]). For any n ≥ 1, we put 1 n+1 = { 1, n }. Then the operad (N C, γ N C ) is generated by the elements 1 n , n ≥ 1 with the relation: Lemma 2.7 (Prop. 3.1.4 in Recall from [8] that the distribution of a random variable a ∈ A in an operator-valued probability space (A, φ, B) yields an operadic morphismΦ a on the gap-insertion operad with values in the operad of endomorphisms End B of B, prescribed by The free cumulants {κ n (a)} n>0 , a ∈ A, correspond to another operadic morphism 2.2.2. Nesting-or-linking operad. We now give the definition of a so-called non-crossing linked partition. As said, our definition is more general than what is usually given in the literature. Definition 2.8 (Non-crossing linked partitions). Let n be a positive integer. 
A noncrossing linked (ncl) partition is a collection π of subsets (blocks) of 1, n such that: For n ≥ 1, we denote by NCL(n) the set of non-crossing linked partitions of 1, n . We refer the reader to Figure 2 for examples of non-crossing linked partitions. Two blocks of a ncl partition are allowed to meet at their minimal elements, this is the main difference between our definition of ncl partitions and the one given in [4]. It allows us to define a natural operadic structure on ncl partitions. We denote NCL D the subset of all non-crossing linked partitions with no pairs of blocks intersecting at their minimal elements. Definition 2.9 (Nesting-or-linking operad). Define the degree |π| of a ncl partition by |π| = n if π ∈ NCL(n). We denote by N CL(n) the vector span of NCL(n), n ≥ 1. The operadic composition γ N CL : N CL N CL → N CL is defined as follows. Given a ncl partition α in NCL(n) and β 1 , . . . , β n a sequence of ncl partitions, we define: whereα is the ncl partition of the set {1, 1 + |β 1 |, |β 1 | + |β 2 | + 1, . . . , |β 1 | + · · · + |β n−1 | + 1} induced by α. We denote by | | | the unique element in NCL (1), which plays the role of the unit for γ N CL . To help understanding composition of ncl partitions in N CL, we introduce orders on the blocks of a ncl partition. Pick a ncl partition π. Two blocks of π can be nested or linked. The former refers to the case where a block V of π is contained disjointly in the convex hull, Conv(W ), of another block W . We write W ← V in that case, The transitive closure of this elementary relation on π yields a partial order on π which is denoted ←. Linking refers to the case for which the minimum of a block V is contained in another block W . We distinguish two cases, min(V ) = min(W ) and V is contained in the convex hull Again, taking the transitive closure of this elementary relation yields an order on π, that we denote . Definition-Proposition 2.10 (Nesting or Linking order ⇐). Pick π a non-crossing linked partition. The Nesting-or-linking order ⇐ on π is given by Proof. It is clear that ⇐ is transitive and reflexive. Pick two blocks V and W with V ⇐ W and W ⇐ V . The non-trivial cases are the following ones In the first case, from V ← W , the convex hull of V contains the convex hull of W and the two blocks are disjoint or equal. Now, W V implies either Conv(V ) ⊂ Conv(W ) or min(V ) = max(W ) or Conv(V ) ∩ Conv(W ) = ∅. The latter alternative can not hold since Conv(V ) ⊂ Conv(W ). If min(V ) = max(W ) the two blocks are not disjoint and are therefore Assume that τ ⇐ (π) contains a cycle and pick one cycle c with minimal length. If the cycle c has length two, there exists two blocks V and W , V = W , of π with V ⇐ W and W ⇐ V . One can assume that V W and W ← V . In that case, V is included in the convex hull of W , V ∩ W = ∅. At the same time V W implies that min(W ) ∈ V . Both can not hold and c has a length greater than 2. Since a Hasse diagram can not contain a cycle of length three, c has length greater than three. Assume that c is not oriented. The cycle c contains a triple of distinct blocks U, V, W such that U ⇐ V ⇒ W . Without loss of generality, we can suppose that U V and V → W . In particular, V is included in the convex hull of W , V ∩ W = ∅ and min(V ) ∈ U . Since π is non-crossing, this implies that U is included in the convex hull of W . Either min(U ) ∈ W and W U , either min(U ) ∈ W and W ← U . In all cases, W ⇐ U . This entails that τ ⇐ (π) contains a cycle of length 3, which is not possible. 
Assume that c is oriented, c = (c 1 , . . . , c n ), c i a block of π, c i = c j , 1 ≤ i, j ≤ n, c i ⇒ c j , c n ⇒ c 1 . This is a general fact that a Hasse diagram of a poset has no oriented cycles, but let us recall the argument for the Hasse diagram of the poset of non-crossing linked partitions. Notice that for all i, j ∈ [n], it holds that Conv(c i j ) ∩ Conv(c i k ) = ∅. Either Conv(c i ) ⊂ Conv(c i+1 ) for 1 ≤ i ≤ n and in that case c 1 = · · · c n in contradiction with our hypothesis, either c i 0 c i 0 +1 and min(c i 0 ) = max(c i 0 +1 ). Choose i 0 the smallest integer in [n] satisfying this property. Hence, This leads to a contradiction. As it will be clear after the proofs of the two propositions below, the Hasse diagram of ⇐ is reminiscent of a tree monomial representing a ncl partition in the operad N CL. In [4], Dykema introduced two "projections" from the subset of non-crossing linked partitions NCL D to non-crossing partitions to define a partial order on NCL D . We define one of these projections in our setting. Pick a ncl partition π. A connected component of π is a subset of blocks that once merged together form a block ofπ. Proposition 2.14 (Connected non-crossing linked partitions). Let n ≥ 1 be an integer and define the following subset of NCL(n) of connected ncl partitions: Denote by N CL (1) the collection of connected ncl partitions. Then, the operadic composition γ N CL restricts to N CL (1) . In addition, (N CL (1) , γ N CL ) is isomorphic to the free operad on the collection I of single block non-crossing linked partitions. Proof. Let α and β be two connected ncl partitions and pick an integer 1 ≤ i ≤ |α|. From the very definition of the operadic composition on N CL and as illustrated in Figure 3, the block of α containing 1 intersects with the block of β containing i in the ncl partition β • i α. Hence, β • i α is connected if α and β are. Then we construct a tree τ (α) which is the Hasse diagram of augmented with leaves in order to interpret it as a tree monomial on (one block) ncl partitions. A vertex of τ (α) corresponds to a block V of α and has |V | incoming edges. We connect the output of a vertex V to an input of another block W ∈ π if V W and if there is no other block U such that V U W . Since π is connected, there an unique minimal block for in π, the root of τ (α). Denote by F the free operad on the collection of one block ncl partitions and p the canonical projection p : F → N CL. Then τ (α) ∈ F and, clearly, p(τ (α)) = α. We show next that any tree τ in F such that p(τ ) = α is equal to τ → (α). We prove this fact by induction on the number of block of a ncl partition π. This is obvious if α has only one block. Assume that the result holds for ncl partitions with at most N blocks and pick a ncl partition α with N + 1 blocks and τ ∈ F such that p(τ ) = α. Then τ has N + 1 internal nodes. Write τ = V •(τ 1 , . . . , τ |V | ). Notice that V W for any block W in p(τ i ), 1 ≤ i ≤ |V |. Thus V is the minimal block of α for the order . We end the proof by applying the inductive hypothesis to the trees τ i and ncl partitions p(τ i ). Denote by I the collection of one block ncl partitions and by II the collection of ncl partitions defined by We denote by θ n the element of II n . Proposition 2.15. The operad N CL admits the following presentation Proof. Denote by F the free operad on the collection I ∪ II. 
The quotient of the free operad F by the relations (18) is isomorphic to the collectionF of trees with I ∪ II decorated internal vertices meeting the following constraint. If v is an internal vertex of such a tree τ decorated with a ncl partition in the set II then its leftmost input is a leaf of τ . We show then that any ncl partition can uniquely be written as a monomial inF. The proof is done by induction on the number of blocks of a ncl partition. First, the result is trivial for one block ncl partitions. We assume the result to hold for any ncl partition with at most N blocks and pick a ncl partition π ∈ NCL(p), p ≥ 2 with N + 1 blocks. Assume first there exists a tree τ ∈F such that p(τ ) = π and write τ = V • (τ 1 , τ 2 , . . . , τ |V | ), with V a ncl partition in I II. Follow two cases, (1) If V ∈ II, then τ 1 is the root tree and {1} ∈ π. In that case, V is equal to θ n where n is the cardinal of the block {2 < i 2 < · · · < i n − 1} containing 2. If we let π j be the restriction of π to the interval i j + 1, i j+1 − 1 with the convention that π j = {1} if i j + 1 = i j+1 and i n = p then p(τ j ) = π j . The proof follows by applying the induction hypothesis to the ncl partitions π j . (2) If V ∈ I then V is equal to the block that contain 1. The proof follows using the same line of arguments exposed in the previous case. To construct a tree monomial on ncl partitions in I ∪ II representing a ncl partition π, we start from the Hasse diagram τ ⇐ (π) of π for the order ⇐. First, we augment τ ⇐ with as many leaves as needed for the degree of each corolla in τ ⇐ to match the degree of the block of π it is decorated with. We place the additional leaves so that if W ∈ π meets V at its i th element, the corolla W is connected to the i th input of the corolla V . Do the same if W is nested in V : if W is contained between the i th element and the (i + 1) th element, the corolla W is connected to i th output of the corolla V . From this process results a forest of blocks of π. Then, follow this rules. Firstly, (1) If {1} ∈ π, erase the corolla representing this block and decorate the corolla representing the block of π containing 2 by θ n where n is degree of this block minus one. (2) If {1} ∈ π, decorate the corolla representing the block containing one with the corresponding block in I. Secondly, if V W decorate the corolla representing W with the corresponding element in I and if V ← W , decorate the corresponding corolla with θ |W |+1 and a leftmost leaf. Finally, connect the root of each trees to the rightmost leaf of the previous one (if the forest is read from left to right). Corollary 1. The morphism of collections (19) j : 1 → I 1 n → θ n extends to an operadic morphism between the gap insertion operad and the linking-and-nesting operad. Let us provide reason for introducing an operadic structure on non-crossing linked partitions. The T -transform of a random variable a (in an operator-valued probability space (A, φ, B)) with φ(a) = 1 B is a sequence of multilinear maps on B, with t a (0) = 1 B . This sequence can be inductively defined using the following formula relating the moments of a, understood as multilinear maps on B, to the sequence (t a (n)) n≥0 , t a (π)(b 1 , . . . , b n ), with b 1 , . . . , b n ∈ B. The function NCL D (n) π → t a (π) ∈ End B is defined in [4], equations (67)-(70). Alternatively, by using the operadic structure we defined on non-crossing linked partitions, t a (π)(b 1 , . . . , b n ) can be seen to match the valuê T a (π)(b 1 , . . . 
, b n ) of the operadic morphismT a : N CL → End B prescribed by the following equations Operads with multiplication. In this section we introduce the concept of multiplication in an operad, which is a distinguished operator of arity 2. All operads we introduced so far admit such a multiplication. We begin with the definition of brace algebra and refer to [1], [13], and [2] for details. Recall that T (A) denotes the vector space of all non-commutative polynomials with entries in A. We shall use word notation, a 1 · · · a n = a 1 ⊗ · · · ⊗ a n , a i ∈ A, for elements in T (A). Definition 2.17 (Multiplication in an operad ). An operator m ∈ P (2) of arity 2 satisfying is called a multiplication. We now browse through some examples, found among the operads we introduced in the previous sections. Remark 2.19. In a graded context, that is, if the general term in the summation on the right hand side of (23) is multiplied by a sign, existence of a multiplication in an operad provides a rich structure on the collection P as observed by Gerstenhaber and Voronov in [7]. In particular, the multiplication m together with a certain graded pre-Lie product yields a differential complex (P, d). We now assume that P is an operad with multiplication m. Following the above remark (that isno signs are involved), the non-graded pre-Lie product, denoted , is defined by: In fact, x (y z)−(x y) z is symmetric in y and z. We denote by [ −, −] the commutator bracket induced by the pre-Lie product : which satisfies the Jacobi identity. In [3], the authors define a product, denoted ×, on the vector space C[[P]] of formal series on operators in P, Equation (27) is key to the twisted factorization of the T -transform as explained below in Subsection 3.3. Twisted factorization of the T-transform In this section, we give a concise graphical proof of Theorem 7.18 in reference [4]. The starting point is a formula, for operator-valued free cumulants with the product of two free random variables as entries, expressed in the language of operads. We then show how noncrossing linked partitions are naturally entering the picture due to degree reduction resulting from filling inputs of a multilinear map with the unit of the algebra B. Free cumulants of products of random variables. In this subsection, we explain how to compute the multilinear function series corresponding to free cumulants of the product of two free random variables as the solution of a certain fixed point equation. The proof of this fixed point equation (31) is sketched below. This formula is well known in the scalarvalued case. The authors have not been able to locate the operator-valued version in the literature. Let us fix once and for all two free random variables a and b in the operator-valued probability space (A, φ, B) with φ(a) = φ(b) = 1 B , and recall that we denote by K x , x ∈ {a, b, ab}, the multilinear function series Recall that in the scalar-valued case, i.e., when B = C, one has the intriguing formula [12] (29) κ n (ab, . . . , ab) = Here, the non-crossing partition Kr(π) ∈ NC(n) is the Kreweras complement of π ∈ NC(n), first introduced in [9]. For two non-crossing partitions α and β in NC(n), one denotes by α ∪ β the partition of the interval 1, 2n whose restriction to the odd integers, respectively to the even integers, coincides with α, respectively β. By definition, Kr(π) is the maximal noncrossing partition (for the refinement order) such that π ∪ Kr(π) is a non-crossing partition of NC(2n). 
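Several displayed formulas referred to in this passage (the defining relation of a multiplication, the pre-Lie product, and the scalar-valued formula (29)) did not survive extraction. The following LaTeX restates them in their standard textbook form; the exact conventions and the lost product symbol of the source may differ, so this should be read as a reconstruction rather than a quotation.

```latex
% A multiplication on an operad P is an arity-2 operator m satisfying
m \circ_1 m \;=\; m \circ_2 m .

% The (non-graded) pre-Lie product induced by the partial compositions,
% for x of arity n (the symbol \triangleleft stands in for the one lost
% in extraction):
x \triangleleft y \;=\; \sum_{i=1}^{n} x \circ_i y .

% Scalar-valued free cumulants of a product of free random variables a
% and b (the formula alluded to as (29)), with Kr(\pi) the Kreweras
% complement of \pi:
\kappa_n(ab,\dots,ab)
  \;=\; \sum_{\pi \in NC(n)} \kappa_\pi[a,\dots,a]\,\kappa_{\mathrm{Kr}(\pi)}[b,\dots,b],
\qquad
\kappa_\pi[a,\dots,a] \;=\; \prod_{V \in \pi} \kappa_{|V|}(a,\dots,a).
```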
In the operator-valued case, since the cumulants of a and b do not commute with each other (they are elements of the non-commutative algebra B), the right-hand side of equation (29) does not factorise over π and its Kreweras complement Kr(π). In fact, we should maintain the linear order between random variables in the word a ⊗ b ⊗ · · · ⊗ a ⊗ b. Recall that the operadic morphismsK a andK b from N C to the operad of endomorphisms End B of B are defined in equation (15). The free productK a K b is the unique operadic morphism on the operad N C N C such that (K a K b )(π) =K a (π) if all blocks of π are coloured with 0 (that is π belongs to the first copy of N C in N C N C) or (K a K b )(π) =K b (π) if all blocks of π are coloured with 1 (π belongs to the second copy of N C in N C N C). With these definitions, we claim that the operator-valued counterpart of formula (29), computing cumulants of the product of two free random variable, reads (30) x 0 κ n (ay 1 bx 1 , ay 2 bx 2 , . . . , ay n b)x n = π∈NC(n) y 1 , x 1 , . . . , y n , x n ), x n ∈ B and y 1 , . . . , y n ∈ B. Here, we will denote byπ the non-crossing partition π ∪ Kr(π) of 1, 2n . Each block ofπ is coloured with 0 or 1, according to the parity of the elements in the block yielding an element of the free product N C N C. Validity of formula (30) will be ascertained by The previous non-crossing partitions are of the form π ∪ Kr(π), but in contrast to the scalar case the partitioned cumulant κ π∪Kr(π) (ay 0 , bx 1 , ay 1 , bx 2 , ay 2 , bx 3 ) is not the product of the cumulants κ π (ay 0 , ay 1 , ay 2 ) and κ Kr(π) (bx 1 , bx 2 , bx 3 ) because blocks of π and Kr(π) are nested one into the others. For the above listed partitions the corresponding cumulants are The expressions above on the right match the values of (K a K b )(π) following the interpretation ofπ as an element of the free product N C N C. To obtain the operator-valued free cumulants of ab, seen as multilinear maps on B, we set the y's equal to 1 B ∈ B in formula (30). We explain how this degree reduction yields a sum over non-crossing linked partitions in place of a sum over non-crossing partitions. Pick a non-crossing partition π ∈ NC(n). Figure 6 displays a partitionπ with the blocks coloured according to the parity of the elements. Recall that the front gap of a block is the gap located in front of the first leg and the back gap is the one located just after the last leg. We symbolize evaluation to the unit 1 B of B of the variables y's by crosses. It is clear from the drawing in Figure 6 that each block sees a cross either in its back gap or in its front gap. Looking separately at each block of the partitionπ, one notices that a (odd) black block has its back gap marked with a cross whereas a (even) blue block has its front gap marked with a cross. The blue outer block plays a special role, as will become clear further below. We notice that this blue outer block is trivial (which means here a singleton) if the partition π is irreducible non-crossing. The multilinear map associated to the partitionπ after evaluation of the y variables to 1 B can thus be obtained by composing in the operad End B the following multilinear maps k a L (n)(b 1 , . . . , b n ) = κ n (b 1 a, . . . , b n a), Figure 6. In the upper half, we have a partitionπ, made from the non-crossing partition π, drawn in black, and its Kreweras complement, drawn in blue. 
Below, we symbolized with a cross evaluations to 1 B of a variable in B (that fall in a gap) How these multilinear maps are composed together to obtain (30) is best understood by associating toπ a non-crossing linked partitionπ . First, we choose a tree monomial τ (π) representingπ. We explained thatπ, with the blocks coloured, should be seen as an element of the free product N C N C. Hence, the tree monomial τ (π) representing π has coloured corollas, too. We can impose on τ (π) the following requirements: 1. the root corolla is blue and its rightmost input is a leaf of τ (π), 2. the leftmost and rightmost inputs of a corolla are leaves of τ (π). Owing to the very definition of the Kreweras complement of a non-crossing partition, the restriction ofπ to an interval of integers bounded by two legs of a block ofπ is irreducible. This entails property 2 for any tree monomial representingπ. Next we erase leaves from the tree monomial τ (π) according the following rules: 1. we erase the rightmost leaf of a black corolla, 2. we erase the leftmost leaf of a blue corolla different from the root, 3. finally, from the set of corollas obtained by applications of the two above points, we erase corollas with a single input, excepted the root corolla. By definition, the tree monomial τ (π ) always has a blue root corolla. Figure 7. On the left, the tree monomial τ (π) representing the partition in Fig. 6. In the center, we applied item 1 and item 2. On the right hand side we see the resulting monomial representing the non-crossing linked partitionπ . We obtain a tree τ (π ) which, by making the substitution 1 n → 1 n , n ≥ 2 on each corolla different from the root and 1 n → θ n for the root corolla can be seen as a tree monomial on one-block ncl partitions representing a ncl partitionπ in the nesting-or-linking operad. In Figure 8, we have represented the (non connected) ncl partitionπ resulting from the process described above, starting with the partition pictured in Figure 6. Notice that all singletons in π are eliminated by this process. The association N C π →π ∈ N CL is of course not bijective for the image does only contain ncl partitions with nested blocks alternatingly coloured black or blue. Besides, a blue block ofπ always has its rightmost leg free and a black block always has its leftmost block free. Finally, notice that the non-crossing linked partitionπ does not meet the requirement of Dykema to have no blocks sharing their minimal elements, as shown in Figure 6. This is the reason why we consider a more general definition of a non-crossing linked partition. To a blue root corolla of τ (π ) (representing the outer blue block in Figure 8) corresponds a left and right B linear map from the sequence k b LR , while to the other blue corollas correspond right B linear maps from the sequence k b R . To black corollas correspond left linear multilinear maps in the sequence k a L . This entails thatπ should in fact be seen as an element of the triple free product N CL 3 , with the blue outer block seen as element of the third copy of N CL in N CL 3 and the other blocks distributed to the two remaining copies depending on their colour. We callK L a andK R b the operadic morphisms on N CL with values in End B that evaluate on one-block ncl partitions as k a L and k b R respectively. Finally, set for any non-crossing linked partitions π . Ifπ is an irreducible non-crossing partition, the blue root corolla of τ (π) has only two inputs and the ncl partitionπ has a singleton for outer blue block. 
Hence, to an irreductible noncrossing partition π in NC(n) corresponds a connected and bi-coloured ncl partition obtained by restrictingπ to 1, n − 1 . Define the multilinear function series V a,b whose homogeneous component of order n is the sum of the multilinear mapsK a,b (π ) with π ranging over the set of all irreducible non-crossing partitions, (1, b 1 , . . . , b n , 1). The next proposition follows from the discussion above. Recall that the series K a , K b ∈ C[[End(B)]] have been defined at the beginning of this section. The two products and × are defined in Section 2.3. In addition, we set and the cumulant series K ab is given by Short proof of the twisted factorization of the T -transform. In this subsection, we give a short graphical proof for the twisted factorization of the T -transform (9). Following the work of Dykema, we define two subsets of multilinear function series, 1 . Each of these operators is drawn as a corolla with its root decorated by W and with at most n leaves. Each leaf corresponds to composition of the multilinear function series with one of the letters E i , followed by concatenation of the resulting multilinear function series. For example, in the case W = E 1 E 2 , we have drawn in Figure 9 the associated operators. Notice that the edges of the corollas drawn in Figure 9 should be coloured, with 1 for the inputs and with 0 for the output, respectively 1, if E 1 · · · · · E n 0 = 0, respectively, E 1 · · · · · E n 1 = 1. We omit these colourizations to lighten notations. In Figure 10, we represent graphically the defining relation of the T -transform and in Figure 11 the two equations (31) and (32). . Graphical representation of equations (31) respectively (32). Note that we omitted the indices at V lighten notation. [4]). Let A, C and D (D 0 = 0) be three multilinear function series, then Theorem 3.3 (Theorem 7.18 in The proof of the statement is represented diagrammatically in Figure 12 and Figure 14. Note that we have omitted the indices at V = V a,b to lighten the notation. We detail the computations of Figure 12. For the first equality, we use equation (31) and for the second one equation (32). The third one follows from inserting the defining equation for the T -transform of b (see Figure 10). We then recognize the equation (31) in the leftmost tree attached to the node T a T b . The fourth and fifth equalities proceed from the same computations. To continue, we use the relation drawn in Figure 13 for the expression circled with a dotted line in Figure 14. The following proposition is a direct consequence of associativity of m and of the operadic product γ. x 0 = 1}. The set G inv if endowed with the concatenation product is a group. We have introduced so far two formal groups, G and G inv . The group G is sometimes called the diffeomorphism group of the operad P. Regarding notations, a generic element of the group G will be denoted g and a generic element of G inv will be denoted h. We define next two actions, a right action of the group G on G inv , compatible with the product , and a left action of G inv on G by conjugation. We begin with the former. The two actions and are compatible in the following sense. The computations with ρ in place of λ are similar. The second statement is obvious. The above proposition can be restated by saying that λ −1 and ρ −1 are two cocycles with respect to the right action and the group product on G λ , respectively G ρ , λ −1 (g × g ) = λ −1 (g) λ −1 (g ) g . 
If one chooses the endomorphism operad of B as the operad with multiplication P, with m equal to the product in B, the above argument gives another proof of the twisted multiplicativity of the T-transform in operator-valued free probability.
Development of a proactive project monitoring model The article proposes a model of proactive monitoring of the project performance consisting of a set of works. This model will allow you to predict in advance the values of the principal features of the project, the main of which is its duration. The use of this model in project management tasks will make it possible to make decisions at earlier stages. In the case of predicting the excess of their critical values, it becomes possible to take measures to eliminate the problem in advance. The mathematical model is based on the methods of regression analysis, namely, on modeling and forecasting the trends of the time series, consisting of the project deadlines at various points in time. The results of computational experiments are presented, which showed that the developed forecasting model provides adequate estimates and the forecasting results are in good agreement with empirical data. In addition, the model depends on a small number of parameters, the influence of which is studied in this paper. Introduction One of the main distinguishing features of any project, consisting of a sequence of work related to each other, is its limited time, that is, the presence of a start and end point. Based on the amount of work, as well as the number of processes that must be completed to execute the project as a whole, it is possible to calculate the approximate dates of the project task [1]. In order for the project to be completed on time, constant monitoring of the execution time of individual works is required, as a rule those that are responsible for the duration of the entire project, that is, lying on the critical path of the network schedule [2]. If there is a lag in the deadlines for their implementation, measures must be taken to reduce this lag, for example, transfer additional resources to critical work from those that have non-zero time reserves. At the same time, resource management should be proactive, since with significant delays in the project schedule, it becomes more difficult to reduce this lag, as more and more additional resources are required, and their number is always limited [3]. In this paper, we propose a model of proactive monitoring of the execution time of a project, consisting of a set of works that is based on the theory of time series, and more precisely, on modeling and predicting the trend of a time series, consisting of the project deadlines at different points in time. Setting a problem Consider a project containing a sequence of interrelated work. The methods of network planning and management highlight critical work that allows you to calculate the project time. At each current point in time t it is possible to evaluate the degree of lag of the real time of the project at this stage w(t) from the planned time wp(t), that is, the value: u(t) = w(t) -wp(t), which can be and negative. Given the stochastic nature of most works, the quantity u(t) can be considered as some random process [2,5]. For small lags u(t) surgical intervention in the work package is usually not required, since it is caused by the influence of random factors and is natural for random processes. However, a significant lag behind the project implementation schedule at the current moment of time with a high probability can lead to the fact that in subsequent periods of time this lag will grow even more. 
This is due to the fact that, when the work deviates from the schedule, various costs of performing the work increase, and if operational management is not applied to eliminate the deviation, the lag behind the schedule will continue to grow. We assume that there is some critical value of the lag ukr, upon reaching which operational control must be applied. The task is to predict in advance that the lag will pass this critical value, so that measures to eliminate it can be taken ahead of time. This can be done by modeling the trend of the time lag. Figure 1 is a graphical interpretation of proactive monitoring.

Mathematical forecasting model

We now consider a mathematical model that allows us to predict the values of the time lag from the project schedule several time periods ahead. The forecasting model is based on the theory of time series [6,7]. Let there be a time series of the indicator u(t) measured over k time periods: u1, u2, …, uk, where uk is the last (current) value of the indicator. We wish to forecast the value of this indicator p periods ahead, that is, at the time moment with number k+p. To do this, we use the levels of the time series from period (k-l+1) to period k, where l is the lag (depth) of the series used for fitting. The forecasting scheme is shown in figure 2. We use a linear trend model constructed by the analytical alignment method [7]: in accordance with [6,8], a linear regression û(t) = a0 + a1·t is fitted by least squares to the last l levels of the series, and the forecast ũ is the value of the fitted trend at the moment k+p. Next, we find the average forecast error Su, that is, the average value by which the forecast deviates from the real values of the indicator. Based on this, we find the maximum error by which the true value of the indicator and its forecast differ with the given confidence probability: Δu = t1-α(m)·Su, where t1-α(m) is Student's quantile [9] with significance level α and m degrees of freedom. The confidence interval of the forecast is therefore (ũ - Δu, ũ + Δu). Thus, preparation for operational management with high reliability against a false forecast should begin when the condition ũ - Δu ≥ ukr is met; in this case, the probability of a false forecast is α. A preliminary readiness mode for the onset of operational management can be set already when ũ ≥ ukr. Next, we consider some properties of the forecasting model presented in this work.

The study of the forecasting model properties

To study the properties of the model, a computational experiment based on simulation was carried out, as follows. A time series was randomly generated which, up to the 30th time period, had no directed trend but had a random component distributed according to the normal law. From the 30th to the 70th time period, a positive trend was added. The simulation model made it possible to control the dispersion (variance), the magnitude of the trend, and the parameters l and p. The time series had autocorrelation (aftereffect), which is typical of real random processes of this kind. Examples of the simulation results are presented in figure 3. Along with the graphs of the series and the forecast, the figures also show the confidence interval defined above. In these figures and all subsequent ones, the significance level used in calculating the confidence interval is α = 0.05.
It should also be noted that, for clarity, the forecast graph is depicted over the same time periods as the series graph, although each forecast value was obtained p periods earlier. Since a series of length l is needed before a forecast can be obtained, the forecast graph does not start from the first period but from period l+p. Other realizations of the simulation experiment give a similar picture. Analysis of the figures shows that for small dispersions a fairly accurate prediction of the levels of the series is observed, while for large dispersions a random spread that is unidirectional and produces a false trend over a certain interval of periods leads to forecast fluctuations whose amplitude exceeds the fluctuations of the series itself. However, if time series without autocorrelation are used, the forecasting picture becomes more stable, because in such series the probability of a false trend is much lower. The results of the computational experiment also showed that if the trend of the series is significant, or the dispersion is small, then the decision-making mechanism for the reallocation of resources can be triggered using not the lower limit of the confidence interval but the upper one, which increases the lead time. However, with small trends and a large random spread of at least one component, this can lead to a false trend. We now turn to the influence of the forecast parameters l and p. The length of the time interval l on which the forecast is built governs, as expected, the stability and flexibility of forecasting. With decreasing l the forecasting flexibility increases and the stabilization interval after the appearance of a trend decreases (time periods 30 to 40), but the spread of the forecast points increases, the reaction to series fluctuations grows, and the confidence interval widens. An increase in l lengthens the periods of forecast stabilization after structural changes in the series, but smooths out the series instability and reduces the likelihood of false trends. Numerous computational experiments showed that the best results are given by models in which the parameters l and p have the same value. The influence of the parameter p is that its decrease leads to an increase in accuracy, a narrowing of the confidence interval, and a decrease in instability, but at the cost of a shorter forecast horizon. The influence of one more parameter, the significance level α, is that its decrease also reduces the probability of "false positives" but increases the width of the confidence interval. As a result of the computational experiments, the Pearson pair correlation coefficient [8,9] between the levels of the series and the forecast was also calculated. The experiments showed that the correlation strongly depends on the variance of the time series; however, when the ratio of the adequacy variance to the reproducibility variance does not exceed 6, the correlation is on average not less than 0.85 and is significant at a significance level of at most 0.05, which indicates the adequacy of the forecasting model at the given significance level.
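As a rough illustration of the forecasting rule described above, the sketch below fits a linear trend to the last l levels of a lag series, builds the p-step forecast together with its Student-t half-width, and compares it with a critical lag value. The names (forecast_lag, u_kr) and the residual-based estimate of the average forecast error are illustrative assumptions, since the source does not spell out its exact formula for Su; the decision logic follows the two conditions stated above.

```python
import numpy as np
from scipy import stats

def forecast_lag(u, l, p, alpha=0.05):
    """Fit a linear trend to the last l levels of the lag series u and
    forecast p periods ahead, returning (forecast, half_width)."""
    window = np.asarray(u[-l:], dtype=float)
    t = np.arange(1, l + 1)
    # Least-squares (analytical alignment) fit of a straight line.
    a1, a0 = np.polyfit(t, window, 1)
    fitted = a0 + a1 * t
    # Average forecast error estimated from the residuals of the fit
    # (an assumption; the source does not give its exact formula).
    m = l - 2                      # degrees of freedom of the linear fit
    s_u = np.sqrt(np.sum((window - fitted) ** 2) / m)
    delta = stats.t.ppf(1 - alpha, m) * s_u
    forecast = a0 + a1 * (l + p)   # trend extrapolated p periods ahead
    return forecast, delta

# Illustrative use: a flat noisy series followed by a positive trend,
# loosely mimicking the simulation experiment (all numbers invented).
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0, 1, 30),
                         0.5 * np.arange(1, 21) + rng.normal(0, 1, 20)])
u_kr = 8.0                         # hypothetical critical lag value
f, d = forecast_lag(series, l=10, p=10)
print(f"forecast={f:.2f}, half-width={d:.2f}")
if f - d >= u_kr:
    print("high-confidence alert: prepare operational management")
elif f >= u_kr:
    print("preliminary readiness: forecast exceeds the critical lag")
```

In a real monitoring setting the series would be the observed lags u1, …, uk recomputed at each reporting period, and the check would be repeated as new levels of the series arrive.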
Conclusion

In general, the results of the computational experiments based on simulation have shown that the forecasting model presented in this paper gives adequate estimates, that the forecasting results are in good agreement with empirical data, and that the model is easily controlled by a small set of parameters whose influence has been studied and can be regulated. All this allows us to conclude that the presented proactive monitoring model can be used in practice when solving problems of planning and project management [10,11].
Pathology Isolation and Identification of Canine Herpesvirus ( CHV-1 ) in Mexico This work presents the pathology description, isolation and identification of canine herpesvirus (CHV-1) in Mexico, a virus that causes a generalized hemorrhagic infection in puppies from the canidae family. Methods: Isolates were obtained from puppies that died within the first four weeks of life and had lesions consistent with canine herpesvirus. Results: The main gross lesions were petechial and ecchymotic hemorrhages in kidneys, liver and lungs; proliferative interstitial nephritis; multifocal necrosis in liver and kidneys; and encephalitis with intranuclear inclusion bodies. Herpesvirus was confirmed through direct immunofluorescence, electron microscopy and polymerase chain reaction for DNA polymerase and glycoprotein B genes. Discussion: Eight strains were isolated and identified as canine herpesvirus corresponding to three of the working cases with gross and microscopic lesions very similar to those described in the literature; then, isolates were confirmed by PCR gene amplification, positive reactions on immunofluorescence and observations from electron microscopy. This work represents the first report of this disease, including gross and histological lesions, and confirmation by isolation and identification of the canine herpesvirus in Mexico. Introduction The canine herpesvirus (CHV) has been isolated in various countries around the world, and recent studies indicate that it is a prevalent disease in the European canine population [1]- [4].However, in Mexico, there are no data for this disease; furthermore, it has not been recognized as a disease in Mexico and the World Organisation for Animal Health (OIE) has not included it on any of their current lists. Evidence of CHV infections has been found mainly through molecular techniques, such as PCR; however, only a few publications have performed viral isolation and identification [5]- [9]. Therefore, our aim was to detect, isolate and identify CHV in Mexico City in a fashion consistent with the current literature and evaluate pathological changes in dogs less than 25 days old that were submitted for necropsy.We accomplished the isolation of MDCK cells and identified CHV using immunofluorescence, polymerase chain reaction and electron microscopy. Material and Methods This study used puppies that died before reaching four weeks of age that were submitted for necropsy.Gross lesions were evaluated and samples were taken from affected organs for histology, immunofluorescence, PCR and viral isolation.Puppies who died during the first four weeks were those with lesions and clinical data consistent with the literature reviewed.We have virus isolated of puppies from 4 to 6 weeks, but the lesions are combined with secondary agents, mainly respiratory bacteria. Histological evaluations used paraffin embedding and hematoxylin and eosin stain (HE).For immunofluorescence, a commercial conjugated anti CHV-1 (VMRD™, Inc., WA, USA) was used and the recommended procedure was followed. 
Briefly, for the first reaction, we used 400 ng of DNA, 400 nm of each primer, 100 μM of each dNTP, 10 mMKCl, 10 mM (NH 4 ) 2 SO 4 , 20 mM Tris-HCl, 2 mM MgSO 4 , 0.1% Triton X-100 at pH 8.8 and 1 unit of Taq polymerase (Bioline™, USA).The mixture was placed in a Multigene thermocycler (International Labnet™) with an initial incubation at 94˚C for 5 min, 35 cycles at 94˚C for 20 sec, 46˚C for 30 sec, and 72˚C 30 sec, with a final step of 10 minutes at 72˚C.For the second nested PCR reaction, the same concentrations were used, with 5 µl from the first reaction as template DNA.Finally, horizontal electrophoresis was performed with an agarose 3% gel, contain 0.5 µg/ml ethidium bromide and was visualized on a UV light transilluminator Different cell lines were used for viral isolation; however, the effect was only observed in the canine kidney line (MDCK, in vitro™, p26), and the rest of the study was performed on MDCK.The tissues were cut into fragments of 0.5 cm 3 , macerated, passed through a glass Potter™ homogenizer, was manipulated manually for 5 min in an ice bath, suspended 1:10 in PBS and centrifuged twice at 1200 g for 15 min at 6˚C.The supernatant was filtered through 0.22-μ Millipore™ membranes.Then, 0.5 ml were inoculated in a 24-well microplate, Nunc™ containing a monolayer of MDCK cells at 70% confluence.The plates were incubated for 1 hour at 34˚C -35˚C.Next, 1.5 ml of minimal essential medium (MEM, in vitro™) was added, with 2% newborn calf serum (Gibco™) and allowed to incubate at 34˚C -35˚C for 5 days with daily observations of cell effects.Immunofluorescence was performed on cell cultures that exhibited a cytopathic effect.For this technique, cells were grown to 70% confluence on treated coverslips (cell culture coverslip, sterile, Thermanox Plastic 13-mm diameter, NUNC™).Cells were infected with viral isolates that were previously made and incubated for 48 -72 hours at 34˚C -35˚C.Staining was performed by adding 75 µl of conjugate (anti-CHV1 polyclonal antiserum conjugated fluorescein isothiocyanate VMRD™) for 30 min at 37˚C in a moist chamber and visualized by UV microscopy using an Olympus UV™. Two PCR protocols were used on the viral isolate, one directly amplified a fragment of the DNApol gene with an expected molecular size of 215 to 315 bp [10].The other protocol directly amplified a 120-bp fragment from the glycoprotein B gene.The last protocol used one forward (P1: 5 'CAG GACTATTGGACTATAGT3') and one reverse (5 'TTG CAATGCCCCTCATAATT3') primer [11]. Transmission electron microscopy (TEM) was conducted at the electron microscopy laboratory in the Faculty of Superior Studies of Cuautitlan, UNAM.Negative stain was used to stain the Karnofsky fixed sample.Then, a drop of 1% phosphotungstic acid at pH 7.2 was added for one minute.Grids were withdrawn from the excess reagent and dried at room temperature.The samples were observed and photographed with a transmission electron microscope (JEOL JEM100S™), and reference pictures of a standard particle size of 60 nm (Sigma ® ) were taken. Results Necropsies were performed on three different puppies.Similar lesions were observed, including corneal opacity, serohemorrhagic fluid in the abdominal cavity, liver enlargement, petechial and echymotic areas, on the liver, lungs and kidneys (Figures 1-3). 
The histopathology (Figures 4-6) showed moderate multifocal hemorrhages, moderate to severe diffuse proliferative interstitial pneumonia, moderate multifocal suppurative bronchitis and moderate multifocal lung edema.In the kidneys, we found moderate diffuse congestion, severe diffuse albuminous degeneration, mild multifocal vacuolar degeneration, moderate multifocal hemorrhages with moderate lymphocytic infiltration and moderate interstitial proliferative nephritis, and multifocal necrosis. The liver showed moderate diffuse congestion, severe diffuse albuminous degeneration, mild to moderate proliferative perivascular hepatitis, mild multifocal hemorrhage, and moderate multifocal necrosis.In the brain, we observed severe congestion, severe diffuse non-suppurative meningoencephalitis, perivascular lymphocytic infiltration, and the presence of intranuclear inclusion bodies in neurons.All of the results agreed with previously reported findings for this disease [12]- [14]. An additional culture sample was donated by Dr. Laura Cobos of UNAM, Mexico.Cytopathic effects on MDCK cells, ranging from mild to severe, are shown in Table 1.It was possible to identify cytopathic effects on infected cells in all of the organs (Figure 7).This indicated viral shedding in virtually all of the tissues from the puppies and corresponded with the histopathological lesions previously observed. Figure 10 shows the results for the PCR protocol described by Burr et al. in 1996 [11], which amplified a 120-bp fragment of glycoprotein B. Particles associated with herpesvirus were identified with electron microscopy (Figure 11).The viral capsid was 100 nm in diameter, and the complete virion was between 200 and 280 nm in diameter. Discussion In newborn pups, the virus enters the bloodstream through leukocytes.Then, viral replication occurs in the vascular endothelium, leading to necrotizing vasculitis and hemorrhages [14] [15]. CHV-1FESC-3 Spleen ++ Characteristic lesions of CHV-1 infections in newborn puppies include petechial to ecchymotic hemorrhages in multiple organs [12] [14] [15].The three necropsies performed during this study showed hemorrhagic lesions in several organs, mainly the kidney, liver and lungs.Pathological findings were highly suggestive of canine herpesvirus; however, there are other etiologic agents that produce lesions in the vascular endothelium leading to secondary bleeding.Sepsis and endotoxemia are common causes of endothelial damage [16], while the canine adenovirus (CAV-1) replicates in vascular endothelial cells and hepatocytes, leading to ecchymotic and petechial hemorrhages in the organs of puppies [17].Necrosis and hemorrhage were observed in the kidney and described as primary evidence during histopathology as features of CHV infection-1 [16], but to a lesser degree. CHV The histopathological findings from natural or experimental infections in newborns puppies were characterized by scattered foci of necrosis with peripheral hemorrhages in the kidneys, lungs, spleen, small intestine and brain [12]- [14].These foci of necrosis and hemorrhages were found in the examined puppies, with hemorrhages being the most apparent, while necrosis was less prominent; however, no necrotic foci were found in the brain.Moreover, a strong leukocyte infiltration, with lymphocyte predominance, was found in the kidneys, brain and lungs.This was contrary to the results from Carmichael in 1965, who mentioned that low to mild leukocyte infiltrates can be found, although in general, are rare. 
We were able to observe intranuclear inclusion bodies, consistent with the report by Greene in 2012 [18].The immunofluorescence reaction was more pronounced in the vascular endothelium, mainly in the kidney and brain.This was consistent with McGavin and Zachary in 2007 [16], who indicated that viral replication occurs in vascular endothelium cells. The amplified 250-bp product obtained through PCR was consistent with previous results because the amplified product was expected to be between 215 and 315 bp [10]. After inoculating the tissue samples, cytopathic and cytolytic effects were observed in canine cells (MDCK) in accordance with Strandberg and Aurelian, 1969 [8], who reported that the cytopathic effects were observed 48 hours post-infection, while cytolytic alterations were found after 72 hours.The viral isolates were identified by PCR, immunofluorescence and transmission electron microscopy.The PCR assays amplified DNA fragments within the expected ranges: approximately 250 bp for the DNA polymerase gene (Van de Vanter, 1996) and approximately 120 bp for the glycoprotein B gene [11].Both results were consistent with those of the authors who developed the protocols. The immunofluorescence technique, using a polyclonal conjugate against CHV-1, showed a fluorescent label in the nucleus and the cytoplasmic membrane, consistent with the evolution of the viral cycle and replication in the nucleus with the acquisition of proteins from tegument and nucleocapsid, while the envelope proteins are acquired through vesicles from the Golgi complex [19].The virus leaves remnants of the envelope (including glycoproteins) on the cell membrane when entering another cell [20], which may explain the location of the fluorescent label.Notably, the manufacturer of the immunofluorescence reagent (VMRD ® ) mentioned that there was a reaction with canine distemper virus (CDV) and virus type 2 parainfluenza (CPI-2); therefore, it should not be used as a single diagnostic method.Moreover, confirmation must be supported by another technique, such as PCR. The results obtained by TEM for the structure and size of the virion matched those presented by Roizman and Knipe, 1992 [21].This study also showed amorphous material around the capsid that was partially asymmetric [22]. Conclusions and Implications We isolated CHV-1 viral strains and identified them with PCR, direct immunofluorescence and transmission electron microscopy.In addition, we established a relationship between the macroscopic and microscopic lesions from necropsied pups that died within the first 25 days and confirmed the presence of canine herpesvirus. To date there is no evidence of the presence of canine herpesvirus in Mexico; the confirmation performed in this work implies that appropriate measures should be taken to prevent its spread and condition of animal health in Mexico; in addition they must implement appropriate diagnostic measures. Further work should sequence the amplicons to establish the genetic lineage of these isolates and determine the phylogeny of the virus isolated in Mexico.
An Indicator for Assessing the Relative Impact of Library Events

ABSTRACT This article details one library's attempt to create a simple assessment method for evaluating the relative engagement of program attendees across a variety of events. The indicator—a combination of perceived level of engagement and calculated level of certainty—can be used alongside other metrics to give a fuller view of the overall impact of library programming. By conducting this study, the authors created a method for quickly assessing and prioritizing the most and least impactful events within a particular set.

KEYWORDS Programming, Assessment, Visualizations, Outreach, Surveys

It is a well-worn trope within professional LIS literature that library outreach is difficult to assess. Like comparing apples to oranges, the variability of event inputs, outcomes, and measures of engagement makes it seemingly impossible to evaluate the overall success of a library's outreach work. Authors such as Farrell and Mastel (2016); LeMire, Graves, Farrell, and Mastel (2018); and Diaz (2019) have organized and categorized various types of library outreach, thus mapping out the landscape, but a universal assessment method still eludes practitioners. Simply put, the goal of library outreach is to create engagement with and within the library.
Therein lies a substantial problem with assessing library outreach: the quality and character of engagement at one event may not be comparable to the quality and character of engagement at another event. For the purposes of this study, the amount and quality of an individual's engagement during a library event does not matter as much as whether or not engagement is simply present. A positive, non-zero marker of engagement is sufficient for our purposes, thus making it possible to compare one event to another, quantitatively. This study outlines our attempt at creating an "apples to oranges" method of comparison across a wide range of library programs, providing a way to measure relative engagement across multiple events. This simple indicator-a combination of overall level of engagement with a level of certainty-can be used alongside other metrics to give a fuller view of the overall impact of library programming. The William H. Hannon Library at Loyola Marymount University (LMU) serves a campus of 6,564 undergraduate students and 1,869 graduate students (as of 2020). LMU is a private Jesuit college in Los Angeles, California. On average, the library hosts between forty to fifty individual programs each year, including speaker events, tours, workshops, exhibitions, and other creative events. Our attendance at these events ranges from 5,000-5,500 students, staff, faculty, and campus guests each year. However, like many university libraries, the outreach team is small and has limited resources compared to other units within the library. Our department consists of three full-time librarians (the department head, a programming/exhibitions librarian, and a student engagement librarian), one full-time professional staff member (an event manager), and the equivalent of one part-time student employee (i.e., the combination of multiple student employees working a few hours each week). By conducting this study, we hope to create a method by which to quickly prioritize and weigh the most and least impactful programs in our repertoire. Literature Review The American Library Association (2014) conducted a multi-year, multi-part research project to document the characteristics, outcomes, and value of library public programs, and determined that public programming has become central to libraries' work and increasingly important. Moreover, discussion groups with library practitioners from a variety of library settings, including academic libraries, determined "evaluation" to be one of nine essential competencies for programming work. The white paper defines "evaluation" as " [working] toward using statistical and qualitative tools to measure program effectiveness and impact on all community audiences, including those that have historically been un-and underserved; and using this information to iteratively improve the development and delivery of programs." Some of the program evaluation characteristics include whether participants learn new knowledge, change their attitudes, or change their behaviors. However, of the fifty-eight ALA-accredited graduate programs evaluated in the study, none required coursework in library programming or evaluation. The difficulty in evaluating and assessing library programming generally, or at a broader institutional level, is a recognized concern in LIS literature. 
As Farrell and Mastel (2016), Santiago, Vinson, Warren, and Lierman (2019), and Wainwright and Mitola (2019) point out, there is no one-size-fits-all method for either collecting or evaluating the overall impact of library programs. Farrell and Mastel's (2016) brief survey shows that librarians generally rely on only a few assessment methods for programming, even though they are familiar and comfortable with a broader range. They go on to categorize and define six types of outreach that are commonly used in libraries and recommend assessment strategies for each. Farrell and Mastel note that qualitative and quantitative assessment more often happens in the classroom, and less so for co-curricular library programs. Due to a variety of limiting factors (such as time, resources, and training) many librarians rely solely on head counts. The authors caution, however: "By only focusing on head counts we undermine our ability to accurately understand the qualitative and quantitative relevance of the assessments made when evaluating library outreach objectives and goals." Wainwright and Mitola (2019) outline various assessment measures, including surveys, whiteboard questions, post-reflections, and summary reports, to demonstrate qualitative methods that go beyond head counts to provide a more holistic perspective on their libraries' outreach efforts. However, their experience confirms what Farrell and Mastel discovered; namely, "[because] learning experiences [offered by academic libraries] can often be unique or serendipitous, measuring how these efforts are contributing to the library's teaching, learning, and research missions can be difficult." By using a variety of assessment methods, as evidenced by the two case studies described in their article, Wainwright and Mitola create assessment plans that are integrated with institutional goals and use mixed-methods approaches. At the University of Houston, library staff created a team tasked with evaluating the return on investment for the libraries' outreach activities outside the classroom in relation to student success goals, as detailed in Santiago, Vinson, Warren, and Lierman (2019). By conducting an environmental scan, categorizing their programs, and reflecting upon various attributes (e.g. impact, purpose, partners), the task force was able to develop eleven recommendations for future outreach work. As the authors note, this type of top-down assessment of library programming had never been conducted before at their institution. However, the results could lead to significant improvements, such as "wiser allocation of resources, richer reporting and documentation, [...] and focusing on new outreach opportunities in high-impact areas." LeMire, Graves, Farrell, and Mastel (2018) conducted one of the most comprehensive surveys of academic library outreach, the SPEC Kit 361: Outreach and Engagement, in which they determined that "systematic outreach programs are still very much in their infancy and highly dependent on local organizational culture." Their survey found that libraries used a wide variety of assessment methods for programming, including headcounts, observations, peer and participant feedback, interviews, and focus groups. Most of the methods reported were fairly unobtrusive and easy to administer. Most importantly, the authors found that twenty-seven percent of respondents indicated that no one was responsible for overall program assessment. 
Similarly, Meyers-Martin and Borchard (2015) conducted a meta-analysis of final exams week library outreach initiatives (e.g. therapy dogs, extended hours, arts and crafts, etc.), including the assessment methods used by libraries. While most libraries collected feedback from users in-person and tracked the number of attendees at these events, others also collected social media feedback, used questionnaires, and tracked the overall number of users in the library. As noted by LeMire, Graves, Farrell, and Mastel (2018), most assessment methods used by librarians are "unobtrusive and easy to administer." However, some practitioners have attempted to use more complex methods. Strub and Laning (2016) outline a robust hierarchy of event evaluation methods to create a rubric that differentiates "how well" an event went with "what good" the event produced. "How well" examines the overall quality, as defined by success and efficiency, and measured by whether the event reached its target audience (e.g. number of attendees or market reach) and satisfaction or learning (e.g. content evaluation or space feedback). "What good" examines the impact, as defined by effectiveness and value, and measures factors such as whether learning occurred, behavior changed, or impact would be seen. The authors developed a question bank for all these levels of the rubric to be used as needed when assessing library programming. German and LeMire (2018) also take a mixed-methods approach in their assessment of a major outreach event, Texas A&M University Libraries' annual open house. In addition to counting the number of attendees, the authors counted the number of visits to specific stations within the event, the number of give-away items taken by students, a poll of students' favorite station, a "one-word" assessment questionnaire, and a participant survey that collected both behavioral and attitudinal information. Chan and Kwok (2013) also used a mixed-methods approach in their assessment of an exhibition and three associated talks developed by technical services librarians at Hong Kong Baptist University Library. For each of the talks, librarians used questionnaires to collect feedback and an open comment sheet (i.e. a large sheet of paper) to collect remarks from visitors to the exhibition. Surveys and questionnaires, like the ones used in this study, are a common assessment tool among outreach and programming librarians because of their ease of use. Jalongo and McDevitt (2015), in their study of the impact of using therapy dogs to help increase library usage, asked students "Would events with dogs influence your use of library resources, spaces and services in the future?" using a Likert scale. Similarly, Lannon and Harrison (2015) asked students to rank their level of stress before and after interacting with therapy dogs. Both studies used open-ended questions to gather additional data. Preand post-surveying-like those above as well as Sclippa (2017) and Budzise-Weaver, Anders, and Bales (2020)-can provide "excellent insight," immediately showing what worked during a library event and what did not. Surveys used by outreach librarians run the gamut between "quick" preand post-surveys and more robust questionnaires. Nicholas, Sterling, Davis, et al. (2015), in their study of the efficacy of a residence hall librarian program, employed a survey of library usage that included various multiple choice, ranking, binary, and open-ended questions. Oravet (2014), in assessing their library's "Human vs. 
Zombies" event, used a seventeen question survey intended to gather demographic information, information about previous library use, and assess whether students' future use and perception of the library would change as a result of the event. Methodology Between 2016 and 2020, we collected feedback at forty-four library events using brief, printed surveys that we handed out to every attendee. These surveys asked attendees to respond to three questions: (1) Why did you decide to attend today's event? (2) What did you learn from attending today's event? And (3) was there anything that surprised you and if so, what? Jackson (2019) outlines the intent and justification for using these three questions. A student assistant typed the handwritten forms into an online form which generated a spreadsheet of the 884 resulting responses. Additionally, we counted the number of attendees at each event. Using the number of attendees and number of feedback forms, we calculated a "response rate" for each event (number of feedback forms / number of attendees). This ratio will be used to determine a level of confidence in our data. For example, if half the attendees filled out a feedback form, then the confidence level for the feedback on that event would be fifty percent. An event in which all attendees filled out the forms would have a confidence level of one hundred percent. Relatively, we can be more confident in the perceived level of engagement (described below) for the latter event. To determine the level of engagement (on the basis of perceived indicators of engagement in each feedback form), we needed to code each response. We used a binary yes/no code to determine if a response showed evidence of engagement. We decided that "engagement" would be determined by whether the feedback responses showed a change in behavior, attitude, or knowledge related to the goals of the event. Once again, we should emphasize that we did not rank the level or quality of engagement, as doing so would make it difficult to compare one event to another (note the "apples and oranges" problem described above). However, by using a binary yes/no coding system that could function without having to accord with the unique goals of each of forty-four events, we felt we could confidently compare different types of library programs. We divided the spreadsheet of attendee responses into six sections and, following a norming exercise, randomly assigned each author (n=4) to code three of the six sections. The authors were grouped into pairs, and each pair compared their initial coding which found an intercoder agreement of between 89.8 percent and 97.5 percent. Each pair of authors then met to discuss the discrepancies in their initial coding until they reached consensus. Using the data from the coding exercise, we calculated an "engagement rate" for each event (percent of respondents who showed evidence of engagement). Results Most of the events fall into one of three categories: (1) Archives & Special Collections Exhibition Openings; (2) Faculty Pub Night; and (3) Other. Archives & Special Collections exhibition openings usually consisted of a lecture by one or two invited speakers, a talk by the exhibition curator, an opportunity for guests to explore the exhibition gallery, and catered food. Faculty Pub Night events usually consisted of a lecture by an invited faculty member and catered food (Hazlitt and Jackson, 2016). 
Other events included in the review set include: Women's Voices (featuring dramatic readings of famous historical figures); LMU Speaks (an autobiographical storytelling program); Careers in LIS (a panel discussion for graduating seniors); Luis Rodriguez (a panel discussion with a local poet); and Collaboration as Creative Synthesis (a panel discussion with a local artist). Figure 1 shows the relationship between engagement rates and response rates, with programs categorized by event type. Plots toward the right side of the graph had a higher response rate. Plots toward the top of the graph had a higher engagement rate. It should be noted that in the following figures, the y-axis is intentionally set to start at 0.65 (or, sixty-five percent engagement) to most effectively show the relative difference among various plot points. Thus, points near the bottom of the graph do not represent events with absolute low engagement but events with relative low engagement. It is important to note that all events plotted in these figures had moderate to high engagement, with more than sixty-five percent of attendees showing evidence of engagement. The visualizations that follow (figures 2-4) show the same data, but with different factors emphasized graphically within the chart: by type of attendee, by attendance numbers, and by a combination of various factors (number of attendees, semester in which the event was hosted, and event category). Discussion It should be noted before we discuss these visualizations that one would not need to assess four years' worth of feedback forms to use this method. As noted in the introduction, we sought to create a simple method for quickly comparing the relative success of multiple events, even if those events had different expected outcomes. For example, to use this method, all one needs to do is (1) determine a simple means for assessing whether a program attendee was engaged and (2) determine how many attendees showed evidence of engagement. The threshold for what constitutes engagement in step #1 could vary from one event to the next, but for the purposes of this method, only the presence of engagement is necessary. Instead of providing a more robust means of quantitative assessment, the visualizations above offer "food for thought." These rough sketches of library programming outcomes provide one lens, however hazy, through which to discuss the merits, problems, and impact of a large number of library events relative to each other. While it would be difficult to draw conclusions from the data with a high level of certainty, the visualizations offer an opportunity to generalize and inspire trains of thought that can inform future program development. For example, events that fall in the upper right quadrant of the visualization can generally be said to be "highly successful" in that they show high levels of engagement with a high level of certainty. Examining the events that fall into this general area of the graph, we find a predominance of Faculty Pub Night programs, specifically those that focused on a science topic (Brain, Ford, Moffet, and Okada are all names of faculty in our School of Science & Engineering). What potential conclusions can we draw from this observation? While it was not within the scope or methodology of our study to determine why any one event was more successful than another, it is tempting to speculate. 
For one, we know from personal experience that science faculty frequently offer extra credit for their students to attend extra-curricular events (relatedly, the difficulty of science courses makes the offer of extra credit even more attractive). Second, the topics are highly specific (e.g. Okada spoke about the neural organization of language using functional neuroimaging). Perhaps the specificity of the topic attracted an audience that attended knowing full well the subject matter to be covered. Applying the various assessment methods mentioned by Wainwright and Mitola (2019) could confirm the truth of these conjectures. We also noticed that all Archives & Special Collections opening receptions, with the exception of one, have a response rate below fifty percent. Upon reflection, it became clear to us why. The typical structure of an Archives & Special Collections reception is that a series of speakers present on a topic related to the library's current gallery exhibition; following a question-and-answer period, attendees are then invited to leave the event space to enter the gallery and adjoining atrium to explore the exhibition, partake in food and drink, and mingle with other attendees. At Faculty Pub Night events, food is provided in advance and throughout the event, and we ask attendees to fill out the feedback forms while they are sitting and before they leave the event. We also encourage attendees at Archives & Special Collections receptions to fill out feedback forms, but at the moment just before they are invited to explore the exhibition (and the buffet). It is reasonable to conclude that many attendees skip the feedback forms altogether so they can partake in the food and gallery walk. Until reviewing the visualizations, this generalization was not obvious to us. Knowing this, we could change the program for future Archives & Special Collections receptions to accommodate more time for feedback forms, thus increasing the response rate and level of confidence in the engagement ranking. One additional trend presents itself as worth noting. With one exception, all events classified as "Other" (i.e., not Faculty Pub Night or Archives & Special Collections receptions) ranked an engagement rate of over ninety percent. Events in this category include non-standard or ad-hoc programming. One possible reason for this high level of engagement is that the uniqueness of these programs offers an experience that is different enough from the library's regular programming to encourage a more enthusiastic response. Anecdotally, we know that many of our event guests are frequent attendees at other library events (e.g., library staff, faculty champions, student employees). However, without further analyzing and tracking individual attendance at multiple events, we cannot confirm this. It is also just as plausible that the uniqueness of the program attracted an audience wholly different from our usual patrons. Once again, these visualizations offer directions for future assessment needs.
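To make the two ratios underlying these visualizations concrete, the sketch below shows one way the per-event response rate and engagement rate described in the Methodology could be computed from a spreadsheet-style tally and plotted as in Figure 1. It is a minimal illustration only; the event names and counts are hypothetical, and this is not the authors' actual analysis code.

```python
# Illustrative only: hypothetical per-event tallies of attendees, returned
# feedback forms, and forms coded "yes" for evidence of engagement.
import matplotlib.pyplot as plt

events = {
    "Faculty Pub Night (science topic)": {"attendees": 40, "forms": 32, "engaged": 30},
    "Archives & Special Collections opening": {"attendees": 70, "forms": 28, "engaged": 22},
    "Other: ad-hoc program": {"attendees": 25, "forms": 20, "engaged": 19},
}

for name, e in events.items():
    response_rate = e["forms"] / e["attendees"]    # level of confidence in the feedback
    engagement_rate = e["engaged"] / e["forms"]    # share of respondents showing engagement
    print(f"{name}: response {response_rate:.0%}, engagement {engagement_rate:.0%}")
    plt.scatter(response_rate, engagement_rate, label=name)

plt.xlabel("Response rate (feedback forms / attendees)")
plt.ylabel("Engagement rate (engaged responses / feedback forms)")
plt.ylim(0.65, 1.0)  # mirrors the truncated y-axis used in the article's figures
plt.legend()
plt.show()
```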
When the authors met to analyze the results, we noted the following additional observations:
• Events with predominantly off-campus guests (labeled "Other") or audiences with no clear majority of attendees (between students, staff, and faculty) seem to have higher engagement rates.
• Events with mostly faculty attendees seem to trend closer to the bottom left quadrant (thus, lower engagement and response rates).
• No Archives & Special Collections reception had a one hundred percent engagement rate (although other events did).
• All events with more than fifty-five attendees have response rates under fifty percent.
These observations, as well as others not noted in this paper, prompted a number of questions which will be used to further assess and improve library programming, including the following. To what extent does faculty involvement (i.e., their promotion and ability to bring a class) influence these results? What is it about each event that determines its response rate? What are the most important variables to capture in future assessment? One significant area for future research would be to build upon this model using more rigorous data analysis, such as regression analysis, to determine the certainty of the trends and conclusions drawn above. To make these types of analyses possible, future studies would need to improve the feedback rate of program attendees (e.g., requiring feedback during the event). A higher feedback rate would increase the reliability of the results and allow for more complex coding of the engagement level beyond a simple binary instrument. For example, future research could look for indicators of change in attitude, behavior, and knowledge separately. Additionally, future studies should also collect additional data to determine if other factors possibly contribute to engagement, such as: time of day, presence of food, various event formats (e.g. lecture, workshop), expenditures, and staffing. Practitioners wishing to apply this method for prioritization and assessment can conduct a top-level review of all library programming as we have done, or it can be used in smaller circumstances, such as determining which of a handful of library outreach events needs additional improvement. This method could be employed to justify canceling a program.
Conclusion
In this article, we detailed the development of a convenient and useful indicator for quickly assessing the relative impact of a variety of library events, many of which vary greatly in their format, intent, and expected learning outcomes. Using a widely-used instrument (i.e., survey) and data that is regularly collected by many outreach and programming librarians, this methodology could easily be replicated and expanded by other practitioners. As we have shown, the visualization of these data offers food for thought over which outreach teams can reflect and ruminate to discover generalizations that can inform future outreach work.
How Public Health Partners Perceive Public Librarians in 18 US Communities
Noah Lenstra and Martha McGehee (2022), University of North Carolina Greensboro
ABSTRACT
Public librarians are increasingly recognized as community partners who improve the reach of organizations focused in whole or in part on public health promotion. The capacity of librarians to support public health initiatives has previously been studied through case studies of particular communities.
Few national studies have considered how and why public librarians are perceived as part of the public health infrastructure. This article analyzes data from interviews with 59 public library partners in 18 communities in 16 states across the United States. These interviews were collected as part of a larger study on how public librarians collaborate with partners to promote healthy eating and active living, or HEAL. Case study selection utilized a purposive sampling technique to recruit public libraries that self-identify as actively involved in public health initiatives. Representatives of those libraries introduced the research team to their community health partners. Findings indicate that in these communities, librarians are seen as trusted connectors, community experts, and as professionals who share goals with public health partners. Nevertheless, the strength of these partnerships is diminished by several factors. The discussion focuses on how a) increased knowledge and b) more strategic conversations on this topic, both within the public health and the public library sectors, could contribute to building better collaborations, locally, regionally, and nationally. Building and sustaining these collaborations could, in turn, help public librarians make more strategic and effective contributions to public health issues that appear both in their workplaces, and in their communities.
KEYWORDS
Collective impact, community partnerships, health promotion, public libraries, qualitative research, community coalitions, health coalitions
The public librarian may play any of several roles in a community-wide action system: information specialist, catalyst change agent, interpreter of community need, channel to community resources, expert in planning and group process. . . . The versatile librarian may exercise leadership and bring library resources and services to bear in a variety of ways -Margaret E. Monroe, a public librarian before becoming a professor of Library Science at the University of Wisconsin-Madison, Library Trends, 1976
In 2017, the health-focused Robert Wood Johnson Foundation characterized "public libraries" as one facet of community-based "cultures of health," alongside "housing affordability, access to healthy foods, youth safety, residential segregation, early childhood education, complete street policies, and air quality" (Chandra et al. 2017). Despite being increasingly framed as part of our public health infrastructure, public libraries and public librarians are not widely studied as partners within the public health research literature. Within that literature, the topic of the perception of librarians among health partners remains unexplored. Existing evidence suggests that health partners tend to focus more on the public library as a site than on public librarians as partners. For instance, within the sub-field of public health focused on prevention, or "intervening before [negative] health effects occur" (Centers for Disease Control & Prevention n.d., 1), public libraries have been studied as sites for Play Streets (Umstattd Meyer et al. 2019), healthy aging classes (Matz-Costa 2019, 1007-1016), and summer meal and nutrition programs (de la Cruz et al. 2020, 2179-2188). This literature tends to focus on the potential of the public library as a trusted community space, and not on public librarians as active community agents.
This article aims to empirically understand how public librarians in particular communities are framed by the organizations that work with them to support public health. The focus of the partnerships studied is the promotion of what public health professionals call HEAL, or healthy eating and active living (Journal of Healthy Eating and Active Living, 2021). Results, derived from qualitative interviews with partners who have worked with public librarians in 18 communities across the country, illustrate some of the strengths, weaknesses, and opportunities associated with these partnerships. These case study results lead into a discussion of the further work needed to integrate the public library sector more fully into our understanding of public health infrastructure. Literature Review What Is Public Librarianship? Perceptions and Realities. Public libraries are dynamic, socially responsive institutions that change and evolve along with their communities. A study commissioned by the American Library Association found that over 20% of public libraries offered fitness and nutrition classes in 2014, primarily by leveraging community partnerships (Bertot et al. 2015, 270-289). As these public health partnerships have become more widespread, they have prompted public librarians to reassess what skills are critical to being a public librarian. The Public Library Association (2018) found that the second most needed job skill in the profession is how to be a "Community Liaison/ Partner." Public librarians increasingly work as community partners to address topics as diverse as homelessness (Terrile 2016, 133-146) the opioid crisis (Allen et al. 2019) early childhood development (Tilhou et al. 2021, 111-123), the reading gap (Pasini 2018), and adult education (Daurio 2010). Although the idea of public librarians as community partners has received increased national attention over the last decade, it is not a new idea. In the 1960s and 1970s, work by scholars such as Margaret E. Monroe at the University of Wisconsin-Madison analyzed the various ways in which public librarians participate in community organizing efforts (Monroe 1976), finding that librarians across the country work creatively and nimbly alongside their partners. Nevertheless, a gap in our knowledge centers around the perception of public librarians among actual and potential community partners. Scattered evidence suggests that public librarians are typically not considered as community partners on contemporary community concerns. Aldrich (2018) notes in her analysis of media representations of public librarianship that, "rarely does a writer miss the opportunity to speak to her own nostalgia about libraries, the printed word, and the quiet solitude of the libraries of her youth" (1). She argues these media messages make it difficult for librarians to be seen as community partners; she also points out that librarians struggle to embed community outreach and community partnerships into their work. Empirical work supports the idea that librarians are not always seen as community partners, even in core areas like literacy. In a study on adult literacy partnerships, Daurio (2010) concluded potential partners "did not see the library as a partner" (ii). This finding was confirmed in a recent study of library partnerships relating to the opioid crisis (Allen et al. 2019), wherein researchers found that potential partners did not think of librarians until librarians reached out to them. 
A report commissioned by the American Library Association found that most voters do not see public librarians as individuals who are well known in the community, knowledgeable about the community, or who understand community needs and how to address them (OCLC and American Library Association, 2018, p.10). The literature suggests those working outside of libraries would generally tend not to see public librarians as community partners, unless librarians first suggest the idea to them.
Public librarians as HEAL partners.
Despite the absence of a national conversation on public librarians as community partners, over the past decade an emerging research literature has highlighted how, in particular places, public librarians do work with partners to promote public health, including in the domain of healthy eating and active living. A state-wide study in South Carolina found librarians there already doing initiatives "around healthy eating and active living and [wanting] to do more" with community partners (Draper 2021, 1). A state-wide study in California found that librarians there recognized a need for summer meal programs, and were thus motivated to serve meals at libraries in collaboration with summer meal sponsors, such as school districts (de la Cruz et al. 2020). Similar findings have emerged from studies of particular communities. An Appalachian Regional Commission (Cecil 2018) study highlights how in McCreary County, Kentucky, library director Kay Morrow "understands that the library is an important component of a community that can offer a lot more than books …. The library's meeting room serves as a place for healthy-cooking classes …. Always eager to make a better life for residents here, Morrow is spearheading efforts to rebuild the crumbling sidewalks downtown, secure more lighting at night, and organize a downtown walking club to boost physical activity." (Cecil 2018, 49) McGladrey et al. (2019) examine the efficacy of a multisectoral approach to the development of a rural physical activity promotion coalition in Clinton County, Kentucky, concluding that public librarians are key participants in multi-sector efforts to increase physical activity in rural America. In Eastern North Carolina, Flaherty and Miller (2016) discussed how the Farmville Public Library director worked with a parks and recreation department and a university public health department to start circulating pedometers and to organize the town's first 5K fun run. In rural Oklahoma (Umstattd Meyer et al. 2019) and Columbus, Ohio (Adhikhari et al. 2021), two separate research teams independently found public librarians to be willing and eager participants in multi-sector efforts to bring Play Streets, temporary closures of streets for active play, to their respective communities. Bedard, Bremer, and Cairney (2020, 101-117) recruited four public librarians in Southwestern Ontario to become trained Move 2 Learn program leaders, demonstrating "the feasibility of teaching staff without specialized training [i.e. librarians] in physical education to implement" (114) a physical literacy intervention. Also in Canada, kinesiologists made 90 pedometers available for circulation from five public libraries, finding libraries to be ideal sites for this form of physical activity promotion (Ryder et al. 2009, 588-596). Freedman and Nickell (2010) studied the impact of after-school nutrition workshops in a public library.
Sandha and Holben (2021) analyzed stakeholder perception of a summer meal partnership at a rural library in Mississippi. Together, these studies give us some glimpses into how those outside public librarianship frame librarians as health partners, but since the partnership itself was not a central focus in these studies we are left without any in-depth understanding of the perceptions of the partners working with the librarians. This study seeks to apply this literature to assess how librarians are perceived by the organizations with which they work to advance HEAL outcomes: Research question: How do partners that work with or include libraries in HEAL initiatives frame libraries and/or librarians? Methods Case studies show how certain practices are developed in specific communities and, therefore, help elaborate theories related to those practices (Ospina et al. 2018). Qualitative case studies allow the study of research questions in depth, while leaving room for unexpected, interesting findings that can form the basis for concrete hypotheses to be tested in future research (Yin 2013). Case studies are especially useful when there is little existing research on a topic, as is the case here. Case study research has been successfully used in the public library research literature, most recently by Coleman, Connaway, and Morgan (2020) and by Norton, Stern, Meyers, and DeYoung (2021). The former studied how in eight communities, public librarians worked with others to respond to the opioid crisis. The latter studied how in 12 communities, public librarians support social wellbeing. The goals in these and other case studies are to identify and articulate practices and trends that can be further elaborated in subsequent studies. Case study research has also been widely used in the field of public health, which has as one of its goals conducting "epidemiological surveillance," or "the systematic collection, analysis and dissemination of health data for the planning, implementation and evaluation of public health programmes" (Thacker, Parrish & Trowbridge 1988, 11). Over the last thirty years, public health researchers have recognized and struggled with the limitations of existing surveillance systems, leading to a call for more case study research on how cultures of health emerge from the ground up in particular places. Most notably, the Robert Wood Johnson Foundation funded a series of case studies on what they call sentinel communities, geographical communities selected not because they are normal, but because they may be unique, because they may offer researchers the opportunity to observe how a culture of health takes hold and evolves at the local level in a particular place (Chandra et al. 2017). The broader study of which this article is a part has the goal of understanding "how, why, and with what impacts do public libraries collaborate with others to co-develop programming around healthy eating and active living?" (IMLS 2020). To answer that question, public libraries in 18 communities across the United States (Table 1) were purposively sampled to try to secure representation of an array of community types and regions. The purpose sampling of communities emerged in part through public librarians in these 18 communities self-identifying as communities involved in multi-sector HEAL promotion efforts through a call for participation circulated online in the Let's Move in Libraries newsletter in February 2020. 
" The literature suggests those working outside of libraries would generally tend not to see public librarians as community partners, unless librarians first suggest the idea to them. The participating libraries are in 16 states, and serve a range of communities, with the largest library serving a population of 2,095,545 and the smallest serving a population of 12,960. Like libraries nation-wide (IMLS 2021), most of their funding comes from local governmental sources, with some exceptions, such as the McArthur Public Library, which as a 501(C)3 nonprofit receives large amount of revenue from donations, and Delaware's Laurel Public Library, which like other Delaware libraries, receives a substantial amount of revenue from the state government. The total revenue libraries have per capita also varies widely, with a high of $88 per person per year at Elgin, Illinois, and a low of $9 per person per year in rural Rutherford County, North Carolina. Per capita library funding serves as a barometer for both the political climate of a community and its relative affluence. In these communities, the identification and recruitment of public library partners for interviews emerged through interviews with public librarians. Librarians introduced the research team to their partners. The 59 partners interviewed (Table 2) represent a heterogeneous array of community partners -including local non-profits, public health departments, parks and recreation agencies, and K-12 schools -that work with public librarians in these communities. As with any case study research, these interviewees represent a small number of the potential respondents at their organizations, and therefore their experiences cannot be generalized as the experience of the entire organization. The research team did not construct a sample of potential partners to interview but instead interviewed partners through the case study process of identifying key stakeholders (Yin 2013). How Public Health Partners Perceive Public Librarians in 18 US Communities, continued The interview guide was developed from the Wilder Collaboration Factors Inventory, a widely used tool to understand how different sectors collaborate in communities (Perrault, et al. 2011). The guide was further developed based on the first author's previous work on this topic (Lenstra 2018, Lenstra and Carlos 2019, Lenstra and D'Arpa 2019, as well as with the input of the project's advisory board, which includes experts from both the public library sector and from the sectors that would engage in the interviews as partners (e.g. public health, parks & recreation). The recorded interviews, which took place over Zoom in Fall 2020 and Spring 2021, were semi-structured and based around a series of prompts designed to elicit narratives about the development and utilization of public library partnerships, and of the roles of particular individuals, including the interviewee, in those partnerships. These methods received IRB approval from the UNCG Office of Research Integrity. The protection of stakeholder identities in case study research is a complicated process, particularly when communities are named (Yin 2013). Coleman, Connaway, and Morgan (2020) discuss these ethical dilemmas in their research on public librarians and the opioid crisis. All efforts have been made to protect the privacy of interviewees, but they were informed there is a risk of being identified. 
This study's IRB application was modeled on that used by Coleman, Connaway, and Morgan (2020), and one member of their research team served on the advisory board of this project and provided input to this project's ethical framework (additional details in Allen et al., 2019, p. 25). Data analysis drew upon the case study tradition of qualitative analysis (Yazan 2015). Transcripts were analyzed to develop case study narratives about how partnerships formed, their impacts, and how they were sustained over time. Simultaneously, the P.I. and graduate student researchers used grounded theory techniques (Charmaz 2014) to extract themes that cut across the different conversations and cases. Table 3, below, which conceptually lays out the framework developed from this iterative coding process, emerged from four months of intensively moving across the three levels of analysis (interview quotation, thematic code, theoretical memo), until the research team came to a consensus about the nine themes that encompass the range of attitudes partners conveyed about their experiences collaborating with public librarians on public health initiatives. Each of these themes is illustrated below using a representative example from the different case studies.
Table 3: Strengths, weaknesses, and opportunities associated with public libraries as HEAL partners. Themes developed from qualitative analysis; see Methods, above.
Limitations
As with all case study research, this study does not claim to offer generalizable trends. At every level of sampling (community, partner organization, partner representative), purposive sampling techniques were deployed that undercut generalizability. It is impossible to extrapolate from a case, or from 18 cases, to make broad conclusions on a topic. Future research will need to do that extrapolation, and the discussion section concludes with a call for precisely that.
Findings
Across the interviews, libraries are seen as trusted connectors (Table 3). In some cases, though, the partnership is diminished because of weak ties to the institution. An opportunity identified is to cultivate more connections between public libraries and partners. Public libraries are seen as community experts. Weakening this perception is the idea that library partnerships are aberrant. An opportunity emerges to cultivate more awareness of transformations in public librarianship. Partners see librarians as having shared goals with them. Weakening this perception is the fact that other librarians do not share those goals, with a related opportunity being to cultivate more HEAL champions within the library workforce.
Section 1: Connections
Trusted connector. Since 2009, the staff of the Laurel Public Library have worked to cultivate a reputation as a trusted community connector, with that work leading to transformations in partner perceptions. An early institutional partner was the University of Delaware Cooperative Extension. An Extension agent said that although he has worked in Laurel since the 1990s, he did not perceive the library as a connector until 2009. He now sees the library as: "Instigators. So basically I reached out to the library and said, 'Can we use you?'" As a result, the library became the host of the Extension's 4-H program, and as that relationship developed it led to the library and the Extension working together to transform the built environment in 2014 (Figure 1). Another of the library's long-term partners, a faith-based organization, remembered that:
"The first big thing that... we partnered with them to do [was] to put exercise stations in a local park down the street. They got the grant. They got the equipment shipped in. I put people together to get it done. And it still is used today. That was one of the first and biggest things we did together." Since 2014 the library has extended their connections, offering nutrition classes in partnership with the Delaware Food Bank (a SNAP-Ed implementing agency), becoming a summer feeding site in 2017, adding indoor exercise equipment in 2019, and during COVID-19 starting a Farm-to-Patron initiative where extra produce from surrounding farms is dropped off at the library for anyone to take. Weakly connected to partner. Since 2006, staff of the Gail Borden Public Library in Elgin, Illinois, have participated in Activate Elgin, a city-wide initiative to engage all sectors of the community to provide opportunities to improve health, particularly around HEAL. One librarian had been the key liaison to Activate Elgin since 2009, and when she retired in summer 2020, the partnership was put into jeopardy. The combination of the COVID-19 pandemic and the retirement of a key staff member illustrates weaknesses that can emerge when HEAL partnerships are dependent on particular individuals. A community educator at a local hospital stated she was "heartbroken when I heard that [the librarian] was leaving, because we have a super good relationship." At the time of the interview, she did not know if the library would appoint a new representative to Activate Elgin. She said that during the pandemic she has been thinking about "how can we continue to work with the library? [For example] can I download or check out a DVD from the library that would lead me in yoga because I can't go in and see my yoga instructor? Can I go check out a cookbook that would have some healthier recipes? So what can we do? How can we partner together?" She said that she is unable to answer these questions because she no longer has a contact at the library. Having lost a key contact in the library, she feels the partnership has ground to a standstill. The future of the library's role in Activate Elgin is uncertain. Cultivate more connections. The McCracken County Public Library in Western Kentucky has been a key player in multi-sector coalitions organized by a local hospital and the United Way. As the library director became more involved in these coalitions, she sought to involve library staff at all levels. The leader of the Healthy Paducah community coalition said that as a result of her efforts the library is "so visible in the community." As much as possible, library staff spend time outside of the library, attending community meetings, doing programs at farmer's markets, and bicycling around town on their 'Brary (short for library) Bike. This example illustrates how the library director empowered staff to cultivate connections with partners. A youth services librarian shared the story of how the library became a summer feeding site through her community connections: "[It] started with a conversation I had at the food bank, when I was volunteering there with the nutrition coordinator from the school. I was at the food bank because [a local nonprofit that] was bringing meals to the library parking lot.
[The nonprofit] put out a call for volunteers, and since I knew him through his work in the library, when the call went out, I decided to volunteer." Throughout the interviews with librarians and partners of the McCracken County Library, stories like this one occurred again and again. Partnerships lead to partnerships, creating a dense weave of different institutions working together to address persistent community health issues. The leader of Healthy Paducah said they "would be lost without them [library staff]." Section 2 : Community expertise Librarians as community experts. In the sprawling jurisdiction of Harris County, Texas, staff from Harris County Public Health see the public library as their "go to partner" for everything from mosquito control and testing to childhood obesity prevention. This intergovernmental partnership began around 2005 with jointly hosted "kid dance parties.... We've had smoking cessation, we've had exercise, family nutrition, and it's just grown through the years," particularly once the library became a member of the health department's Healthy Living Matters coalition. Three staff from Harris County Public Health were interviewed. In 2015 they started working on creating Mobile Health Villages that include free check-ups alongside fun activities like active play stations and farmers' markets. From the beginning, library staff were involved in planning: "I had met with the library early on. We started partnering with Harris County Public Library because we felt they had tremendous reach into the community. All you have to do is look at their branches to know what the needs are in that community. That was one of the reasons we wanted to work with them. And they've been such a good partner [with the Mobile Health Villages] since then. They make it easy, and we've established so many different kinds of partnerships on so many different levels [with them]." Throughout the interview, they identify libraries as valuable partners because of their expertise on community needs. Health department staff later stated that library staff are "in touch with the community, integrated with target communities, they know how to connect with everyone in the community," and "the community that we're trying to target already perceives libraries as much more of a resource than a place where you can get a book [and] not only is the library a resource for us, but we're a resource for the library." The shared goals at the heart of this partnership will be returned to later in this article. Partner library seen as aberrant. In Clinton, Massachusetts, the library director has been an avid proponent of HEAL partnerships, even serving on a multisector HEAL committee convened by the Community Health Network of North Central Massachusetts. Nevertheless, partners tend to see their library partner as aberrant, an exception rather than the norm. The local hospital started working with the library in 2017 to co-sponsor a Walk with a Doc(R) program. The library had a walking club, and the hospital added their program on top of that. Asked how that partnership became established, the hospital's community health specialist stated "we like to collaborate with non-traditional organizations that we wouldn't typically partner with in the community." The framing of the library as an organization a hospital typically would not work with recurred again and again throughout the conversation. 
This attitude appeared in other interviews in this community. The food bank coordinator said her partnership with the library, focused around cooking classes, emerged "because [the library director is] so open to it. When I look at her, I really don't look at her as a librarian. I guess because I have a stereotype in my head about what that means. She actually is more of a community advocate, and she's kind of turned that whole position into that." The framing of "community advocate" and "librarian" as separate roles illustrates how partners, even as they work closely with librarians, see those partnerships as aberrant. Cultivate awareness of public library transformations. When a new director of parks and recreation moved to Scotch Plains, New Jersey, the second person he met was the public library director. From that moment, the public library and parks and recreation department have worked closely together on everything from StoryWalk installations in parks to taster classes of recreation center offerings provided for free at the library. He stated, "at the end of the day they have resources I can't get," including their community expertise. His awareness of the public library as a partner was not shared by his predecessor. According to the library director, there were no park-library partnerships until the new director came to town. His success, and his knowledge that not all parks and recreation personnel share his recognition of librarians as community experts, has led him to seek to inspire others. At the time of the interview, he was working: "With the New Jersey Recreation and Parks Association on a [continuing] education opportunity, 'Leverage the Library.' I have a whole outline for it. It's something that I've considered, how to work with your library: Obviously, you need to have trust. And, obviously, you need to understand that you're going to benefit as much as they're going to benefit. There are all kinds of ways to leverage and work with them and, and provide the programs and facilities that can benefit both [partners]."
Section 3: Cultivating shared goals
Shared goals. In Anne Arundel County, Maryland, a community health educator who has worked with the health department since 2000 said that during that time she has always seen the library as "a spot to hold classes and meetings. It was a location to be at, rather than a deep, deep partnership." This transactional relationship evolved over time into a "deeper partnership. Connecting [with library staff] about how to work more together" which led to the realization that both partners have the fundamental goal of "better serving the community." The realization of shared goals emerged through a community coalition. The coalition was "key in opening up the connection between [library staff] and me. The [librarian] is an active participant in those meetings, and so I got to know what she's trying to accomplish, and then how she can help [meet our goals]. Being part of coalition meetings: That's something that libraries do, they are active participants, I really wanted to emphasize that." Her desire to emphasize librarians as active coalition partners emerges from her reflecting on the fact that earlier in her career she merely saw libraries as passive spaces. Asked to give an example of what kinds of shared projects emerged through the coalition, she responded:
"They even helped us with some of our research: We did a food assessment and we utilized the library staff in designing this project. In the food pantry that we are working on, [we asked] 'How can we have a better volunteer system?' [The librarian said] she runs a volunteer system for the library. So we connected with her about how to develop that volunteer system for the food pantry. She's got great experience, and advised in an important way." By cultivating awareness of their shared goals, these partners work together to develop solutions. Other libraries don't share goals. In Biddeford, Maine, the library works with the Coastal Healthy Communities Coalition, a SNAP-Ed implementing agency. A Nutrition Education Program Manager shared both her positive experiences working with the Biddeford library, and her struggles securing similar partnerships in other parts of her service area. She said the adult cooking programs she had at the library have "the most diverse class I've ever worked with. When it comes to age, race, ethnicity, gender, it was very diverse, which I think is a sign that they're doing something right [at the library]." Based on this success, the Nutrition Educator naturally sought out similar partnerships in other libraries, but has thus far been unsuccessful. Cultivate more champions within the library workforce. Before moving to Western Montana in 2005, the director of the Belgrade community library worked in the corporate sector, and there became passionate about workplace wellness, eventually becoming a part-time fitness instructor with training from the YMCA. As a library director, she has infused the principles of workplace wellness into her leadership, and in the process has cultivated champions of HEAL within her workforce. She said that workplace wellness is "part of how I live and work and breathe. It's a natural thing, a natural component of being a librarian." She empowers her staff to see health as a priority, for themselves, and for communities. One of her initiatives has been to work with the town government to secure paid walking breaks not only for library staff, but for every employee of the town of Belgrade. For her, the library can not only be a space that cultivates wellness among library staff, but can also be a community hub for health and wellness. These efforts culminated in the library securing the title of Library Journal's Best Small Library in America in 2015. These efforts have led to the library being seen as a partner by everyone from the senior center to the regional hospital. By foregrounding the importance of workplace wellness, this library leader sets the stage for librarians to become champions of HEAL partnerships.
Discussion
Public librarians are increasingly recognized as community partners who work with others in their communities to support public health (Allen et al. 2019), including around the promotion of healthy eating and active living (McGladrey 2019, 62-67). This study found that partners in these case study communities see librarians as individuals who help them increase their reach, while also creating opportunities for new voices to be heard in community planning. By extending the lens beyond a single community or intervention (e.g., Bedard et al. 2020, 270-289; de la Cruz et al. 2020, 2179-2188), this study
broadens the national conversation about public librarians as partners in the public health infrastructure. Although much more is needed to understand this topic, this study has set the stage for future research on the unique roles of this poorly understood (Aldrich 2018), if ubiquitous (IMLS 2021), social infrastructure (Klinenberg 2018). The idea of public librarians as community partners on heterogeneous community concerns has been part of the research literature since at least the 1970s (e.g. Monroe, 1976), and yet there is still much to learn about why in some cases librarians partner with others while in other cases they do not. This study shows how in some cases partners work well with some libraries but struggle to connect with others, in others librarians struggle to sustain partnerships across staff turnover, while in other cases strong leadership and investment in partnerships by library administrators support this practice. This research could be extended by surveying the membership of national organizations that represent the professional interests of the local organizations interviewed in this project, such as the National Recreation & Park Association, Feeding America, the American Public Health Association, Partnership for a Healthier America, Alliance for a Healthier Generation, the National Association of County and City Health Officials, the Society for Public Health Education, the Farm to School Network, among others. Such a survey could use the perceptions identified in this study as a starting point for more systematically evaluating how public librarians are perceived by others working in communities across the country to promote healthy eating and active living. The research could be extended even further to more systematically understand how potential partners more generally perceive public librarians as community partners. Much work remains to be done, and this study does not claim to be the definitive research on this topic.
Implications
To ensure the power of public librarians is fully leveraged in multi-sector initiatives, it is important to understand the characteristics of successful partnerships, as well as what motivates partnerships. One promising practice is the identification and/or cultivation of health champions within the library workforce, as well as finding ways to more strategically educate those outside of librarianship to the reality of librarians as health partners. This work may require overturning stereotypical ideas of libraries and librarians (OCLC and American Library Association, 2018) within the perceptual frameworks of partner organizations. Beyond addressing perceptions of librarians, work could be done to better institutionalize "partnerships" as a core facet of public librarianship. Library leaders could share how they support partnerships at their libraries, as well as how they make investments of time and resources to enable library staff to participate in community coalitions and in other settings that would enable library staff to build relationships with others in their communities. Within partner organizations, coalitions play a vital role in bringing librarians to the planning table.
A concrete tactic would be to encourage anyone organizing or leading a health coalition anywhere in the country to, at the very least, reach out to their local public library to see if anyone on staff there may wish to attend a meeting, or join the coalition. Public librarians can also be on the lookout for such convenings. A convenient way to identify such health coalitions is through regular library participation in general community organizations -- such as United Way, Chambers of Commerce, or the Rotary -- that will typically include overlapping memberships with health coalitions. More generally, this study suggests that a promising practice for public librarians is to simply talk more about public health. The results of this research suggest that the more public librarians talk about public health within their institutions and within their communities, the more potential partners see them as partners. The power of conversation is not to be underestimated in terms of its capacity to change cultures of health.
COVID-19 Addendum
This study was conceived and proposed before the COVID-19 pandemic's arrival in North America. All the interviews were conducted during the pandemic. The fact that public and community health workers were willing to take time out of their efforts to combat the pandemic to talk about their experiences partnering with public librarians illustrates the critical nature of these partnerships to the work of public health, in both good times and bad.
Risk Assessment and Alternatives Assessment: Comparing Two Methodologies The selection and use of chemicals and materials with less hazardous profiles reflects a paradigm shift from reliance on risk minimization through exposure controls to hazard avoidance. This article introduces risk assessment and alternatives assessment frameworks in order to clarify a misconception that alternatives assessment is a less effective tool to guide decision making, discusses factors promoting the use of each framework, and also identifies how and when application of each framework is most effective. As part of an assessor's decision process to select one framework over the other, it is critical to recognize that each framework is intended to perform different functions. Although the two frameworks share a number of similarities (such as identifying hazards and assessing exposure), an alternatives assessment provides a more realistic framework with which to select environmentally preferable chemicals because of its primary reliance on assessing hazards and secondary reliance on exposure assessment. Relevant to other life cycle impacts, the hazard of a chemical is inherent, and although it may be possible to minimize exposure (and subsequently reduce risk), it is challenging to assess such exposures through a chemical's life cycle. Through increased use of alternatives assessments at the initial stage of material or product design, there will be less reliance on post facto risk‐based assessment techniques because the potential for harm is significantly reduced, if not avoided, negating the need for assessing risk in the first place. INTRODUCTION The concept of synthesizing and selecting chemicals and materials with less hazardous human health and/or environmental profiles is becoming more mainstream, with phrases such as "Cradle to Cradle," "green chemistry," and "informed substitution" used by both industry-funded trade groups and nongovernmental organizations. This concept reflects a paradigm shift from reliance on risk minimization through exposure controls to hazard avoidance. Much as formal risk assessment found its footings in the 1980s with the dissemination of reports such as the U.S. National Research Council (NRC) publication "Risk Assessment in the Federal Government: Managing the Process" ("the Red Book") (1) and the Royal Society's report titled "Risk Assessment: A Study Group Report," (2) the concept of alternatives assessment has developed into a decision-making methodology that recognizes the importance of adhering to a transparent, rigorous framework and drawing a clear distinction between hazard reduction and hazard management in the selection of alternatives. This introductory article is one of three articles in this issue of Risk Analysis relating to alternatives assessment, and is designed to introduce risk assessment and alternatives assessment frameworks in order to dismiss a common misconception about chemical alternatives assessment and identifies how and when application of each framework is most effective. In the second article of this series, Malloy et al. discuss the value of alternatives assessments to guide decision making for regulated chemicals when selecting safer alternatives. In the third article in this series, Geiser et al. presents a detailed discussion of the chemical alternatives assessment process and clearly establishes the utility of alternatives assessment methods to guide and inform decisionmakers. 
A COMPARISON OF FRAMEWORKS
Risk assessment is designed to answer the question: "Is this chemical or product safe enough for the intended use?" in contrast to an alternatives assessment, which is intended to answer the question: "Which chemical or product poses a lower hazard?" The steps taken to answer these two questions are quite different, but it is important to recognize that both frameworks are intended to provide a standardized approach to organizing knowledge and comparing alternatives. (3,4) Hazard has always held a prominent place in risk assessment, and the widespread adoption of the Globally Harmonized System of Classification and Labeling of Chemicals (GHS) has compelled even fervent advocates of risk assessment to spend more time in the pursuit of quality hazard analysis. GHS grew out of the 1992 U.N. Conference on Environment and Development ("the Earth Summit"), and since its adoption by the United Nations in 2002, its use around the world has increased steadily, with 67 countries now adhering to GHS. (5,6) GHS is focused on classifying chemicals and mixtures of chemicals by types of hazard (with the exception of risk-based labeling of consumer products for chronic health effects in GHS Annex 5) and promotes a harmonized system of communicating hazards on labels and safety data sheets. Because hazards are communicated when following GHS, there is a stimulus to find less hazardous or safer chemical ingredients.
Risk Assessment
Terminology used in the risk assessment community of practice is well defined, as detailed in Table I. Risk can be defined as the probability of suffering harm (injury, disease, death) from a hazard. (7)(8)(9) As specifically relating to adverse effects following exposure to a hazardous substance, risk is defined as the likelihood that the toxic properties of a substance will be produced in a population of individuals under their actual conditions of exposure. (10,11) Hazard is the inherent capacity of a substance or action to cause harm. Risk assessment is the actual practice of estimating the severity and likelihood of harm to human health or the environment occurring from exposure to a chemical substance, biological organism, radioactive material, or other potentially hazardous substance or activity. (7) The four distinct steps of a risk assessment first outlined in the 1983 NRC Red Book are still used in current risk assessments: hazard identification, dose-response assessment, exposure assessment, and risk characterization. (1) In short order, risk assessment methods found widespread adoption around the world, first by governments and related organizations (such as the European Union), followed by intergovernmental organizations such as the United Nations and its related agencies including the World Health Organization, along with industry-funded trade organizations such as the European Chemical Industry Council (CEFIC) and the American Chemistry Council, and then more recently, nongovernmental organizations (NGOs) such as Friends of the Earth, Greenpeace, and the Natural Resources Defense Council, among others. (12,13) Risk assessment follows an established framework that, when applied correctly, can estimate how likely a chemical is to harm a target (e.g., a child, adult, organism in the environment) under specific conditions of exposure.
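Before turning to concrete cases, the four steps just listed can be illustrated with a small numerical sketch. The code below is a generic, textbook-style screening calculation of a hazard quotient for a non-cancer endpoint, combining the exposure assessment and dose-response steps into a single risk characterization; it is not taken from this article, and every input value, function name, and scenario detail is hypothetical.

```python
# Illustrative, textbook-style screening-level risk characterization.
# All values are hypothetical and do not come from the article or any real assessment.

def average_daily_dose(conc_mg_per_L, intake_L_per_day, exposure_freq_days_per_yr,
                       exposure_duration_yr, body_weight_kg, averaging_time_days):
    """Exposure assessment: average daily dose (mg/kg-day) for an ingestion route."""
    return (conc_mg_per_L * intake_L_per_day * exposure_freq_days_per_yr *
            exposure_duration_yr) / (body_weight_kg * averaging_time_days)

# Hypothetical drinking-water exposure scenario.
add = average_daily_dose(
    conc_mg_per_L=0.005,           # measured concentration (hypothetical)
    intake_L_per_day=2.0,          # assumed adult water ingestion rate
    exposure_freq_days_per_yr=350,
    exposure_duration_yr=30,
    body_weight_kg=70,
    averaging_time_days=30 * 365,  # non-cancer: averaged over the exposure duration
)

rfd = 0.002  # hypothetical reference dose (mg/kg-day) from the dose-response step
hazard_quotient = add / rfd  # risk characterization: HQ above 1 suggests potential concern

print(f"Average daily dose = {add:.6f} mg/kg-day; hazard quotient = {hazard_quotient:.2f}")
```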
For example, risk assessment methods can adequately estimate the likelihood that a worker will develop cancer following exposure to asbestos under certain occupational exposure scenarios, predict whether a shampoo with high levels of 1,4-dioxane poses an unreasonable cancer risk to consumers, or assess whether a PCB-contaminated site has been adequately remediated. Risk assessment methods have matured over the past 30 years; however, this maturation has not been without growing pains, as detailed in the National Research Council report titled "Science and Decisions," (14) which gives examples of risk assessments (e.g., the U.S. EPA's formaldehyde risk assessment) taking more than a decade to complete and identifies major shortcomings in the ability of risk assessments to adequately inform decision making both in terms of timeliness and answering questions that help guide decisionmakers.
Table I. Risk-Assessment-Related Terminology
Hazard: An intrinsic property of a substance, activity, or risk source that enables it to cause harm (15)(16)(17)
Exposure: Contact with a chemical or physical agent and a target (18,19)
Dose: Fraction of an exposure to a chemical that actually enters the body following absorption from one or more routes of exposure (20,21)
Exposure assessment: Estimate or direct measurement of quantities of risk agents received by individuals, populations, or ecosystems (7,9,18,22)
Risk assessment: The characterization of the probability of potentially adverse effects from human exposure to environmental hazards (1)
Risk analysis: A process for controlling situations where an organism, system, or (sub)population could be exposed to a hazard; the risk analysis process consists of three components: risk assessment, risk management, and risk communication (18)
Risk management: The process of identifying, selecting and implementing actions to reduce risk to human health and ecosystems (23)(24)(25)
Factors Advancing the Practice of Risk Assessment
The practice of risk assessment was originally driven by regulatory authorities to fulfill legislative mandates pertaining to human health or environmental protection, but is now equally practiced by industry and NGOs to evaluate process, product, or site remediation safety, prioritize risk reduction measures, or demonstrate regulatory compliance (or lack thereof). (17,26) At the international level, risk assessment has been advanced by the E.U.'s REACH regulation (registration, evaluation, authorization, and restriction of chemicals) that came into force in 2008 and requires that all nonexempt substances imported into or produced by E.U. countries undergo registration, with assessment of hazard and lifecycle risk, and subsequent determination of whether safe use can be established. In the United States, legislation such as the Toxic Substances Control Act (TSCA) of 1976 and Clean Air Act (CAA) Amendments of 1990 firmly affixed risk assessment as part of the U.S. federal government's chemical evaluation process. Although most would agree that the U.S. EPA's restriction of toxic substances under Section 6 of TSCA has been ineffective (only five substances were actually restricted based on an unreasonable risk determination: polychlorinated biphenyls, fully halogenated chlorofluoroalkanes, dioxin, asbestos, and hexavalent chromium), TSCA's inclusion of the undefined phrase "unreasonable risk" was an early factor entrenching the role of risk assessment at the federal level.
The CAA Amendments included authorization in the United States for the Presidential/Congressional Commission on Risk Assessment and Risk Management to identify the proper use of risk assessment and risk management in regulatory programs under various federal laws to prevent cancer and other chronic human health effects that may result from exposure to hazardous substances. (27) The CAA Amendments of 1990 also authorized the U.S. EPA to engage with the National Academy of Sciences to review the methods used by the U.S. EPA to estimate risk. (23) Public-sector funding at the national and international level subsidized risk-related research and rather quickly created a risk assessment community of practice, with hundreds of toxicologists, statisticians, environmental scientists, and chemists working together to create standardized approaches to assessing human health and ecological risks, as evidenced by more than 70 reports and guidelines issued by the U.S. EPA's Risk Assessment Forum from 1986 onwards ("the Purple Books"), (28) and an equal number of risk-related publications and guidelines issued by the World Health Organization's International Programme on Chemical Safety from the 1970s onwards. (29) At the U.S.-state level, California's Proposition 65 requirement to label consumer products that contain Proposition 65-listed carcinogens or reproductive/developmental toxicants has propelled the use of quantitative methods to estimate exposure and subsequent risk, because of the Proposition 65 provision that exempts products from labeling when safe harbor can be established using prescribed risk assessment procedures. (30,31)

Chemical Alternatives Assessment
Chemical alternatives assessment is a newer methodology than risk assessment, and can be defined in its most simple form as a system to identify alternative chemicals, materials, or product designs to substitute for the use of hazardous substances.

Table II. Key terms used by alternatives assessors
Alternatives assessment: A process for identifying and comparing potential chemical and nonchemical alternatives that can be used as substitutes to replace chemicals or technologies of high concern on the basis of their hazards, performance, and economic viability (38,43)
Chemical hazard assessment: A systematic process of assessing and classifying hazards across an entire spectrum of endpoints and levels of severity (44)
Comparative chemical hazard assessment: A type of hazard assessment that evaluates hazards from two or more agents, with the intent to guide decision making toward the use of the least hazardous options via a process of informed substitution (45)
Informed substitution: An approach for replacing chemicals of concern with safer chemicals or nonchemical alternatives (46)
Regrettable substitution: Selecting an alternative that turns out to pose an equal or greater hazard than the original toxic substance (4)

A publication that is often cited as introducing the formal concept of alternatives assessment is a book titled Making Better Environmental Decisions: An Alternative to Risk Assessment. (32) What is often forgotten is that the U.S. National Environmental Policy Act of 1969 actually advocated the concept of alternatives assessment when making environmental decisions, as called out in a book review published in 2000 in Risk Analysis that misguidedly paints O'Brien's book in a poor light. (33)
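The comparative step at the heart of an alternatives assessment can be made concrete with a toy scoring scheme. The endpoints, weights, and candidate scores below are hypothetical and are not drawn from DfE, IC2, or any other published framework; the sketch only illustrates the kind of side-by-side hazard comparison that the frameworks discussed later formalize.

```python
# Toy comparative chemical hazard screen (hypothetical endpoints, scores, and weights).
# Each candidate is scored 1 (low concern) to 4 (very high concern) per endpoint;
# a lower weighted total suggests a preferable alternative, pending performance and cost checks.

ENDPOINTS = {            # endpoint -> weight (hypothetical)
    "carcinogenicity": 3.0,
    "reproductive_toxicity": 3.0,
    "acute_toxicity": 2.0,
    "aquatic_toxicity": 2.0,
    "persistence": 1.5,
    "bioaccumulation": 1.5,
}

candidates = {           # chemical -> endpoint scores (hypothetical)
    "incumbent_solvent": {"carcinogenicity": 4, "reproductive_toxicity": 2, "acute_toxicity": 3,
                          "aquatic_toxicity": 2, "persistence": 2, "bioaccumulation": 1},
    "alternative_A":     {"carcinogenicity": 1, "reproductive_toxicity": 1, "acute_toxicity": 2,
                          "aquatic_toxicity": 3, "persistence": 3, "bioaccumulation": 2},
    "alternative_B":     {"carcinogenicity": 1, "reproductive_toxicity": 2, "acute_toxicity": 2,
                          "aquatic_toxicity": 2, "persistence": 2, "bioaccumulation": 1},
}

def weighted_hazard(scores):
    return sum(ENDPOINTS[endpoint] * scores[endpoint] for endpoint in ENDPOINTS)

for name, scores in sorted(candidates.items(), key=lambda kv: weighted_hazard(kv[1])):
    flags = [e for e, s in scores.items() if s >= 4]   # endpoints of very high concern
    print(f"{name:18s} weighted hazard = {weighted_hazard(scores):5.1f}  high-concern endpoints: {flags}")
```

A real assessment would also consult authoritative hazard lists, document data gaps, and weigh performance and cost, since a low score driven by missing data is precisely how regrettable substitutions happen.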
Just as many entrenched and traditional risk assessors were not interested in learning about the principles of green chemistry, (34) informed substitution, (35) or comparative hazard assessment, (36) which are integral parts of an alternatives assessment, early alternatives assessors were not keen to be part of the risk assessment community. Most of the early discord between alternatives assessors and risk assessors can be explained by recognizing a philosophical difference between the two methodologies: many alternatives assessors place great value on hazard avoidance, while risk assessors place great value on exposure controls, regardless of the human health and/or environmental harm posed by the substance undergoing assessment. As more risk assessors are trained in alternatives assessment methods (and vice versa), this discord has lessened based on a greater understanding of the scientific basis underlying each methodology. Because it is a newer discipline than risk assessment, terms used by alternatives assessors are still undergoing definition. Key terms used by alternatives assessors are defined in Table II. Although there are multiple alternatives assessment frameworks (discussed in detail later in this issue in the article by Tickner), alternatives assessments all use standardized procedures to assess whether potential alternatives have improved human health and environmental profiles compared to a conventional ingredient. In addition, an alternatives assessment will address whether chemical or nonchemical alternatives are commercially available, perform adequately, and are cost effective. When properly conducted, an alternatives assessment provides the means to avoid regrettable substitution and promotes the selection of safer chemicals or materials. Examples of completed alternatives assessments using frameworks such as the U.S. EPA's Design for the Environment (DfE) Alternatives Assessment Criteria, (37) the IC2 Alternatives Assessment Guide, (38) and the Lowell Center for Sustainable Production's Alternatives Framework (39) are available online. (40)(41)(42) These alternatives assessment examples include selection of less hazardous halogenated flame retardants, copper-containing antifungal boat paints, and substitutes for lead, formaldehyde, and perchloroethylene, among others. The recent (2014) NRC report "A Framework to Guide Selection of Chemical Alternatives" evaluates the strengths and weaknesses of seven different alternatives assessment frameworks as part of its proposed alternatives assessment framework. (4)

Factors Advancing Alternatives Assessment
The recent NRC report "A Framework to Guide Selection of Chemical Alternatives" identifies major drivers advancing alternatives assessment. (4) These include U.S. state initiatives in California, Washington, and Maine that identify chemicals of concern. A number of large U.S. retailers have begun to track such lists and now encourage substitution/phaseout of chemicals of concern throughout the supply chain. At the international level, the E.U.'s Substances of Very High Concern (SVHC) list (Annex XIV of REACH) (currently, 31 chemicals) and the broader SVHC candidate list enable prioritization of chemicals for reduction or phaseout.
Table III. Green Chemistry Principles (34) (hazard-based principles are shaded)
1. Prevent waste
2. Atom economy
3. Less hazardous chemical syntheses
4. Design safer chemicals and products
5. Use safer solvents and auxiliaries
6. Design for energy efficiency
7. Use of renewable raw materials/feedstocks
8. Reduce derivatives
9. Use catalytic reagents, not stoichiometric reagents
10. Design chemicals and products to degrade after use
11. Analyze in real time for pollution prevention
12. Minimize the potential for accidents through safer process chemical selection

COEXISTENCE OF RISK ASSESSMENT AND ALTERNATIVES ASSESSMENT FRAMEWORKS
Instead of viewing risk assessment and alternatives assessment frameworks as competing paradigms, it is critical to recognize that they are intended to perform different functions. Admittedly, each framework incorporates an exposure assessment component, but unlike a risk assessment, an alternatives assessment uses the differentiation provided by the alternatives assessment process to select environmentally preferable chemicals and materials, not just incrementally better ones. For a business trying to select safer chemicals, an alternatives assessment provides a more realistic framework with which to make decisions because of an alternatives assessment's primary reliance on assessing hazards and secondary reliance on exposure assessment. Relative to other life cycle impacts, the inherent hazard of a chemical cannot be changed, and although it may be possible to reduce or minimize exposure (and subsequently reduce risk), it is challenging to accurately, precisely, or realistically assess such exposures under all phases of a chemical's life cycle. (34,50) Five of the 12 principles of green chemistry are focused on hazard reduction (shaded in Table III), and when followed as part of an alternatives assessment, they promote the design (or redesign) of materials and products that reduce or eliminate the use or generation of hazardous substances. This is the definition of green chemistry, and when practiced as part of a chemical alternatives assessment, it benefits not only the company selecting the less hazardous alternative, but also the world at large. As illustrated in Fig. 1, alternatives assessment and risk assessment methods each function to empower decisionmakers, but they facilitate quite different end goals.

CONCLUSION
In his book titled Risk, U.K. geographer John Adams describes risk by quoting turn-of-the-century U.S. economist Frank Knight: "If you don't know for sure what will happen, but you know the odds, that's risk." (51) Particularly for materials, products, or contaminated sites that contain inherently hazardous substances, risk assessment is a powerful tool to assess the likelihood of harm, and if misused, it has the potential to promote the continued use of substances that at sufficient levels of exposure may result in adverse human health and/or environmental effects. In contrast, an alternatives assessment begins with a different end game, and that is to inform the selection of less hazardous chemicals and materials so that the concept of acceptable risk is eliminated from the equation altogether. Ideally, through increased adoption of alternatives assessment methods at the initial stage of material or product design, there will be less reliance on post facto risk-based assessment techniques because the potential for harm is significantly reduced, if not avoided, in the first place.
Thirty-Year Trends in Complications in U.S. Adults With Newly Diagnosed Type 2 Diabetes OBJECTIVE To assess the prevalence of and trends in complications among U.S. adults with newly diagnosed diabetes. RESEARCH DESIGN AND METHODS We included 1,486 nonpregnant adults (aged ≥20 years) with newly diagnosed diabetes (diagnosed within the past 2 years) from the 1988–1994 and 1999–2018 National Health and Nutrition Examination Survey. We estimated trends in albuminuria (albumin-to-creatinine ratio ≥30 mg/g), reduced estimated glomerular filtration rate (eGFR <60 mL/min/1.73 m2), retinopathy (any retinal microaneurysms or blot hemorrhages), and self-reported cardiovascular disease (history of congestive heart failure, heart attack, or stroke). RESULTS From 1988–1994 to 2011–2018, there was a significant decrease in the prevalence of albuminuria (38.9 to 18.7%, P for trend <0.001) but no change in the prevalence of reduced eGFR (7.5 to 9.9%, P for trend = 0.30), retinopathy (1988–1994 to 1999–2008 only; 13.2 to 12.1%, P for trend = 0.86), or self-reported cardiovascular disease (19.0 to 16.5%, P for trend = 0.64). There were improvements in glycemic, blood pressure, and lipid control in the population, and these partially explained the decline in albuminuria. Complications were more common at the time of diabetes diagnosis for adults who were older, lower income, less educated, and obese. CONCLUSIONS Over the past three decades, there have been encouraging reductions in albuminuria and risk factor control in adults with newly diagnosed diabetes. However, the overall burden of complications around the time of the diagnosis remains high. 126 mg/dL (6), and a sharp increase in the diagnosis of diabetes followed (12). In 2009, glycated hemoglobin $6.5% was first recommended for use in diagnosis (13). The objective of our study was to assess the national prevalence of and trends in risk factors and microvascular and macrovascular complications among U.S. adults with newly diagnosed diabetes. To accomplish this, we analyzed three decades of data from the National Health and Nutrition Examination Survey (NHANES). All participants were asked whether they had ever been diagnosed with diabetes other than during pregnancy. Those reporting a diagnosis of diabetes were asked how old they were when they received their diagnosis. We calculated duration of diagnosed diabetes by subtracting participants' age of diabetes diagnosis from their age reported during the interview. We limited our analytic sample to nonpregnant adults aged $20 with newly diagnosed diabetes, defined as being diagnosed within the past 2 years (n 5 1,486). Risk Factor Treatment and Control Hemoglobin A 1c (HbA 1c ) was measured using high-performance liquid chromatography methods. To account for changes in laboratory methods over time, we calibrated HbA 1c using an equipercentile equating approach (15). We examined the proportion of participants with an HbA 1c ,7.0% (,53 mmol/mol) (16). We defined receiving diabetes treatment as the self-reported current use of blood glucose-lowering pills or insulin. Blood pressure was measured up to three times with a mercury sphygmomanometer, and the mean of all available readings was used in the analysis. We defined hypertension as having elevated mean blood pressure (mean systolic/ diastolic blood pressure $140/90 mmHg or $130/80 mmHg) or the self-reported current use of antihypertensive medication (17,18). 
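The analytic sample is built from a few arithmetic rules: age at interview, self-reported age at diagnosis, and a 2-year duration cutoff. A minimal pandas sketch of that filter is shown below; the column names are hypothetical stand-ins for the underlying NHANES variables.

```python
# Constructing the "newly diagnosed diabetes" analytic sample (column names illustrative).
import pandas as pd

nhanes = pd.DataFrame({
    "age_at_interview": [52, 67, 24, 45, 19],
    "age_at_diagnosis": [51, 60, 23, 44, 18],   # self-reported age when told they had diabetes
    "diagnosed_diabetes": [True, True, True, True, True],
    "pregnant": [False, False, False, True, False],
})

# Duration of diagnosed diabetes = age at interview minus age at diagnosis.
nhanes["diabetes_duration"] = nhanes["age_at_interview"] - nhanes["age_at_diagnosis"]

newly_diagnosed = nhanes[
    (nhanes["diagnosed_diabetes"])
    & (~nhanes["pregnant"])
    & (nhanes["age_at_interview"] >= 20)
    & (nhanes["diabetes_duration"] <= 2)        # diagnosed within the past 2 years
]
print(newly_diagnosed)
```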
We defined receiving treatment as the self-reported current use of antihypertensive medication, and blood pressure control as having a mean blood pressure <140/90 mmHg or <130/80 mmHg. Serum total cholesterol was measured enzymatically, and measurements from fasting and nonfasting participants were included in the analysis. We defined hyperlipidemia as having elevated lipids (total cholesterol ≥240 mg/dL or ≥200 mg/dL) or the self-reported current use of lipid-lowering medication (19). We defined receiving treatment as the self-reported current use of lipid-lowering medication and lipid control as total cholesterol <240 mg/dL or <200 mg/dL.

Microvascular Complications
Serum creatinine was measured using the Jaffe method. All creatinine measurements were recalibrated to standardized creatinine measurements using recommended equations that minimize the effects of laboratory drift (20). We determined the estimated glomerular filtration rate (eGFR) using the Chronic Kidney Disease Epidemiology Collaboration formula (21). We defined reduced eGFR as having an eGFR <60 mL/min/1.73 m2. Urine albumin and creatinine concentrations were measured in a random urine sample using fluorescent immunoassay and a modified Jaffe method, respectively. We defined albuminuria as an albumin-to-creatinine ratio ≥30 mg/g. We defined any chronic kidney disease as having reduced eGFR, albuminuria, or both (22). Medication use was assessed through pill bottle examination review. We evaluated use of ACE inhibitors or angiotensin II receptor blockers among those with chronic kidney disease. Participants aged ≥40 years had film photographs taken of one randomly selected eye in the NHANES III (1988-1994) and digital photographs taken of both eyes in the 2005-2008 NHANES. Retinopathy was assessed by graders using the Early Treatment Diabetic Retinopathy Study (ETDRS) protocol (23). We defined diabetic retinopathy as having any retinal microaneurysms or blot hemorrhages, with or without more severe lesions (24). Participants aged ≥40 years participated in a lower-extremity examination during the 1999-2004 cycles of the continuous NHANES. These participants received monofilament testing on three sites on each foot. We defined peripheral neuropathy as having one or more insensate areas. Blood pressure measurements were taken at participants' ankles and right arm. We computed the ankle-brachial pressure index by dividing the systolic blood pressure measured at each ankle by the systolic blood pressure measured at the arm. We defined peripheral artery disease as having an ankle-brachial pressure index of <0.9 for either ankle (25). Participants reported whether they ever had an ulcer or sore on their legs or feet that lasted >4 weeks. We defined any lower-extremity disease as having peripheral neuropathy, peripheral artery disease, or a history of ulcers (26).

Cardiovascular Disease
Participants reported whether they had ever been diagnosed with congestive heart failure, stroke, or heart attack. We defined any cardiovascular disease as having at least one of these conditions.

Sociodemographic Measures and BMI
Participants self-reported their age, sex (male/female), race/ethnicity (non-Hispanic White, non-Hispanic Black, Mexican American, other), education (high school or less, some college, college graduate or above), family income (income-to-poverty ratio <130%, 130-349%, ≥350%), health insurance status (uninsured, any health insurance), access to a usual source of care (has access, no access), and smoking status (current, former, never).
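The kidney outcomes above come down to two formulas and two cut points. The sketch below applies the 2009 CKD-EPI creatinine equation (presumably the version cited for the study period) and the study's thresholds for reduced eGFR and albuminuria; the function and variable names are mine, so treat it as an illustration of the classification logic rather than the authors' code.

```python
# Kidney outcome definitions used in the study, as a minimal sketch.
# eGFR: 2009 CKD-EPI creatinine equation; reduced eGFR: <60 mL/min/1.73 m^2.
# Albuminuria: urine albumin-to-creatinine ratio >= 30 mg/g.

def ckd_epi_2009_egfr(scr_mg_dl, age_years, female, black):
    """Estimated GFR (mL/min/1.73 m^2) from serum creatinine, 2009 CKD-EPI equation."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

def kidney_outcomes(scr_mg_dl, age_years, female, black,
                    urine_albumin_mg_dl, urine_creatinine_g_dl):
    egfr = ckd_epi_2009_egfr(scr_mg_dl, age_years, female, black)
    acr_mg_g = urine_albumin_mg_dl / urine_creatinine_g_dl   # mg/dL over g/dL -> mg/g
    return {
        "egfr": round(egfr, 1),
        "reduced_egfr": egfr < 60,
        "albuminuria": acr_mg_g >= 30,
        "any_ckd": egfr < 60 or acr_mg_g >= 30,
    }

print(kidney_outcomes(scr_mg_dl=1.1, age_years=58, female=True, black=False,
                      urine_albumin_mg_dl=4.5, urine_creatinine_g_dl=0.1))
```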
We calculated BMI as measured weight in kilograms divided by height in meters squared and classified participants into three weight status groups (normal, BMI <25 kg/m2; overweight, BMI 25-29.9 kg/m2; obese, BMI ≥30 kg/m2).

Statistical Analyses
We calculated participant characteristics, the prevalence, treatment, and control of risk factors, and the prevalence of microvascular and cardiovascular disease over time. Because of the limited sample sizes in the individual 2-year survey cycles, we pooled survey years into three time intervals (1988-1994, 1999-2008, and 2009-2018) to improve the precision of our estimates (14). We assessed trends using logistic (binary outcomes), linear (mean of continuous outcomes), or quantile (median of continuous outcomes) regression models. Following NCHS guidelines to test for trends, we modeled the midpoint of each survey period as a continuous, linear predictor in the regression models (27). We examined the distribution of risk factors and compared changes over time using χ2 tests. For complications that changed significantly over time, we used multivariable logistic regression models to explore how changes in sociodemographic characteristics, diabetes risk factors, and weight status might explain the observed trends. We examined risk factors for complications by combining data from 1988 to 2018 and estimating age-, sex-, and race/ethnicity-adjusted logistic regression models. In sensitivity analyses, we repeated our trend analyses 1) adjusting for age, sex, and race/ethnicity using predictive margins (28); and 2) defining newly diagnosed diabetes as being diagnosed within ≤1 year, a common cut point used in surveillance research (29,30). Because the approach to assessing retinopathy changed over time, we also performed a sensitivity analysis using a randomly selected fundus photograph from one eye (rather than both eyes) for the NHANES 2005-2008 cycles. Following past studies (24), we used photographs from the right eye to classify participants with an even study identification number and the left eye for those with an odd number. All analyses were conducted using Stata version 16.0 (StataCorp). The recommended sample weights were used, making our results representative of the civilian, noninstitutionalized U.S. adult population with newly diagnosed diabetes. A two-sided P value of <0.05 was considered statistically significant.

RESULTS
The age and sex distribution of U.S. adults with newly diagnosed diabetes did not change significantly from 1988 to 2018 (Table 1), whereas the proportion who were non-White, college educated, or had obesity increased substantially over the 30-year period. The proportion of adults with newly diagnosed diabetes achieving blood pressure control increased during the 30-year period (Supplementary Fig. 1), as did the use of blood pressure-lowering medication (Table 2). The overall prevalence of hypertension increased when defined as ≥140/90 mmHg. When defined as ≥130/80 mmHg, the prevalence of hypertension was unchanged. An increasing proportion of adults with hypertension were treated and controlled to <140/90 mmHg (47.8 to 65.9%, P for trend = 0.02) and <130/80 mmHg (9.0 to 36.8%, P for trend <0.001), respectively. The proportion with cholesterol control increased (Supplementary Fig. 1). The use of lipid-lowering medication rose significantly (Table 2), and the prevalence of hyperlipidemia was stable.
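The trend test described in the Statistical Analyses paragraph is easy to sketch: pool cycles into periods, attach each period's midpoint as a continuous predictor, and read the "P for trend" off that coefficient in a logistic model. The sketch below uses simulated data and plain, unweighted logistic regression; the paper's Stata analysis additionally applies NHANES sample weights and the survey design (strata and PSUs), which this sketch omits.

```python
# Trend test sketch: model the survey-period midpoint as a continuous predictor
# of a binary outcome (e.g., albuminuria). Sample weights and survey design omitted.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
periods = rng.choice(["1988-1994", "1999-2008", "2009-2018"], size=900)
midpoints = pd.Series(periods).map({"1988-1994": 1991.0, "1999-2008": 2003.5, "2009-2018": 2013.5})

# Simulated binary outcome with prevalence declining over calendar time (illustration only).
true_logit = -0.5 - 0.04 * (midpoints - 2003.5)
outcome = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

X = sm.add_constant(midpoints.rename("midpoint"))
fit = sm.Logit(outcome, X).fit(disp=0)
print(fit.params)                      # negative midpoint coefficient -> declining prevalence
print("P for trend:", fit.pvalues["midpoint"])
```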
An increasing share of those with hyperlipidemia were treated and controlled to total cholesterol <240 mg/dL (17.1 to 71.2%, P for trend <0.001) or <200 mg/dL (9.4 to 52.4%, P for trend <0.001), respectively. The prevalence of any chronic kidney disease declined from 40.4 to 25.5% (P for trend = 0.003) (Table 3). These gains were driven by declines in albuminuria (38.9 to 18.7%, P for trend <0.001). In contrast, reduced eGFR remained stable over time (7.5 to 9.9%, P = 0.30). The use of ACE inhibitors/angiotensin II receptor blockers increased substantially among those with low eGFR or albuminuria (Supplementary Table 1). The prevalence of retinopathy among U.S. adults aged ≥40 years with newly diagnosed diabetes was unchanged from 1988 to 2008 (13.2 to 12.1%) (Table 3). Results were similar in sensitivity analyses using one fundus photograph to classify participants in the 2005-2008 NHANES (results not shown). The prevalence of any self-reported cardiovascular disease was stable from 1988 to 2018 (19.0 to 16.5%) (Table 3). We explored factors that might explain the declines in the prevalence of albuminuria. Differences in albuminuria across time periods increased after adjusting for age, sex, and race/ethnicity but decreased after adjusting for education (Supplementary Table 2). Changes in HbA1c, blood pressure, and total cholesterol partially accounted for the population-level improvements in albuminuria. Adjusting for weight status increased the differences in albuminuria over time. After adjusting for age, sex, and race/ethnicity, the prevalence of any complication for adults with newly diagnosed diabetes was higher among those who were older, lower income, less educated, current or former smokers, and obese (Table 4). Trends in risk factors and complications were similar after adjusting for age, sex, and race/ethnicity (Supplementary Tables 3 and 4) and when defining newly diagnosed diabetes as being diagnosed within 1 year (Supplementary Tables 5 and 6).

CONCLUSIONS
From 1988 to 2018, there were marked improvements in the treatment and control of risk factors (HbA1c, blood pressure, and cholesterol) and a substantial decline in the prevalence of albuminuria in U.S. adults with newly diagnosed type 2 diabetes. However, the burden of complications remained high. Approximately 26% had chronic kidney disease, 24% had lower-extremity disease, 12% had retinopathy, and 17% had a history of cardiovascular disease. Our findings extend population research on the health status of adults with newly diagnosed type 2 diabetes. A prior U.S. population-based study using data from the National Health Interview Survey (NHIS) found that from 1997 to 2003 the prevalence of obesity rose among adults with newly diagnosed diabetes, while the prevalence of cardiovascular disease and hypertension was unchanged (31). However, data in the NHIS are entirely self-reported. When examining a broader range of objectively measured risk factors and comorbidities, we confirmed the increase in obesity but also found evidence of improvements in glycemic control and kidney health. The reduction in albuminuria was likely related to major improvements in the detection of diabetes over the study period (4). This is consistent with research showing that the proportion of undiagnosed diabetes cases has decreased in the past two decades (2,32).
Declines in albuminuria were especially pronounced from 1988-1994 to 1999-2008, corresponding to the reduction of the fasting blood glucose diagnostic threshold and increased emphasis on diabetes screening (5-9). We also found that declines in HbA 1c , blood pressure, and total cholesterol explained some of the decrease in albuminuria. Results for HbA 1c and blood pressure are consistent with landmark trials demonstrating the benefits of tight glycemic and blood pressure control (33,34), and findings for total cholesterol are congruent with research suggesting an association between dyslipidemia and kidney disease risk (35). Increasing educational attainment was another important contributor and suggests the fundamental importance of education in health. Growing awareness of the importance of albuminuria among clinicians, along with rising use of renin-angiotensin system blockers, were likely important factors as well. The high burden of complications suggests that timely detection of diabetes remains a challenge for some patients. In particular, we found that adults who were older, lower income, less educated, or obese had the highest prevalence of complications at the time of diagnosis. Approximately half of eligible U.S. adults receive recommended diabetes screenings, although uptake is significantly lower among certain high-risk groups, such as those who are low-income (11). More targeted screening programs for high-risk, underserved patients may thus reduce complications at diagnosis. Our findings also indicate that more aggressive treatment of risk factors immediately after diagnosis may be needed. In particular, we found that control of hypertension or hyperlipidemia failed in up to 63% and 48% of adults with newly diagnosed diabetes, respectively, highlighting the need to prioritize blood pressure and lipid management. We observed a nonsignificant increase in the prevalence of reduced eGFR from 1988-1994 to 1999-2008, followed by little change in 2009-2018. These trends are consistent with trends in the total population of adults with diabetes. In U.S. adults with diabetes, the prevalence of reduced eGFR increased in 1988-1994 to 2003-2004 before subsequently leveling off in 2011-2012 (36). Prior studies speculate that rising blood pressure treatment and control may account for some of the increase in reduced eGFR in adults with diabetes due to their hemodynamic effects (37,38). Consistent with this suggestion, we found that trends in blood pressure-lowering medication use followed trends in reduced eGFR, rising from 1988-1994 to 1999-2008 and leveling off in 2009-2018. We also did not observe any major improvements in the prevalence of retinopathy or cardiovascular disease in adults with newly diagnosed diabetes over this 30-year period. However, these findings must be viewed in light of some methodological limitations. The NHANES III (1988-1994) used film photography to assess retinopathy, whereas the NHANES 2005-2008 used higher-quality digital photography. Detection of retinopathy may therefore have been more sensitive in the later survey years, potentially affecting the comparability of estimates across years (24). Likewise, we likely underestimate the true prevalence of cardiovascular disease, because this information is self-reported in NHANES. In particular, subclinical cardiovascular disease is common in older adults and those with diabetes (39). 
Trends will also reflect improvements in detection and survival; studies of the general population with diabetes have found steady declines in cardiovascular complications and all-cause and cardiovascular mortality (40)(41)(42). Our study has several additional limitations. First, there may be misclassification of incident diabetes cases, because our definition relies on participants accurately reporting their diabetes status and age of diagnosis. However, prior research indicates that these measures are highly specific and reliable (43,44). Second, because of sample size limitations, we may have lacked the power to detect small changes in complications over time. Third, retinopathy and lower-extremity disease assessments were only performed in those aged $40 years. Thus, we were not able to draw conclusions regarding these outcomes in younger individuals. Fourth, our study was cross-sectional, and we cannot determine the temporality of the observed associations. Strengths of the study include the contemporary, nationally representative sample of U.S. adults with newly diagnosed diabetes spanning 30 years. With the exception of cardiovascular disease, the assessment of risk factors and complications was based on objective, rigorous, and systematic measurement. Over the past three decades, there were significant reductions in albuminuria and improvements in the treatment and control of HbA 1c , blood pressure, and cholesterol in adults with newly diagnosed type 2 diabetes. These results suggest that there have been improvements in diabetes screening and that we are diagnosing cases earlier in the disease process. Nevertheless, the overall burden of complications and uncontrolled risk factors remains high. Targeted screening of high-risk populations and aggressive risk factor treatment immediately following diagnosis are important strategies for sustaining progress moving forward. The corresponding author, E.S., attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. M.F. is the guarantor of this work and, as such, had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Multisite surveillance for influenza and other respiratory viruses in India: 2016–2018 There is limited surveillance and laboratory capacity for non-influenza respiratory viruses in India. We leveraged the influenza sentinel surveillance of India to detect other respiratory viruses among patients with acute respiratory infection. Six centers representing different geographic areas of India weekly enrolled a convenience sample of 5–10 patients with acute respiratory infection (ARI) and severe acute respiratory infection (SARI) between September 2016-December 2018. Staff collected nasal and throat specimens in viral transport medium and tested for influenza virus, respiratory syncytial virus (RSV), parainfluenza virus (PIV), human meta-pneumovirus (HMPV), adenovirus (AdV) and human rhinovirus (HRV) by reverse transcription polymerase chain reaction (RT-PCR). Phylogenetic analysis of influenza and RSV was done. We enrolled 16,338 including 8,947 ARI and 7,391 SARI cases during the study period. Median age was 14.6 years (IQR:4–32) in ARI cases and 13 years (IQR:1.3–55) in SARI cases. We detected respiratory viruses in 33.3% (2,981) of ARI and 33.4% (2,468) of SARI cases. Multiple viruses were co-detected in 2.8% (458/16,338) specimens. Among ARI cases influenza (15.4%) were the most frequently detected viruses followed by HRV (6.2%), RSV (5%), HMPV (3.4%), PIV (3.3%) and AdV (3.1%),. Similarly among SARI cases, influenza (12.7%) were most frequently detected followed by RSV (8.2%), HRV (6.1%), PIV (4%), HMPV (2.6%) and AdV (2.1%). Our study demonstrated the feasibility of expanding influenza surveillance systems for surveillance of other respiratory viruses in India. Influenza was the most detected virus among ARI and SARI cases. Introduction Respiratory viruses like influenza and respiratory syncytial virus (RSV) are responsible for substantial global morbidity and mortality annually, with substantial burden shared by young children and older adults [1][2][3]. While many countries have sentinel surveillance generating data on epidemiology of influenza, data about other respiratory viruses (ORV) are scarce especially in lower-Middle income countries (LMIC) although they are estimated to cause substantial morbidity and mortality [4,5]. Infection with ORV usually presents with influenza-like-illness with symptoms like cough with or without fever but exhibits differences in epidemiology, severity and seasonality [5]. In view of India's geographic location, vast population and diverse seasonality, it is crucial that the influenza surveillance system be strengthened to detect ORVs. Understanding epidemiology of influenza and ORV in India would help in timing the use of empirical antiviral treatment for influenza, influenza vaccination, and implementing nonpharmacological interventions. The WHO Global Influenza Surveillance and Response System (GISRS) has been monitoring influenza viruses for more than six decades. GISRS operates through a global network of National Influenza Centers (NIC), which are laboratories that provide information on the circulation and evolution of influenza viruses globally. In 2004, US Centers for Disease Control and Prevention (CDC) started collaborating with Indian Council of Medical Research-National Institute of Virology, to strengthen a network of 10 surveillance centers across India [6]. 
Annual percent positivity of influenza in influenza like illness(ILI)/ severe acute respiratory infection (SARI) specimens was found to be between 12-14%, but the prevalence of ORV was unknown [7]. The emergence of the COVID-19 pandemic has further highlighted the need for surveillance for ORV. WHO has been evaluating the feasibility of leveraging the GISRS platform for surveillance of COVID-19, RSV and other common respiratory viruses [8]. In order to strengthen the capacity for pandemic influenza preparedness and response, as well as to assess the feasibility of conducting surveillance for multiple respiratory viruses and understand their epidemiology, the Indian Council of Medical Research-National Institute of Virology (ICMR-NIV), in collaboration with US Centers for Disease Control and Prevention (CDC) implemented a multi-site sentinel surveillance system for severe acute respiratory infection (SARI) and acute respiratory infection (ARI) surveillance in India from 2016 to 2018. We present the results of this multi-site pan-respiratory viral surveillance network. Study setting The acute respiratory infection surveillance network included six centers specifically selected to represent different climates and geographic areas of India. From north to south, the participating centers were Sher-i-Kashmir Institute of Medical Sciences (SKIMS), Srinagar (34.0˚N, Jammu and Kashmir); All India Institute of Medical Sciences (AIIMS) (28.6˚N, New Delhi); Indian Council of Medical Research (ICMR)-Regional Medical Research Center (RMRC), Dibrugarh (27.5˚N, Assam); ICMR-National Institute for Cholera and Enteric Diseases (NICED), Kolkata (22.6˚N, West Bengal); ICMR-National Institute of Virology (ICMR-NIV), Pune (18.5˚N, Maharashtra); King Institute of Preventive Medicine & Research (KIPMR), Chennai (13.1˚N, Tamil Nadu). ICMR-NIV, Pune was the reference and coordinating center for the study. Each study center selected 2 to 3 sentinel hospitals and clinics having general medicine and pediatrics departments for enrollment of the participants. Ethics approval The study protocol was approved by the appropriate institutional human ethics committees and approved by the Health Ministry's Screening Committee of India. The participants were informed about the study in their local language and written consent/assent was obtained before enrollment in the study. For participant under 18 years of age, written informed consent was obtained from the parent/guardian. Case definitions For enrolling the participants, acute respiratory infection (ARI) was defined as illness in a person presenting in the outpatient department (OPD) with acute onset (within 7 days) of any two of the following symptoms: fever/feverishness/chills, cough, nasal congestion, shortness of breath, or sore throat. Severe acute respiratory infection (SARI) definition was adapted from the WHO case definition and was defined as cases with history of cough with onset within the last 7 days and requiring overnight hospitalization. For defining SARI among infants aged <2 months, physician diagnosis suggestive of acute lower respiratory infection (pneumonia, bronchitis, bronchiolitis, sepsis) requiring overnight hospitalization was used. Participant enrollment and case identification Each center enrolled a convenience sample of 5 to 10 ARI cases and 5 to 10 SARI cases of all ages every week. During any up surge in respiratory infections noticed by the clinicians in the sentinel hospitals, additional participants were enrolled. 
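Because the ARI and SARI definitions quoted above are rule-based, they translate directly into screening logic of the kind sentinel-site staff apply at enrollment. The field names in the sketch below are illustrative, not the study's actual form variables.

```python
# Screening logic for the study's case definitions (field names are illustrative).

ARI_SYMPTOMS = ("fever_or_chills", "cough", "nasal_congestion", "shortness_of_breath", "sore_throat")

def is_ari(symptoms_present, days_since_onset):
    """ARI: acute onset (within 7 days) of any two of the listed symptoms."""
    return days_since_onset <= 7 and sum(symptoms_present.get(s, False) for s in ARI_SYMPTOMS) >= 2

def is_sari(has_cough, days_since_onset, hospitalized_overnight,
            age_months, physician_diagnosed_alri=False):
    """SARI: cough with onset within the last 7 days requiring overnight hospitalization;
    for infants aged <2 months, physician-diagnosed acute lower respiratory infection
    (pneumonia, bronchitis, bronchiolitis, sepsis) requiring overnight hospitalization."""
    if age_months < 2:
        return physician_diagnosed_alri and hospitalized_overnight
    return has_cough and days_since_onset <= 7 and hospitalized_overnight

print(is_ari({"cough": True, "sore_throat": True}, days_since_onset=3))                          # True
print(is_sari(has_cough=True, days_since_onset=5, hospitalized_overnight=True, age_months=30))   # True
```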
At all sentinel sites, physicians and nurses were trained to screen patients using ARI and SARI case definitions. ARI cases were screened from outpatient facility and SARI cases from general medicine, pediatric and pulmonary medicine wards. Trained staff visited outpatients clinics and inpatient wards on fixed day of week. Patients fulfilling the case definitions and consenting to participate were recruited in the study. Every week first 5-10 ARI/ SARI cases identified in outpatient clinics and inpatient wards of each center were enrolled into the study. The clinical and epidemiological details of each enrolled participant were recorded in a standardized case report form. Specimen collection Clinical sample collection, transportation, and storage were done as per WHO guidelines [9]. Nasal and/or throat respiratory specimens were collected from enrolled ARI and SARI cases by trained personnel. Only nasal specimens were collected from infants aged less than 1year. Specimens were transported in viral transport media to the respective site laboratory within 24 hours in cold box with ice packs. If the samples were not tested immediately, they were stored at -80˚C. Virus detection All centers performed external molecular quality assurance for the detection of respiratory viral pathogens (provided by the ICMR-NIV Pune) successfully. NIV participated in the external quality assurance program by Quality Control for Molecular Diagnostics (QCMD) for influenza and non-influenza respiratory viruses with 100% concordance. All the centers followed standard operating procedures for sample processing and viral detection using the same RTPCR assays. RNA was extracted using a Qiagen viral RNA isolation kit by all the centers except NIV, which used a MagMax-96 kit as per manufacturer's protocol. All the specimens were tested by real-time reverse transcription polymerase chain reaction (rRT-PCR) for the following viruses: influenza A [A(H1N1)pdm09, A(H3N2)], influenza B [B/Yamagata and B/Victoria lineages] along with house-keeping RNaseP gene, respiratory syncytial virus A and B, metapneumovirus, parainfluenza viruses 1, 2, 3, and 4, rhinoviruses and adenoviruses using a protocol previously described [10]. Nucleic acid amplification was performed using one step RT-PCR (qRT-PCR SuperScript III kit, Invitrogen, USA). A 25 μl PCR reaction comprised of 10 μmol of each forward and reverse primer, 5 μmol of TaqMan probe, 12.5 μl 2X buffer, 0.5 μl SuperScript III enzyme and 5 μl nucleic acid templates. Thermal cycling conditions were: 50˚C for 30 minutes for reverse transcription, initial denaturation at 94˚C for 5 minutes, 45 cycles of three steps (15 seconds at 94˚C, 15 seconds at 50˚C and 30 seconds at 55˚C incubation step during which fluorescence data were collected). For identifying an oseltamivir-resistant influenza A(H1N1)pdm09 virus possessing the H275Y mutation (in the neuraminidase (NA) gene) in clinical specimens or in clinical isolates, the laboratory performed allelic discrimination using a rRT-PCR protocol shared by the National Institute of Health, Thailand [11]. Sequencing of the HA gene of influenza and G gene of RSV was carried on a subset of positive samples using ABI 3730 DNA analyzer [12]. 
The sequences obtained were edited with SeqScape v2.5 software (Applied Biosystems, USA), and pairwise sequence alignment and phylogeny of the HA gene of influenza viruses and the G gene of RSV were performed using the best-fit Tamura-Nei nucleotide substitution model to generate a neighbor-joining tree with 1,000 bootstrap replicates, including reference strains, previously reported strains from India, and strains from the global database, using the MEGA 6 program [13].

Data analysis
All data were entered into Epi Info 7 by each participating center, collated every week at NIV, Pune, and analyzed using STATA 15 (StataCorp LLC). The overall percent positivity for each virus for the surveillance period was calculated as the proportion of ARI and SARI samples that tested positive for that virus, along with a 95% confidence interval. The monthly percent positivity (mPP) of each virus for each center was calculated as the proportion of specimens testing positive out of all specimens (ARI and SARI) with symptom onset in the same month. To detect increases in activity, we compared mPP with the overall percent positivity during the project period (September 2016 to October 2018). Months with mPP higher than the overall percent positivity were considered as having increased virus activity. The chi-square test was used to measure the statistical significance of virus positivity across different groups. Multivariable logistic regression was used to derive adjusted odds ratios after adjusting for age and center.

Results
The overall percent positivity for influenza A(H1N1)pdm09 ranged from 3.3% in Dibrugarh to 10.1% in Pune (S1 Table). Similarly, the overall percent positivity for influenza A(H3N2) ranged from 0.9% in Pune to 7.3% in Srinagar, and for influenza B from 0.1% in Kolkata to 8.2% in Srinagar. The mPP of influenza viruses exceeded the overall prevalence during different months at different centers (Fig 2). Viruses with elevated 50% inhibitory concentration (IC50) values in phenotypic assays, conferring reduced susceptibility to oseltamivir, were also detected; of these, three were detected in Pune, and one each was detected in Chennai, Srinagar, and Delhi. The overall percent positivity of RSV ranged from 0.8% in Kolkata to 8% in Srinagar. Increased RSV activity, with mPP higher than the overall percent positivity, was seen between August and November (S1 Table) (Fig 2). In 2018, increased RSV activity was seen in Dibrugarh from February to August. Phylogenetic analysis of the G gene of 27 RSV A viruses detected in 2016 and 29 RSV B viruses detected in 2017-18 showed that RSV A of the ON1 genotype and RSV B of the BA9 genotype were in circulation.

Discussion
The results of this multisite surveillance suggest that influenza surveillance capacity can be leveraged for the surveillance of other respiratory viruses. Using the ARI and SARI case definitions across all age groups, we detected one or more respiratory viruses in one-third of the specimens using RT-PCR. Influenza viruses were the most frequently detected, followed by RSV, among both ARI and SARI cases. The overall virus detection (ARI: 33%, SARI: 33%) in our study was lower than in studies from other tropical and subtropical countries such as Vietnam (SARI: 42%), Yemen (SARI: 41%), Thailand (ILI: 45%, SARI: 38%), Bolivia (ILI: 50%), and Suriname (ILI: 61%; SARI: 41%) [14][15][16][17][18]. The viral detection in those aged <15 years (ARI: 40%, SARI: 41%) in our study was comparable to detection in the same age group in the Thailand study (ARI: 49%, SARI: 41%) [16]. Influenza was the most common virus detected in those aged ≥5 years in both ARI and SARI cases. This was similar to findings from studies in Thailand and Vietnam [14,16].
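The mPP calculation in the Data analysis paragraph is a small aggregation. The sketch below computes overall and monthly percent positivity for a single virus on simulated data, attaches a normal-approximation 95% CI to the overall estimate, and flags months whose mPP exceeds the overall value — the study's working definition of increased activity. Column names and data are illustrative.

```python
# Overall and monthly percent positivity (mPP) for one virus, with months of
# increased activity flagged as mPP > overall positivity (illustrative data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
months = pd.period_range("2016-09", "2018-10", freq="M").astype(str)
specimens = pd.DataFrame({
    "onset_month": rng.choice(months, size=2000),
    "influenza_positive": rng.binomial(1, 0.14, size=2000),
})

overall = specimens["influenza_positive"].mean()
n = len(specimens)
ci_halfwidth = 1.96 * np.sqrt(overall * (1 - overall) / n)   # normal-approximation 95% CI
print(f"Overall percent positivity: {100*overall:.1f}% "
      f"(95% CI {100*(overall - ci_halfwidth):.1f}-{100*(overall + ci_halfwidth):.1f})")

mpp = specimens.groupby("onset_month")["influenza_positive"].mean()
increased_activity = mpp[mpp > overall]
print("Months with increased activity:", list(increased_activity.index))
```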
High proportion of participants aged �5 years (70% in ARI and 60% in SARI) would have contributed to higher prevalence of influenza seen in the current analysis. The variation in viral detection across centers in our study could be due to difference in proportion of cases below 5 and above 60 years in different centers. Centers with lower proportion of under 5 years was (Chennai and Pune) or had higher proportion of adults> = 60 years (Srinagar) had the lower detection of viruses in ARI and SARI cases. Among SARI cases, influenza virus was detected in 27% of pregnant women, 12.4% of cases with pre-existing chronic diseases and 12.7% adults �60 years. Studies from India during the same period had shown high incidence of influenza in pregnant women (68 to 90 per 10,000 pregnant women months) and older adults (influenza associated lower respiratory infection incidence:7.9 per 1000 person years) [19,20]. High burden of influenza among these WHO Strategic Advisory Group of Experts (SAGE) target groups suggests that value proposition of influenza vaccination and early interventions (early detection and antiviral treatment) among these groups need to be explored by the country. In contrast to influenza viruses, other respiratory viral pathogens predominated mainly among children under 5 years, consistent with earlier published data on non-influenza viruses [17,21]. RSV was most commonly detected virus among children less than 2 years of age both among ARI and SARI cases, emphasizing the leading role of RSV in respiratory infections in young children. Globally, it has been shown that lower middle-income countries contribute to nearly 40% RSV associated ALRI cases and nearly 75% of the RSV-related deaths [22,23]. The increased activity of influenza in Delhi, Kolkata, Dibrugarh, Pune and Chennai corresponded with rainy season (June to September in Delhi, Kolkata, Dibrugarh, Pune; October to December in Chennai) and in Srinagar with winter (December to February). These results were similar to results from our previous study, displaying discrete periods of peak activity [6,7]. We also observed an inter-seasonal increase in influenza A (H1N1) pdm09 virus detection across all centers between January and July 2017. Other countries in the region, such as Thailand, Sri Lanka and other South-east Asian countries also displayed a relatively heightened influenza A(H1N1)pdm09 activity throughout 2017 [24]. In contrast to developed countries with temperate climates, influenza epidemiological studies are limited in LMIC. Furthermore, data on non-influenza respiratory infections are also limited. Most of the studies in LMIC are focused on specific age groups (children or elderly population) or ILI/SARI cases. Hence, we tried to address the epidemiological and clinical characteristics of influenza and ORVs across all age groups in both ARI and SARI cases in this paper. We demonstrated that the ARI and modified WHO SARI case definition were useful for detecting influenza and other respiratory viruses. India like many countries leveraged the influenza sentinel surveillance system for COVID-19 pandemic which further underscored the importance for such robust surveillance systems with capacity to detect other respiratory viruses early as well [25]. The survey had several limitations. We have used a convenient sampling method for recruitment, and this could potentially contribute to overestimation of percent positivity due to increased sampling during peak period. 
Also, considering the vast size, diverse terrain and climates, and large rural populations of India, the data from the six cities may not be generalizable to whole country. Additional years of surveillance data from different geographical regions within the country will contribute to better understanding of epidemiology of ORVs. Although a substantial proportion of ARI and SARI is associated with bacterial pathogens, only respiratory viruses were tested for. Nevertheless, our study was aimed to leverage the existing influenza surveillance sites collecting upper respiratory specimens for ORV surveillance, and therefore was inappropriate for bacterial pathogens. We demonstrated the usefulness of influenza sentinel surveillance platform to collect comprehensive information on the viral etiology of SARI and ARI cases in a pre-COVID-19 period. Our findings suggest influenza sentinel surveillance systems may be leveraged for surveillance of RSV, SARS-CoV2 and other respiratory viruses. Supporting information S1
Hidden Nambu mechanics II: Quantum/semiclassical dynamics Nambu mechanics is a generalized Hamiltonian dynamics characterized by an extended phase space and multiple Hamiltonians. In a previous paper [Prog. Theor. Exp. Phys. 2013, 073A01 (2013)] we revealed that the Nambu mechanical structure is hidden in Hamiltonian dynamics, that is, the classical time evolution of variables including redundant degrees of freedom can be formulated as Nambu mechanics. In the present paper we show that the Nambu mechanical structure is also hidden in some quantum or semiclassical dynamics, that is, in some cases the quantum or semiclassical time evolution of expectation values of quantum mechanical operators, including composite operators, can be formulated as Nambu mechanics. We present a procedure to find hidden Nambu structures in quantum/semiclassical systems of one degree of freedom, and give two examples: the exact quantum dynamics of a harmonic oscillator, and semiclassical wave packet dynamics. Our formalism can be extended to many-degrees-of-freedom systems; however, there is a serious difficulty in this case due to interactions between degrees of freedom. To illustrate our formalism we present two sets of numerical results on semiclassical dynamics: from a one-dimensional metastable potential model and a simplified Henon--Heiles model of two interacting oscillators. Introduction In 1973, Nambu proposed a generalization of the classical Hamiltonian dynamics [1] that is nowadays referred to as the Nambu mechanics. In his formulation, the phase space spanned by the canonical doublet (q, p) is extended to that spanned by N (≥ 3) variables (x 1 , x 2 , ..., x N ), the Nambu N -plet, and the Hamilton equations of motion are generalized to the Nambu equations. In order for the Liouville theorem to hold in the N -dimensional extended phase space, the Nambu equations are defined by N − 1 Nambu Hamiltonians and the Nambu bracket, an N -ary generalization of the Poisson bracket. The structure of Nambu mechanics has impressed many authors, who have reported studies on its fundamental properties and possible applications, including quantization of the Nambu bracket [2][3][4][5][6][7][8][9][10][11][12]. However, the applications to date have been limited to particular systems, because Nambu systems generally require multiple conserved quantities as Hamiltonians and the Nambu bracket exhibits serious difficulties in systems with many degrees of freedom or quantization [1,2,11]. In 2013 we proposed a new approach to Nambu mechanics [13]. We revealed that the Nambu mechanical structure is hidden in a Hamiltonian system which has redundant degrees of freedom. For example, in a Hamiltonian system with a Hamiltonian H(q, p), if we take three variables as (x 1 , x 2 , x 3 ) = (q, p, q 2 ), their classical time evolution can be given by N = 3 Nambu equations with two Hamiltonians F (x 1 , x 2 , x 3 ) and G(x 1 , x 2 , x 3 ). Here x 3 = q 2 is a redundant degree of freedom in the original Hamiltonian system, and the Nambu Hamiltonians are given by the original Hamiltonian F (x 1 , x 2 , x 3 ) = H(q, p) and the constraint G(x 1 , x 2 , x 3 ) = x 3 − x 2 1 = 0, which is induced due to the consistency between the three variables. We derived the consistency condition to determine the induced constraints. In the present paper we show that the Nambu mechanical structure is also hidden in some quantum or semiclassical systems. The key idea is as follows. 
In our previous work, the Nambu multiplet is given as a function of classical variables (q, p), and therefore the induced constraints are always trivial, i.e. set to zero [13]. However, if we take the Nambu multiplet as a set of expectation values of quantum mechanical operators including composite operators (q 2 ,p 2 , ...), the constraints become nontrivial because of quantum fluctuation. Furthermore, if these constraints are constants of motion, the time evolution of the Nambu multiplet could be given by the Nambu equations. For example, consider a classical system with a Hamiltonian H(q, p) and a corresponding quantum system with the HamiltonianĤ = H(q,p). If we take three variables as (x 1 , x 2 , x 3 ) = (q, p, q 2 ), the trivial constraint G = x 3 − x 2 1 = 0 is induced. Then, if we replace these variables with (x 1 , x 2 , x 3 ) = ( q , p , q 2 ), the same function G = x 3 − x 2 1 has a nonzero value in general because of quantum fluctuation. Furthermore, in the case of frozen Gaussian wave packet dynamics [14], which is the dynamics of a Gaussian wave packet with a fixed width σ, the function G = x 3 − x 2 1 = σ 2 is constant in time and therefore the quantum or semiclassical time evolution of the Nambu triplet can be given by the N =3 Nambu equations with Nambu Hamiltonians F and G. Here, F is equal to or approximately equal to the expectation value of the Hamiltonian operator F = Ĥ or F ≃ Ĥ . We present a general procedure to find the Nambu mechanical structure in quantum or semiclassical systems of one degree of freedom with some specific examples. It should be noted that our formulation is not a quantization of the Nambu bracket. We just propose a prescription to describe ordinary quantum or semiclassical dynamics in a classical Nambu mechanical manner. The Nambu mechanical structure is hidden not only in one-degree-of-freedom systems. It is straightforward to extend our formalism to many-degrees-of-freedom systems by extending the definition of the Nambu bracket. However, the resulting hidden Nambu mechanics becomes pathological because in many-degrees-of-freedom systems the Nambu bracket does not satisfy the fundamental identity, which is an important property of the Nambu bracket and corresponds to the Jacobi identity in Hamiltonian dynamics [2,11]. Without the Jacobi identity the canonical transformation of the canonical doublet cannot be properly defined, and the dynamics becomes anomalous [15]. Similarly, without the fundamental identity we cannot define the canonical transformation of Nambu multiplets including their consistent time evolution. The hidden Nambu mechanics in many-degrees-of-freedom systems is an example of dynamics without the canonical structure. The outline of this article is as follows. In Sect. 2 we briefly review our previous work on the hidden Nambu mechanics in classical Hamiltonian systems [13]. As preparation for the next section, we give a detailed description of some examples. In Sect. 3 we present a procedure to find the Nambu mechanical structure hidden in some quantum or semiclassical 2/18 dynamics. Two examples are given: the exact quantum dynamics of a harmonic oscillator and the semiclassical nonlinear dynamics of a frozen Gaussian wave packet. In Sect. 4 we give an extension of our formalism to many degrees of freedom. In Sect. 
5 we present two numerical results to illustrate our formalism: the semiclassical tunneling dynamics in a onedimensional metastable system and the semiclassical energy exchange dynamics between two coupled oscillators in a simplified Henon-Heiles model. In the last section we give our conclusions and discuss the direction of future work. Hidden Nambu mechanics We begin with a brief review of Hamiltonian dynamics, Nambu mechanics [1], and hidden Nambu mechanics [13]. We describe two examples in detail to prepare for the next section. In this and the next section we consider only one-degree-of-freedom systems. Hamiltonian dynamics Hamiltonian dynamics is the classical dynamics of the canonical doublet (q(t), p(t)), which is given by a Hamiltonian H = H(q, p) and the Poisson bracket defined by the two-dimensional Jacobian, where A = A(q, p) and B = B(q, p) are any functions of the canonical doublet. The Poisson bracket should satisfy the Jacobi identity, where A 1 = A 1 (q, p), A 2 = A 2 (q, p), and B = B(q, p) are any functions. In terms of the Poisson bracket, the Hamilton equation of motion for any function f = f (p, q) can be written as The time evolution according to this equation preserves the phase space volume because of the divergenceless property, This is the Liouville theorem in Hamiltonian dynamics. Nambu mechanics Nambu mechanics is a generalized Hamiltonian dynamics of N (≥ 3) variables (x 1 , x 2 , ..., x N ) [1]. In Nambu mechanics the canonical doublet is generalized to the Nambu N -plet, and the Poisson bracket, Eq. (1), is generalized to the Nambu bracket defined by means of the 3/18 N -dimensional Jacobian, where A a = A a (x 1 , x 2 , ..., x N ) (a = 1, ..., N ) are any functions of the Nambu multiplet and ε i1i2···iN is the N -dimensional Levi-Civita symbol, the antisymmetric tensor with ε 12···N = 1. The Nambu bracket should satisfy the following fundamental identity [2], an N -ary generalization of the Jacobi identity in Eq. (2): .., N − 1) are any functions. In terms of the Nambu bracket, the Nambu equation for any function f = f (x 1 , x 2 , ..., x N ) can be written as where .., N − 1) are Nambu Hamiltonians. The time evolution according to this equation preserves the N -dimensional phase space volume because of the divergenceless property, Therefore the Liouville theorem also holds in Nambu mechanics. Hidden Nambu mechanics Consider a Hamiltonian system of a canonical doublet (q, p) with a Hamiltonian H = H(q, p). The key idea of hidden Nambu mechanics is to describe this system by means of N (≥ 3) variables x i = x i (q, p) (i = 1, ..., N ). We assume that at least N − 1 of {x i , x j } PB do not vanish, so that the time evolution of any functionf (x 1 , ..., x N ) = f (q, p) can be written via Hamilton equation of motion in Eq. (3), where F (x 1 , ..., x N ) = H(q, p). We introduce the functions G c = G c (x 1 , ..., x N ) (c = 1, ..., N − 2) which satisfy the consistency conditions Then, Eq. (9) can be rewritten as the Nambu equation in the form of Eq. (7), 4/18 where we have used the following formula concerning Jacobians: The functions G c are constants in motion and can be set to zero by redefining G c . This is a natural choice because the functions G c work as constraints for the Nambu multiplet (x 1 , x 2 , ..., x N ). We refer to G c = 0 as induced constraints, because they are induced by enlarging the phase space from (q, p) to (x 1 , x 2 , ..., x N ). 
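The construction just described can be checked symbolically. For the triplet (x_1, x_2, x_3) = (q, p, q^2) used in the Introduction, with F equal to the original Hamiltonian and the induced constraint G = x_3 - x_1^2, the N = 3 Nambu bracket (a 3 x 3 Jacobian determinant) must reproduce the Poisson-bracket time evolution. The SymPy snippet below verifies this for each member of the triplet and a generic potential V(q); it is my own check of the construction described in the text, not code from the paper.

```python
# Check that the N = 3 Nambu equations with F = H and G = x3 - x1^2 reproduce
# Hamilton's equations for the triplet (x1, x2, x3) = (q, p, q^2), for a generic V(q).
import sympy as sp

q, p, m = sp.symbols("q p m", positive=True)
x1, x2, x3 = sp.symbols("x1 x2 x3")

V = sp.Function("V")
H = p**2 / (2 * m) + V(q)                  # original Hamiltonian H(q, p)
F = x2**2 / (2 * m) + V(x1)                # Nambu Hamiltonian F(x1, x2, x3) = H(q, p)
G = x3 - x1**2                             # induced constraint, G = 0 classically

triplet = {x1: q, x2: p, x3: q**2}

def poisson(a, b):                         # classical Poisson bracket in (q, p)
    return sp.diff(a, q) * sp.diff(b, p) - sp.diff(a, p) * sp.diff(b, q)

def nambu(a, b, c):                        # N = 3 Nambu bracket: 3x3 Jacobian determinant
    return sp.Matrix([[sp.diff(h, v) for v in (x1, x2, x3)] for h in (a, b, c)]).det()

for xi in (x1, x2, x3):
    hamiltonian_side = poisson(xi.subs(triplet), H)       # dx_i/dt from Hamilton's equations
    nambu_side = nambu(xi, F, G).subs(triplet)            # dx_i/dt from the Nambu equations
    print(xi, sp.simplify(hamiltonian_side - nambu_side)) # -> 0 for each component
```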
Examples Here we present detailed descriptions of two simple examples to show how induced constraints are obtained for given multiplets. We adopt the same choice of N -plets in the next section. Finally we comment on the functional forms of the Nambu Hamiltonians. (a) N = 3: classical harmonic oscillator Consider three composite variables of the canonical doublet, which satisfy the following relations: Then, the conditions in Eq. (10) become and G is solved as G = 2x 2 3 − 2x 1 x 2 + C with a constant C. Redefining G to eliminate the constant, we obtain the induced constraint As an example of the dynamics of the Nambu triplet in Eq. (13), consider a one-dimensional harmonic oscillator whose Hamiltonian is given by The Hamilton equations of motion for the triplet are as follows: Let us derive these equations from the N = 3 Nambu equations with two Nambu Hamiltonians (F, G). One of the Hamiltonians, F , is equal to the original Hamiltonian H(q, p), Eq. 5/18 (17), and the other Hamiltonian, G, is given by the induced constraint, Eq. (16). The N = 3 Nambu equations are and each equation is given by These equations are equivalent to the Hamilton equations of motion in Eq. (18). (b) N = 4: classical nonlinear systems Consider four variables, two of them being composites, which satisfy the following relations: Then, the conditions in Eq. (10) become and G 1 and G 2 are given by where C 1 and C 2 are constants. By redefining G 1 and G 2 , we obtain the induced constraints As an example of the dynamics of the Nambu quartet in Eq. (22), consider a onedimensional nonlinear system whose Hamiltonian is given by where V (q) is an anharmonic potential. The Hamilton equations of motion for the quartet are written as follows: Let us derive these equations from the N = 4 Nambu equations with three Nambu Hamiltonians (F, G 1 , G 2 ). One of the Hamiltonians, F , is equal to the original Hamiltonian whereṼ (x 1 , x 3 ) = V (q). The other two Hamiltonians, G 1 and G 2 , are given by the induced constraints in Eqs. (25) and (26). The N = 4 Nambu equations are and each equation is given by These equations are equivalent to the Hamilton equations of motion in Eq. (28). Some comments Here we make some comments on the functional forms of Nambu Hamiltonians. In some cases, the functional form of (G 1 , ..., G N −2 ) cannot be determined uniquely. For example, for (x 1 , x 2 , x 3 , x 4 ) = (q, p, q 2 , q 3 ), one of the Poisson brackets in the consistency condition in Eq. (10) is given by 3 1 , respectively. Although we can choose either expression in classical mechanics, we must choose the latter expression, {x 2 , x 4 } PB = −3x 3 , in quantum or semiclassical mechanics. As shown in the next section, we must express the Poisson brackets in the consistency condition of Eq. (10) using variables of the highest order possible. Also, in some cases we cannot uniquely determine the functional form of the Hamiltonian F in classical mechanics. In the next section we present a prescription to determine the functional form of F in quantum or semiclassical systems. 3. Hidden Nambu mechanics in quantum/semiclassical systems 3.1. Quantum/semiclassical dynamics Consider a quantum system of a doublet (q,p) with a Hamiltonian operator The dynamics of a quantum operator = A(q,p) is given by the Heisenberg equation, In this work we focus on the dynamics of the expectation value of the quantum operator,  (t) = ψ|Â(t)|ψ , where |ψ is a quantum state. 
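As a numerical aside on the classical harmonic-oscillator example above: reading the triplet of composite variables as (x1, x2, x3) = (q², p², qp) — a choice consistent with the induced constraint G = 2x3² − 2x1x2 quoted there, though the explicit relations are not reproduced here — the N = 3 Nambu equations with F = x2/2m + mω²x1/2 can be integrated and compared against the ordinary Hamilton equations mapped onto the same triplet. This is an illustrative reconstruction, not the authors' code.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, w = 1.0, 1.0                        # mass and frequency (illustrative units)
q0, p0 = 1.0, 0.5                      # initial canonical doublet

def nambu_rhs(t, x):
    """dx_i/dt = {x_i, F, G} = (grad F x grad G)_i for the triplet (q^2, p^2, qp)."""
    x1, x2, x3 = x
    dF = np.array([m*w**2/2, 1/(2*m), 0.0])        # F = x2/2m + m w^2 x1/2
    dG = np.array([-2*x2, -2*x1, 4*x3])            # G = 2 x3^2 - 2 x1 x2
    return np.cross(dF, dG)

t = np.linspace(0, 10, 501)
nambu = solve_ivp(nambu_rhs, (0, 10), [q0**2, p0**2, q0*p0],
                  t_eval=t, rtol=1e-10, atol=1e-12).y

# Reference: Hamilton's equations for (q, p), mapped onto the same triplet afterwards
ham = solve_ivp(lambda t, y: [y[1]/m, -m*w**2*y[0]], (0, 10), [q0, p0],
                t_eval=t, rtol=1e-10, atol=1e-12).y
ref = np.vstack([ham[0]**2, ham[1]**2, ham[0]*ham[1]])

print(np.max(np.abs(nambu - ref)))     # agreement to integrator tolerance
```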
The time evolution of  (t) is given by 7/18 taking the expectation value of both sides of the Heisenberg equation, This equation gives the exact quantum dynamics, and we can consider several approximated dynamics. The lowest-order approximation is simply the classical Hamiltonian dynamics, A systematic approximation scheme to derive higher-order semiclassical dynamics, the quantized Hamiltonian dynamics [16,17], has been developed. In quantum or semiclassical systems there might exist conserved quantities other than Ĥ = H(q,p) , the expectation value of the original Hamiltonian. Moreover, in some cases such conserved quantities might be identified as the constraints G c in the hidden Nambu mechanics. This means that the Nambu structure could be hidden in quantum or semiclassical systems with nontrivial constraints G c = 0, which are trivial (G c = 0) in classical systems. How to find the hidden Nambu structure The procedure to find the Nambu structure hidden in quantum or semiclassical systems is as follows. Step (1) Step (2): Consider a quantum system of a doublet (q,p) with a HamiltonianĤ = H(q,p), Eq. (32), which corresponds to the classical Hamiltonian H(q, p). Replace the Nambu N -plet with the corresponding expectation values of quantum operators. For example, Step (3): Determine the functional form of F (x 1 , x 2 , ..., x N ) by representing Ĥ as a function of the Nambu N -plet. If Ĥ includes an expectation value Ô which is not a member of the Nambu N -plet, we reduce Ô to a function of the Nambu N -plet by means of the zero-cumulant approximation that ignores the cumulant, For example, for (x 1 , x 2 , x 3 ) = ( q , p , q 2 ), if Ĥ includes q 4 , it is approximated as q 4 ≃ 3 q 2 2 − 2 q 4 = 3x 2 3 − 2x 4 1 by means of q 4 c ≃ 0 followed by q 3 c ≃ 0. Step (4): The other Nambu Hamiltonians G c (c = 1, ..., N − 2) are given by the same functional forms as the trivial constraints. They are in general nontrivial, G c = 0, because of quantum fluctuation. 8/18 Step (5): If the quantities (F, G 1 , ..., G N −2 ) are all conserved in quantum or some semiclassical dynamics, the dynamics of the Nambu N -plet can be cast into the Nambu form in Eq. (11). The zero-cumulant approximation is similar to the approximation adopted in the quantized Hamiltonian dynamics [16]. However, it is not the only approximation for the Hamiltonian F in the hidden Nambu mechanics. It is also possible to consider an approximation that ignores the quantum fluctuation, for example (q − q ) n ≃ 0. As for the example shown in Step (3), this approximation leads to q 4 ≃ 6 q 2 q 2 − 5 q 4 = 6x 3 x 2 1 − 5x 4 1 . The resulting Nambu equations for quantum/semiclassical systems are the same as the ones for classical systems, and quantum properties are introduced through nonzero constraints G c = 0 (c = 1, ..., N − 2). This might imply that the replacement where C c are nonzero constants, could be regarded as a kind of "quantization" scheme for the Nambu mechanics. However, as opposed to various attempts to quantize the Nambu mechanics proposed so far [3][4][5][6]11], this replacement only gives a scheme to quantize the hidden Nambu mechanics. Furthermore, this replacement is incomplete even as a quantization scheme for the hidden Nambu mechanics, because the constants C c in general depend on the models and the initial conditions. Therefore the procedure presented here is not for quantizing the Nambu mechanics, but just for finding the hidden Nambu structures in quantum/semiclassical systems. 
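The zero-cumulant reduction used in Step (3), ⟨q⁴⟩ ≈ 3⟨q²⟩² − 2⟨q⟩⁴, is easy to probe numerically: it is exact for a Gaussian state and only approximate otherwise. The sketch below (grid, packet centers and the superposition state are illustrative choices, not taken from the paper) makes this explicit.

```python
import numpy as np

q = np.linspace(-30, 30, 60001)
dq = q[1] - q[0]

def q_moments(psi):
    rho = np.abs(psi)**2
    rho /= np.sum(rho) * dq                       # normalize the probability density
    mom = lambda n: np.sum(rho * q**n) * dq
    return mom(1), mom(2), mom(4)

gaussian = np.exp(-(q - 0.7)**2 / 2)                               # Gaussian packet
superpos = np.exp(-(q - 0.7)**2 / 2) + np.exp(-(q + 2.5)**2 / 2)   # non-Gaussian state

for label, psi in [("gaussian", gaussian), ("non-gaussian", superpos)]:
    m1, m2, m4 = q_moments(psi)
    print(f"{label:12s} <q^4> = {m4:9.4f}   3<q^2>^2 - 2<q>^4 = {3*m2**2 - 2*m1**4:9.4f}")
# Exact agreement for the Gaussian state, a visible error for the superposition: this is why
# the frozen Gaussian dynamics discussed below closes without further approximation.
```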
The resulting hidden Nambu mechanics is a volume-preserving dynamics of the expectation values of the quantum operators. If the Nambu multiplet includes the variables ( q , p ), the Nambu mechanics can be regarded as a kind of quantized Hamiltonian dynamics [16]. In some cases, such Nambu mechanics can be reduced to the effective Hamiltonian dynamics by explicitly solving the constraints G c = 0. In the next subsection we will see an example of such reduction. Examples Here we present two examples; one is an example of exact quantum dynamics and the other is semiclassical. In the latter example, the resulting Nambu mechanics can be reduced to the Hamiltonian dynamics with the effective Hamiltonian. They correspond to the examples shown in Sect. 2.4, and therefore we will show the procedure after Step (2). (a) N = 3: quantum harmonic oscillator Consider three expectation values which correspond to the Nambu triplet in Eq. (13), The quantum Hamiltonian of a one-dimensional harmonic oscillator is given bŷ Then, one of the Nambu Hamiltonians, F , can be obtained without any approximation, The other Nambu Hamiltonian, G, is given by the same functional form as Eq. (16), which is nonzero in general due to quantum fluctuation. We can see that both F and G are conserved in the exact quantum dynamics, The Nambu equations of Eq. (21) are equivalent to these exact equations; that is, the Nambu structure is hidden in the exact quantum dynamics of a harmonic oscillator. (b) N = 4: semiclassical nonlinear systems Consider four expectation values which correspond to the Nambu quartet in Eq. (22), The quantum Hamiltonian of a one-dimensional nonlinear system is given byĤ = H(q,p), Eq. (32), with an anharmonic potentialV = V (q). The Nambu Hamiltonian F can be obtained as an approximation of Ĥ , where the functional form of the reduced potentialṼ (x 1 , x 3 ) is uniquely determined by means of the zero-cumulant approximation, Eq. (36). The other Nambu Hamiltonians are given by the same functional forms as Eqs. (25) and (26), G 1 = x 3 − x 2 1 and G 2 = x 4 − x 2 2 , which are nonzero in general due to quantum fluctuation. We can see that all of F , G 1 , and G 2 are conserved in the following approximated dynamics: wheref is determined by the zero-cumulant approximation in Eq. (36) if necessary. This is a semiclassical dynamics which corresponds to the lowest order of the quantized Hamiltonian dynamics [17]. We can also see that the N = 4 Nambu equations of Eq. (31) are equivalent to these semiclassical equations; that is, the Nambu structure is hidden in the semiclassical nonlinear dynamics. This semiclassical dynamics can be regarded as the frozen Gaussian wave packet dynamics [14], where the quantum wave function is approximated by a Gaussian wave packet with a constant width σ, Here q c is the center of the wave packet and p c is that in momentum space. The time evolution of the variables (q c , p c ) can be determined by means of the time-dependent variational 10/18 principle [18], by taking the frozen Gaussian wave function of Eq. (46) as a trial function. The resulting variational equations are semiclassical equations which have the same forms as the Hamilton equations of motion in Eq. (3), Here H c (q c , p c ) = ψ FG |Ĥ|ψ FG is the effective Hamiltonian modified by the quantum correction. Evaluating the expectation values in Eq. (43) by means of the state |ψ FG (t) , we obtain . Then, G 1 and G 2 are given by Using these nontrivial constraints, we can show that the Nambu Hamiltonian F in Eq. 
(44) is equivalent to H c (q c , p c ). This is because the zero-cumulant approximation for the Nambu quartet in Eq. (43) is exact in the case of the frozen Gaussian wave packet. Using Eqs. (49) and (50), we can also show that the Nambu equations of Eq. (31) are reduced to the variational equations of Eq. (48). That is, the Nambu structure is hidden in the semiclassical dynamics of the frozen Gaussian wave packet. In Sect. 5.1 we present a numerical demonstration of the semiclassical tunneling dynamics in a metastable system. Many-degrees-of-freedom extension It is straightforward to extend our formalism to many-degrees-of-freedom systems. However, the resulting classical or quantum/semiclassical hidden Nambu mechanics becomes pathological, because the Nambu bracket itself has a serious problem in interacting systems [1,2,11]. Difficulties in the Nambu bracket where A and B are any functions of the 2n variables. Since the dynamics is divergenceless, the Liouville theorem holds. The Poisson bracket in Eq. (51) satisfies the Jacobi identity of Eq. (2) and therefore we can define canonical transformations of the 2n variables in a consistent manner. 11/18 On the other hand, the Nambu mechanics has a problem in the many-degrees-of-freedom extension. Consider a system of n Nambu N -plets (x where A a (a = 1, ..., N ) are any functions of the N × n variables. Because of the divergenceless property, the Liouville theorem holds. The Nambu bracket of Eq. (53), however, fails to satisfy the fundamental identity in Eq. (6) if the N -plets interact with each other [2,11]. Therefore we cannot define consistent canonical transformations of the N × n variables in general [1]. Hidden Nambu mechanics in many-degrees-of-freedom systems Although the Nambu bracket has a serious problem in many-degrees-of-freedom systems, it is still possible to extend our hidden Nambu formalism to such systems. The resulting hidden Nambu mechanics is the Nambu mechanics without the fundamental identity. We start from a Hamiltonian system of n canonical doublets with a Hamiltonian H = H(q (1) , p (1) , ..., q (n) , p (n) ). Then we introduce N variables x i (q (α) , p (α) ) (i = 1, ..., N ) for each α. We assume that at least N − 1 of ∂(x N ) = f (q (1) , p (1) , ..., q (n) , p (n) ) can be written via the Hamilton equation of motion, where F (x N ) (c = 1, ..., N − 2) that satisfy the consistency conditions for each α, then Eq. (55) can be rewritten in the same form as Eq. (7), where the Hamiltonians G c are defined as the sum of each G The Nambu mechanics is hidden in classical many-degrees-of-freedom systems. The Liouville theorem holds in the hidden mechanics, though the fundamental identity does not hold. Such hidden Nambu mechanics is an example of dynamics without the canonical structure [15]. Taking the same procedure presented in Sect. 3.2, we can find the Nambu structure in quantum/semiclassical many-degrees-of-freedom systems: Step (1): Start from the classical hidden Nambu mechanics shown above. Step (2): Replace the Nambu N -plets with the corresponding expectation values of quantum operators. For example, Step (3): Determine the functional form of F . Use the zero-cumulant approximation if necessary. In Sect. 5.2 we present a numerical demonstration of the semiclassical dynamics of a twobody system. 
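Before the numerical demonstrations, the nontrivial constraints of the frozen Gaussian quartet can be verified directly. For the standard parametrization ψ ∝ exp[−(q − qc)²/4σ² + i pc q/ℏ] the position and momentum variances are σ² and ℏ²/(4σ²); since Eqs. (49)-(50) are not reproduced above, these closed forms are quoted here as the usual minimum-uncertainty result rather than copied from the paper. A quadrature check, with illustrative parameter values:

```python
import numpy as np

hbar, sigma = 1.0, 0.6                   # illustrative width
qc, pc = 0.4, 1.3                        # packet center in phase space
q = np.linspace(-30, 30, 120001)
dq = q[1] - q[0]
psi = (2*np.pi*sigma**2)**-0.25 * np.exp(-(q - qc)**2/(4*sigma**2) + 1j*pc*q/hbar)

def ev(arr):                             # expectation value <psi| . |psi> by quadrature
    return np.real(np.sum(np.conj(psi) * arr) * dq)

dpsi  = np.gradient(psi, dq)
d2psi = np.gradient(dpsi, dq)
x1, x2 = ev(q*psi), ev(-1j*hbar*dpsi)            # <q>, <p>
x3, x4 = ev(q**2*psi), ev(-hbar**2*d2psi)        # <q^2>, <p^2>

print("G1 =", x3 - x1**2, " vs sigma^2        =", sigma**2)
print("G2 =", x4 - x2**2, " vs hbar^2/(4 s^2) =", hbar**2/(4*sigma**2))
```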
Numerical results for semiclassical dynamics Here we give two numerical results for N = 4 hidden Nambu mechanics equivalent to the semiclassical frozen Gaussian wave packet dynamics in one-and two-degrees-of-freedom systems. We choose the same systems as used in the applications of the quantized Hamiltonian dynamics [16]. We compare the results with the corresponding quantum and classical results. In both systems, the time developments in the Nambu and classical mechanics are numerically evaluated by using the fourth-order Runge-Kutta integrator, 1 while the propagation of the quantum wave function is numerically evaluated by a split-operator method, which is a hybrid of Cayley's form and the Suzuki-Trotter decomposition [20]. As the initial wave function for the quantum dynamics, we take the Gaussian wave packet ψ FG (q, 0), Eq. (46). The initial conditions for the Nambu mechanics are given by the expectation values of the quantum operators with respect to that initial state, and those for the classical mechanics are given by the center of the initial wave packet (q c (0), p c (0)). We choose the width of the initial wave packet as σ = /(2mω), for which the frozen Gaussian wave packet dynamics becomes exact for a harmonic oscillator. Metastable cubic potential The first model is a quantum system which exhibits tunneling. Consider a one-dimensional metastable system whose quantum Hamiltonian is given by 2 The corresponding classical Hamiltonian is H = (1/2m)p 2 + V (q), where V (q) is the classical potential, V (q) = (mω 2 /2)q 2 + (g/3)q 3 . We choose N = 4 Nambu variables, as in Eq. (43), and the Nambu Hamiltonian F is then determined by the zero-cumulant approximation in Eq. (36), For the frozen Gaussian wave packet dynamics, the Nambu Hamiltonians F and (G 1 , G 2 ), Eqs. (49) and (50), are conserved in the time evolution according to the semiclassical equations of Eq. (45), which are equivalent to the N = 4 Nambu equations of Eq. (31). The initial conditions for the Nambu mechanics are given as follows: The classical potential V (q) is plotted in Fig. 1a, where we set the parameters ω = 1 and g = 0.3 with the units = m = 1. These parameters are the same as in Ref. [16]. The initial wave function given by |ψ(q, 0)| 2 = |ψ FG (q, 0)| 2 is also shown in Fig. 1a, where we choose the initial conditions as (q c (0), p c (0)) = (0, 1.8). The initial wave packet is located at the local minimum of the classical potential V (q) and moves to the right. The calculated trajectories of the quantum, Nambu, and classical mechanics are shown in Fig. 1b. The quantum mechanical expectation value q(t) moves to the right, bounces off the wall, and moves to the left through the potential barrier, the top of which is located at q = −3.3. This is an instance of quantum mechanical tunneling because the classical variable q(t) fails to go through the potential barrier and oscillates around the local minimum of V (q). On the other hand, the Nambu variable x 1 (t) can reproduce the quantum mechanical tunneling, although it deviates from the quantum result as time increases. This semiclassical behavior of the Nambu variable can be understood as follows. The N = 4 Nambu mechanics discussed here is equivalent to the variational dynamics of (q c , p c ), whose time evolution is given by Eq. (48). The effective Hamiltonian is H c (q c , p c ) = (1/2m)p 2 c + V c (q c ), where V c (q c ) 2 This metastable system has also been used in the applications of the symplectic semiclassical wave packet dynamics [21]. 
14/18 is the effective potential shown in Fig. 1a, The last two terms are proportional to the Planck constant and generated by the quantum correction. As shown in Fig. 1a, these terms lower the height of the potential barrier, and there exists a region of initial values (q c (0), p c (0)) where the Nambu mechanics can tunnel but the classical mechanics cannot. The initial conditions adopted here, (q c (0), p c (0)) = (0, 1.8), are in such a region. Simplified Henon-Heiles model The second model is a quantum system which exhibits nonlinear energy exchange dynamics between coupled oscillators. Consider a one-dimensional quantum system of two oscillators whose Hamiltonian is given bŷ 6) is zero, whereas the right-hand side is −λ. That is, the interaction between two oscillators violates the fundamental identity and the canonical structure is broken in the hidden Nambu mechanics. However, by explicitly solving the constraints in Eqs. (68) and (69), this Nambu mechanics can be reduced to the effective Hamiltonian dynamics where the time evolution of the canonical doublets can be properly defined. Therefore the dynamics of two oscillators considered here is anomalous as the Nambu mechanics, but not anomalous as the Hamiltonian dynamics. Conclusions and future work We have shown that the Nambu mechanical structure is hidden not only in classical Hamiltonian dynamics but also in some quantum or semiclassical dynamics. We focused on the dynamics defined in an extended phase space spanned by N (≥ 3) quantum mechanical expectation values. 3 The dynamics of variables such as ( q , p , q 2 , p 2 ) cannot be described by the Hamilton equations of motion; however, if the system has a sufficient number of conserved quantities, (F, G 1 , G 2 ), their dynamics could be described by the N = 4 Nambu equations. We gave some quantum/semiclassical examples of hidden Nambu mechanics, including a many-degrees-of-freedom system. It would be interesting to investigate other examples. In many-degrees-of-freedom systems, however, the hidden Nambu mechanics become anomalous, because interactions between multiple degrees of freedom violate the fundamental identity of Eq. (6) [2,11]. Since the fundamental identity would play a similar role to the Jacobi identity in the Hamiltonian dynamics, its violation implies that it would be difficult to formulate the Nambu statistical mechanics or quantize the Nambu mechanics. On the other hand, in Hamiltonian dynamics there also exists anomalous dynamics known 3 Our formalism could also be applied to statistical-mechanical expectation values. 17/18 as nonholonomic dynamics [23], where the Jacobi identity is violated and the Hamiltonian structure is broken [15]. Recently, a procedure has been proposed to recover the Hamiltonian structure and formulate a statistical theory of the nonholonomic dynamics [24]. This work might provide guidance for formulating a statistical theory of Nambu mechanics, and our formalism presented in this article might provide example systems suitable for Nambu statistical mechanics to be tested.
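A rough numerical reconstruction of the Sect. 5.1 tunneling demonstration is sketched below. It integrates the frozen-Gaussian (variational) equations with the quoted parameters (ω = 1, g = 0.3, ℏ = m = 1, σ² = ℏ/2mω, initial (qc, pc) = (0, 1.8)) and with the effective potential taken as Vc(qc) = V(qc) + (σ²/2)V″(qc), the Gaussian average of a cubic potential with constant zero-point terms dropped; this form is inferred rather than copied from the paper's expression for Vc. The classical trajectory stays trapped while the quantum-corrected one crosses the barrier, in line with the discussion of Fig. 1b.

```python
import numpy as np
from scipy.integrate import solve_ivp

hbar = m = w = 1.0
g = 0.3
sigma2 = hbar/(2*m*w)                              # frozen width: sigma^2 = 0.5
barrier_top = -m*w**2/g                            # bare barrier top near q = -3.33

force_cl = lambda q: -(m*w**2*q + g*q**2)              # -dV/dq, classical force
force_fg = lambda q: -(m*w**2*q + g*q**2 + sigma2*g)   # force from Vc = V + (sigma^2/2) V''

def rhs(t, y, force):
    return [y[1]/m, force(y[0])]

def crossed(t, y, force):                          # stop once the center passes the barrier top
    return y[0] - barrier_top
crossed.terminal, crossed.direction = True, -1

y0 = [0.0, 1.8]                                    # initial (qc, pc) from the text
semi = solve_ivp(rhs, (0, 40), y0, args=(force_fg,), events=crossed, rtol=1e-9, max_step=0.02)
clas = solve_ivp(rhs, (0, 40), y0, args=(force_cl,), events=crossed, rtol=1e-9, max_step=0.02)

print("frozen-Gaussian trajectory crosses the barrier:", semi.t_events[0].size > 0)  # True
print("classical trajectory crosses the barrier:      ", clas.t_events[0].size > 0)  # False
```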
2019-08-09T05:46:25.000Z
2019-08-09T00:00:00.000
{ "year": 2019, "sha1": "0b515f29f4572e40b27ddd99c51301b024edf57d", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/ptep/article-pdf/2019/12/123A02/32693898/ptz144.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "11b45f14caa305d9961b52b4f7d0376a215de7ad", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
270552939
pes2o/s2orc
v3-fos-license
Mixed-Dimensional Assembly Strategy to Construct Reduced Graphene Oxide/Carbon Foams Heterostructures for Microwave Absorption, Anti-Corrosion and Thermal Insulation Highlights Reduced graphene oxide/carbon foams (RGO/CFs) vdWs heterostructures are efficiently fabricated via a simple mixed-dimensional assembly strategy. Linkage effect of optimized impedance matching and enhanced dielectric loss abilities endows the excellent microwave absorption performances of RGO/CFs vdWs heterostructures. Multiple functions such as good corrosion resistance performances and outstanding thermal insulation capabilities can be integrated into RGO/CFs vdWs heterostructures. Supplementary Information The online version contains supplementary material available at 10.1007/s40820-024-01447-9. Introduction As the continuous and rapid progress of electronic communication technology, the popular intelligent electronic equipment brings convenience to people's life.Meanwhile, it also hides serious electromagnetic (EM) pollution and threatens people's health [1][2][3].Consequently, the focus on designing outstanding materials and structures to effectively improve EM wave (EMW) absorption performances has increasingly intensified.According to the actual application requirements, the desired EMW absorption materials are appraised not only by the characteristics of "strong," "broad," "thin" and "lightweight," but also by their high environmental adaptability such as good anti-corrosion and superior thermal stability [4].Accordingly, biomassderived [5] or chemically synthesized carbon-based materials from zero dimension (0D) to three dimension (3D) such as carbon nanocages/microspheres [6,7], carbon nanofibers (CNFs) [8], graphene (G) [9], and carbon aerogels [10] are deemed as the extremely attractive candidate substances for developing perfect EMW absorption materials relying with their extraordinary electrical conductivity, light quality, high physical/chemical stability, and so on [11,12].Unfortunately, the poor impedance matching characteristic and attenuation mechanism greatly hinder the improvement of EMW absorption performances [13].In order to effectively solve these problems, different methods and strategies have been proposed.For examples, a new nano-micro engineering was presented by Cao's team, which could modulate the inner porous structure of NiCo 2 O 4 nanofibers and further effectively regulated the EMW absorption performances by boosting its charge transport capacity.More importantly, this simple strategy for constructing diverse microstructures could be extended to other EM functional materials [14].Che and co-workers reported a pioneering galvanic engineering for constructing core@shell structure nanohybrids to exploit efficient EMW absorbers.Wondrously, the diversity of heterogeneous nanoparticle shell composition composed of single-metal or bimetallic was controlled and quantitatively regulated through this general programmable strategy [15].Recently, Ji et al. employed phase engineering strategy to boost dielectric loss through regulating amorphous/crystalline heterophase of γ-Fe 2 O 3 nanosheets.Concluding from the results, compared with the pure amorphous and bare crystalline, the designed composites exhibited an effective absorption bandwidth (EAB), which was attributed to heterointerface provided by different phase structures [16].Similarly, Reza Peymanfar et al. 
and Zhang's team successfully promoted the EMW absorption performances of MgFe 2 O 4 -based materials and NbS 2 through manipulating the phase and morphology, respectively [17,18].Additionally, Wu's group proposed a vacancy engineering of Se-doped CoS 2 and S-doped CoSe 2 through an anion-doping.Benefitting from much superiority of improved electronic conductivity and numerous polarization centers caused by vacancy sulfur and selenium, the EMW absorption performances were successfully optimized [19].In general, the previously reported results revealed that EMW absorption performances were significantly boosted through the meticulous regulation of morphology and microstructures, phase and components, defect and interfacial effects. Mixed-dimensional heterostructures, especially van der Waals (vdWs) heterostructures, are undoubtedly desirable structures for constructing high-performance EMW absorption materials by virtue of large specific surface area, abundant interfaces, multi-dimensional components, and so on [20,21].For instance, Pan and his colleagues synthesized multi-dimensional heterostructures, which were composed of 3D carbon nanocoils, two-dimensional (2D) graphene, one-dimensional (1D) CNFs and 0D nanoparticles.According to the results, the impedance matching and EMW absorption characteristics could be regulated by modifying the growth parameters of CNFs and nanoparticles [22].Liu's group designed and constructed multi-dimensional hybridized structures of 3D N-doped carbon aerogels with attachment of 0D Ni/MnO nanoparticles.In consequence, compared with pure 3D N-doped carbon aerogels, in situ incorporation of 0D Ni/MnO particles greatly adjusted the absorption capacity and achieved a ultrawide absorption bandwidth [23].Recently, Wu et al. constructed 0D selenide nanoparticles@2D carbon nanosheets@1D CNFs mixed-dimensional composites for multi-functional applications.With respect to the extraordinary EMW absorption performances of composites, it was mainly ascribed to the synergistic effect combined with good conductive networks, abundant space gap and rich heterointerfaces [24].Besides the strong absorption 1 3 and wide bandwidth, perfect EMW absorbing materials with excellent stability and versatility to satisfy the everincreasing demands in the changeable practical environment will be a key research direction in the future.However, effectively incorporating the multiple functionalities including EMW absorption capability, heat protection, and resistant to corrosion into carbon materials still faces huge challenges so far. Considering the presented aspects, herein, 2D/3D reduced graphene oxide/carbon foams (RGO/CFs) vdWs heterostructures were meticulously engineered and synthesized via freeze-drying, immersing absorption and thermal treatment.The obtained results suggested that their unique structures and components induced the linkage effect of optimized impedance matching and enhanced dielectric loss abilities, leading to the significant EMW absorption, good anti-corrosion as well as thermal insulation performances of 2D/3D RGO/CFs vdWs heterostructures.Accordingly, our works not only demonstrated an efficient pathway to produce 2D/3D RGO/CFs vdWs heterostructures, but also provided a facile mixed-dimensional assembly strategy to develop multifunctional carbon materials for the great potential in complex and variable environments. 
Fabrication of 3D Cellular Chitosan/g-C 3 N 4 Foams (CGFs) In a typical experiment, the 3D CGFs were prepared through a simply equipped freeze-drying process.Initially, yellow g-C 3 N 4 powder as viscosity modifier was acquired by a thermal decomposition of urea.And g-C 3 N 4 powder (60 mg) was ultrasonically dispersed into deionized water (60 mL) for 30 min to prepare the g-C 3 N 4 aqueous dispersion.After that, chitosan powder (2.4 g) was completely dispersed into the above dispersion.Subsequently, glacial acetic acid (1.2 mL) was injected into the chitosan/g-C 3 N 4 aqueous dispersion under magnetic stirring to synthesize the yellow chitosan/g-C 3 N 4 hydrogel precursor.Then, each 13 g of chitosan/g-C 3 N 4 hydrogel was transferred into glass garden and placed at room temperature until the bubble disappeared.After frozen at ca. − 60 °C, the ice templates were removed after the vacuum freeze-drying treatment to obtain 3D cellular CGFs. Fabrication of 2D/3D GO/CGFs and RGO/CFs vdWs Heterostructures Firstly, few-layer GO could be synthesized using the previously reported route [25].Particularly, the yellow CGFs were placed in the oven at 80 °C for 48 h to further promote the cross-linking reaction.Afterward, deionized water was employed to clean the CGFs for removing the residual glacial acetic acid.The obtained wet CGFs should be squeezed as much as possible to remove excess deionized water.At the same time, GO aqueous dispersions with different concentrates (2, 4, and 6 mg mL −1 ) were obtained by ultrasonic dispersing different amounts of GO (40, 80, and 120 mg) in 20 mL deionized water for 30 min, respectively.Next, the extruded CGFs were separately immersed into different concentrations of GO dispersions under stirring until they were saturated, which were subsequently placed into a freeze-dryer to produce GO/CGFs vdWs heterostructures.For easy description, the obtained GO/CGFs with different contents of GO were named as G2/CGF, G4/CGF and G6/ CGF, respectively.Finally, the lyophilized GO/CGFs heterostructures were carbonized at 650 °C (model BTF-1200C, Anhui BEQ Equipment Technology Co, Ltd.) for 2 h in Ar to obtain the corresponding RGO/CFs vdWs heterostructures, which were denoted as R2/CF, R4/CF and R6/CF, respectively.For comparison, the 3D cellular CFs without 2D RGO nanosheets attachment were obtained through directly carbonization process of CGFs.Aiming at deeply exploring the influence of carbonization temperature, taking G2/CGF as a research object, the carbonization process was also carried out under 600 and 700 °C to produce the corresponding RGO/CFs heterostructures (named as R2/CF-600 and R2/ CF-700). 
Characterization For making sure the phases, morphology, elements mapping and compositions of samples, emission scanning electron microscopy (FE-SEM), energy dispersive spectrometer (EDS), Fourier transform infrared (FTIR) spectrum, Raman spectra, X-ray photoelectron spectrometer (XPS) and X-ray powder diffractometer (XRD) were successively carried out.To investigate EMW absorption properties, the obtained specimens (15, 20, and 25 wt%) were mixed with paraffin to compress into a series of toroidal shapes (3.0 mm inner diameter and 7.0 mm outer diameter).A vector network analyzer was used to measure their EM parameters using the coaxial-line method from 2.0 to 18.0 GHz.without any alteration of its external form, confirming the ultra-lightweight features of RGO/CFs heterostructures.Figure 1c shows the FIIR spectra of CGFs, G2/CGF, CFs, and R2/CF.The analysis of FIIR curves for CGFs and G2/ CGF reveals that the -OH peaks undergo an evident red shift from ca. 3430 to ca. 2900 cm −1 , which are primarily ascribed to the appearance of hydrogen bonds caused by the superfluous glacial acetic acid [27].On account of their similar oxygen-containing functional groups of GO and chitosan, CGFs and G2/CGF samples display the similar FIIR curves, showing the characteristic peaks of hydrophilic groups.Compared to CGFs and G2/CGF, the FIIR results for CFs and R2/CF samples reveal that these characteristic peaks of -COH, -COC and -CH 2 OH (within 1000-1200 cm −1 ) are diminished, and the -NH 2 and C-N peak intensities are significantly disappeared, which implies the reduction of hydrophilic groups in chitosan and GO during the pyrolysis process [23].Furthermore, the C=O peak is still pronounced from the obtained CFs and R2/CF, which is beneficial to induce polarizations for the attenuation of EMW.As provided in Fig. 1d, the obtained CFs and RGO/CFs show the broad peaks of graphitic carbon at 24° and 44°, respectively [28].Specially, the disappearance of diffraction peak corresponding to GO in XRD pattern suggests the successful reduction of GO, which is consistent with the analysis of FTIR.With reference to the previous reports, no obvious diffraction peak of g-C 3 N 4 (27°) appears, demonstrating complete decomposition of few g-C 3 N 4 after pyrolysis [29].To further investigate the surface chemistry of samples, the XPS measurement was conducted.The XPS survey spectrums of CFs and RGO/ CFs exhibit O 1s, N 1s, and C 1s characteristic peaks in Fig. 1e, providing strong evidence of N-doping.In the C 1s orbit of R2/CF (Fig. 1f), the spectrum is deconvoluted as a combination of three characteristic peaks: 288.0, 285.6, and 284.6 eV, which correspond to C=O, C-N, and C-C/ C=C [30].In marked in Fig. 1g, the N 1s XPS spectrum of R2/CF sample is presented to further confirm the bonding configuration of N, which indicate the presences of oxidized-N, graphitic-N, pyrrolic-N and pyridinic-N, respectively [31].Additionally, the comparison of the highresolution spectra C 1s and N 1s for R4/CF (Fig. S1a, b) and R6/CF (Fig. S1c, d) suggests the similar composite components.It is well known that pyrrolic-N and pyridinic-N as polarization centers and graphitic-N as conduction loss enhancer help to improve the dissipation of EMW [32]. Results and Discussion To further study their structures, the precursor CGFs and GO were characterized by FE-SEM and TEM in Fig. S2.Before thermal treatment, CGFs exhibit coarse skeleton and small pores (Fig. S2a, b) and GO shows typical 2D tulle-like nanosheets (Fig. 
S2c, d).After processing, the as-prepared CFs and RGO/CFs were also investigated by FE-SEM.From Fig. 2a1-a3, the obtained CFs sample manifests a uniform faveolate configuration with a comparatively smooth surface.The generation of the dense channels can be attributed to the formation of ice crystal and subsequent sublimation under the treatment of freeze-drying.Compared with CFs, the FE-SEM observations from Fig. 2b1-b3 demonstrate that the channels are filled with RGO nanosheets in large scale and 2D RGO nanosheets are firmly affixed to the 3D skeleton surface of CFs via the van der Waals forces, constructing a typical 2D/3D vdWs heterostructures and generating large quantities of solid-void interfaces.To further test this idea, FE-SEM images of R4-CF were gained and dipicted in Fig. 2c1-c3.The investigations reveal that the R4/CF exhibits much rougher frameworks and denser channels than the R2/CF sample due to the attachment of much more RGO.And the R4/CF sample is also the representative 2D/3D vdWs heterostructures, which consists of 2D RGO nanosheets and 3D CFs.With a further increase in the GO content, the SEM observations reveal that the channel structure of R6/CF becomes more blurred, and RGO nanosheets clearly stack into clusters and evidently accumulate on the surface of skeleton (Fig. 2d1-d3).To further determine the elemental distribution, EDS elemental mapping images of R2/CF are provided in Fig. 2e1-e4.The results illustrate that the elements of O, N, and C are evenly distributed throughout the R2/CF sample, which is consistent with the XPS analysis.Overall, the acquired outcomes demonstrate that RGO/CFs 2D/3D vdWs heterostructures can be fabricated simply and efficiently through our proposed route.By adjusting the initial concentration of GO, the RGO content and morphology of designed RGO/CFs can be effectively manipulated.More importantly, the obtained 2D/3D RGO/ CFs vdWs heterostructures build the good conductive networks and provide abundant interfaces of void-solid, which promote the multiple scattering, reflections and attenuation of EMW [33]. For the sake of confirming the aforementioned analyses, Fig. 3 offers the EM parameters and dielectric loss tangent ( tan E = �� � ) for obtained CFs, R2/CF, R4/CF and R6/CF with the packing ratios of 15, 20, and 25 wt%.Due to the 1 3 non-magnetic characterization of RGO and carbon [34], the ′ and ′′ values determine EMW absorption characteristics of designed absorbers, which are related to storage and dissipation capacity, respectively [35].Intuitively, as presented in Fig. 3a-d, all the samples exhibit the degraded ′ and ′′ values within the tested frequency range, which is in line with the frequency dispersion phenomenon of carbon materials [36].Specifically, the ′ and ′′ values (Fig. 3a) for 3D cellular CFs with a filling ratio of 15 wt% are relatively small, which decrease from 5.501 to 3.289, and 1.411 to 0.892, respectively.With the increasing of filler loading, the ′ and ′′ values of CFs with the filling ratios of 20 and 25 wt% range from 6.519 to 3.535 and 2.098-1.142,7.688-3.915and 2.907-1.676,respectively.Similar to the previous findings [37], the relatively low values of complex permittivity for 3D CFs sample should be ascribed to the insufficient filling amount, which makes it difficult to form a complete conductive network.The ′ and ′′ values (Fig. 
3b) of R2/CF with different filling ratios are as follows: 6.162-2.753 and 1.984-0.907, 9.385-4.695 and 4.019-2.370, 11.449-5.319 and 7.268-2.540. And the designed R2/CF sample presents much higher values of ε′ and ε″ than 3D CFs with the same filling ratio, demonstrating the improved ability to store and attenuate EMW energy. These situations are attributed to the attachment of 2D RGO nanosheets on 3D CFs, contributing to construct the mixed-dimensional vdWs heterostructures and form a dense conductive network, which accelerates electron migration and the hopping process [38,39]. To further verify the above deduction, Fig. 3c, d presents the values of complex permittivity for R4/CF and R6/CF samples with the different filling ratios. As speculated, ε′ and ε″ values of R6/CF are still significantly higher than those of R4/CF at the same filling ratios, which is mainly due to the gradual stacking of the RGO flakes together and the further reduction of the pore size, resulting in higher electrical conductivity. The visualized comparison of ε′ and ε″ values (as shown in Fig. 3e) of CFs, R2/CF, R4/CF and R6/CF samples further confirms the obtained analyses and SEM results, suggesting the effective modulation of EM parameters after incorporation of RGO. And their dielectric loss tangent values also indicate that the CFs and RGO/CFs present a steadily upward trend when the filling ratio increases from 15 to 25 wt% (as depicted in Fig. 3f), implying their improved dielectric loss capacities [40]. Furthermore, the comparative outcomes also manifest that the 2D/3D RGO/CFs vdWs heterostructures exhibit superior dielectric loss abilities compared to CFs. To study their EMW absorption performances, the reflection loss (RL) values were acquired on the basis of transmission line theory and the corresponding equations (Eq. S1) and (Eq. S2) in Supporting Information [41,42]. As illustrated in Fig. 4a-d, the 3D RL color maps reveal that the RL min values of CFs, R2/CF, R4/CF, and R6/CF samples with the filling ratio of 25 wt% are −29.22, −50.28, −27.81, and −20.07 dB. Their corresponding frequency locations and matching thicknesses (d m) values are 4.0 GHz and 7.85 mm, 12.8 GHz and 2.50 mm, 17.4 GHz and 1.73 mm, 17.2 GHz and 1.66 mm, respectively. Furthermore, the obtained CFs (Fig. S3), R2/CF (Fig. 4e), R4/CF (Fig. 4f) and R6/CF (Fig. 4g) samples display EAB values of 5.6, 6.2, 6.0, and 5.8 GHz, and the corresponding d m values are 2.72, 2.27, 2.04, and 1.92 mm, respectively. For comparison, Fig. S4 demonstrates the EM parameters and absorption performances of the pure GO; without doubt, the extremely low EM parameters result in very poor performance. Additionally, the comparison results (as presented in Fig. S6) between CFs and R2/CF based on quarter-wavelength matching theory were carried out. From that, all dots corresponding to the thickness-frequency pairs nearly locate on the simulated curve, indicating the good coincidence between theoretical and experimental outcomes [43]. Consequently, the designed R2/CF sample exhibits strong absorption capabilities, broad EAB as well as small matching thicknesses, making it a promising novel EMW absorber.
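The RL values quoted above follow from the standard metal-backed single-layer transmission-line model; the paper's Eqs. S1-S2 live in the Supporting Information and are not shown here, so the sketch below is a generic version of that calculation with placeholder permittivity values standing in for the measured data (it will not reproduce the reported numbers). The closing comment records the quarter-wavelength matching condition used in the Fig. S6 comparison.

```python
import numpy as np

c = 2.998e8                                        # speed of light (m/s)

def reflection_loss(eps_r, mu_r, f, d):
    """Metal-backed single layer: RL (dB) at frequency f (Hz) for coating thickness d (m)."""
    z_in = np.sqrt(mu_r/eps_r) * np.tanh(1j*2*np.pi*f*d/c*np.sqrt(mu_r*eps_r))
    return 20*np.log10(np.abs((z_in - 1)/(z_in + 1)))

f = np.linspace(2e9, 18e9, 801)                    # the 2-18 GHz measurement window
eps = 7.0 - 2.2j                                   # placeholder permittivity, not measured data
mu = 1.0                                           # non-magnetic absorber, mu_r ~ 1

for d_mm in (1.5, 2.0, 2.5, 3.0):
    rl = reflection_loss(eps, mu, f, d_mm*1e-3)
    eab = (rl <= -10).mean()*16.0                  # bandwidth (GHz) with RL below -10 dB
    print(f"d = {d_mm} mm: RL_min = {rl.min():6.1f} dB at {f[rl.argmin()]/1e9:4.1f} GHz, EAB = {eab:4.1f} GHz")

# Quarter-wavelength matching (Fig. S6): d_m = n*c / (4*f_m*sqrt(|eps_r*mu_r|)), n = 1, 3, 5, ...
```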
5 a-c XRD, XPS and Raman spectra R2/CF-600, R2/CF-650 and R2/CF-700 samples, and SEM images for d-f R2/CF-600 and g-i R2/ CF-700 Fig. 5b, the XPS analysis reveals the presence of N, O and C, which also distribute over the obtained R2/CF-600 and R2/ CF-700 samples.Distinctly, the declining O 1s peak of the obtained samples suggests the reduced oxygen content with enhancing the thermal treatment temperature, which implies the improved reduction degree of G2/CGF.And in Fig. S7, C 1s and N 1s high-resolution spectra for R2/CF-600 and R2/CF-700 are provided.In particular, the strong C=O peak in Fig. S7a and the missing graphitic-N peak in Fig. S7b also suggest the low degree of carbonization at 600 °C.Conversely, the weakened C=O peak in Fig. S7c and the emergence of graphitic-N peak in Fig. S7d further confirm the deepening of reduction at 700 °C.In addition, Raman spectra are also provided.From Fig. 5c, all the obtained samples display two characteristic peaks at about 1345 and 1585 cm −1 corresponding to D and G band [44].And their peak intensity ratios (I D /I G ) are 0.958, 0.965, and 0.978.And the gradual increase in value coincides with the transformation from amorphous carbon to graphitic nanocrystals on basis of three-stage model [45,46].Thus, abundant C=C bonds in graphite nanocrystals generate a 2D plane and thus decrease the electrical resistivity [47].One can see that the Raman spectra are accorded with the above-mentioned XRD and XPS outcomes.Same to R2/CF, the SEM investigations reveal that both the obtained R2/CF-600 (Fig. 5d-f) and R2/ CF-700 (Fig. 5g-i) display the representative 2D/3D vdWs heterostructures in which 2D RGO nanosheets firm anchoring to 3D cellular structure, which implies that the influence of heat treatment temperature on the morphology can be ignored.In short, the content of graphitic carbon is modulated by regulating the heat treatment temperature, facilitating the optimization of their EM parameters and EMW absorption properties. To confirm the effect of carbonization temperature on their performance, EM parameters for R2/CF-600 and R2/ CF-700 samples were also investigated.The achieved R2/ CF-600 (Fig. 6a,) and R2/CF-700 (Fig. 6b) samples also present the gradually increasing values of ′ and ′′ when the filling ratio raises from 15 to 25 wt%.Furthermore, the outcomes reveal that RGO/CFs vdWs heterostructures at a same filler loading exhibit the evident enhancement in ′ , ′′ and tan E values (as presented in Fig. 6c), which further confirms the adjustment of EM performances by the carbonization temperature.Additionally, the 2D RL map (as presented in Fig. 6d) suggests that the EAB and RL min values for R2/CF-700 sample at 15 wt% are 5.0 GHz (from 13.0 to 18.0 GHz) and − 52.05 dB.And their corresponding d m values are 1.85 mm and 3.32 mm at the frequency of 7.8 GHz, respectively.Equally, the obtained R2/CF-700 sample with a 20 wt% filling ratio (Fig. 6e) also displays the RL min and EAB values of 14.80 dB and 4.2 GHz (13.8-18.0GHz), and their matching thicknesses are 1.47 mm.And the too high complex permittivity (Fig. 
6f) gives rise to impedance mismatching characteristic and poor EMW absorption properties of R2/CF-700 at a 25 wt% filling ratio [48].Meanwhile, the other detailed EM parameters and absorption performances of both samples at different filling ratios are summarized in Table S1.According to the acquired outcomes, it is evident that the excellent EMW absorption performances of obtained 2D/3D RGO/CFs vdWs heterostructures are also tailored by modulating the thermal treatment temperature. Analyses on the Difference in EMW Absorption Properties, Radar Cross Section Simulation and Possible EMW Absorption Mechanism Generally speaking, optimal impedance matching characteristic implies more incident EMW permeating into the interior of absorber, which is instrumental for the subsequent EMW attenuation [49].As shown in Fig. 7a, b, taking R2/ CF with the filling ratios of 25 wt% as example, the comparison Z in Z 0 values indicate the designed 2D/3D RGO/ CFs vdWs heterostructures achieve the much better impedance matching characteristic than the initial CFs, implying that the RGO addition improves the impedance matching characteristics.Additionally, the ′′ p and ′′ c values were achieved on basis of equations (Eq.S4) and (Eq.S5) in Supporting Information to evaluate the polarization and conduction loss capabilities, respectively [50,51].To determine the conductive loss based on Eq.S7, the ac values of CFs and R2/CF absorbers were acquired based on a Hall-effect system and are given in Table S2.It is apparent that the CFs exhibit the smaller value of ′′ c than ′′ p (Fig. 7c), implying the dominated contribution of polarization loss.Whereas, p values for R2/CF sample point to the major role of conduction loss at low frequency (below ca.6.0 GHz) and polarization loss within 6.0-18.0GHz frequency range.Moreover, R2/CF presents a significant enhancement in the ′′ c and ′′ p values compared to CFs in the whole tested frequency, indicating its apparently improved polarization and conduction loss capacities.In order to further verify the polarization relaxation loss, the Cole-Cole curves of CFs and RGO/CFs were drawn on the basic of Debye relaxation theory and are displayed in Fig. S8 [52].Generally, a semicircle and long straight line corresponds to a Debye relaxation process and conduction loss, respectively [53].Obviously, compared with CFs, RGO/CFs display relatively more semicircles in addition to linear regions, suggesting the enhancement of polarization loss in RGO/CFs.The obtained outcomes demonstrate that incorporating RGO to construct the 2D/3D RGO/CFs vdWs heterostructures simultaneously improves the impedance matching characteristic, conduction and polarization loss abilities.The linkage effect leads to their boosted EMW absorption properties.Besides, the radar cross section (RCS) measurement was carried out employing computer simulation technology (CST).As shown in Fig. 7d, the CST simulation outcomes reveal that the plate of perfect conductive layer (PEC) displays the strongest scattering signal.Whereas the PEC coated by CFs present the much higher signal intensity than that covered by R2/CF (2.5 mm thick).These contrast results further prove that most of EMW energy is effectively attenuated by 2D/3D RGO/CFs vdWs heterostructures.As compared in Fig. 
7e, the obtained R2/CF sample exhibits the lowest RCS values (less than − 10 dB m 2 ) within 0-180° angle region than PEC and CFs, which corresponds well with the prominent EMW absorption properties.On the other hand, R2/CF exhibits significant radar stealth property in the practical applications compared with PEC and CFs.The comparison of the designed RGO/CFs with the other recently reported carbon-based absorbers is detailed in Table S3.Overall, the resulting RGO/CFs exhibit outstanding performances, incorporating the characteristic of "strong," "broad," "thin" and "light." Combined with the experiments and analyses demonstrated previously, it can be concluded that the designed cellular porous foams endow the absorber lightweight property and outstanding EMW absorption performances.For a more intuitive understanding, Fig. 8 summarizes the conceivable EMW attenuation mechanisms of 2D/3D RGO/ CFs vdWs heterostructures.As a prerequisite, their typical mixed-dimensional cellular porous materials greatly correct impedance mismatch characteristics compared to singledimensional structure.Based on the optimized impedance matching, most incident EMW can effectively permeate into the designed RGO/CFs absorbers and induced multiple reflection and scattering to achieve energy attenuation [54].Meanwhile, the 2D RGO nanosheets and 3D CFs are cross-linked with each other to construct the wonderful conductive network.Benefiting from electron migration and hopping along among graphite nanocrystals, the conduction loss efficiently facilitates energy transformation from EMW energy to thermal energy, thus achieving attenuation [55,56].Besides the conduction loss, polarization loss is another crucial factor in accelerating EMW attenuation.Therein, the foam-like 2D/3D vdWs heterostructures and composite components provide numerous heterogeneous interfaces such as solid-air interfaces, different components interfaces, where interfacial polarization loss occurs when the different electrical properties charges accumulate on the heterogeneous interfaces [57,58].Another one, the dipole polarization loss deriving from defects, heteroatoms dopant as well as remaining polar groups inside RGO/CFs vdWs heterostructures also contribute to the attenuation of penetrated EMW [59,60].Overall, these special 2D/3D vdWs heterostructures consumedly optimize the impedance matching property and promote the dielectric loss ability, which contribute to their excellent EMW absorption performances. Versatility and Possible Application Prospects To investigate the practical application, we also conducted the corrosion resistance measurement using electrochemical measurement technique to further clarify the stability of designed RGO/CFs vdWs heterostructures in the various extreme conditions.In a typical experimental procedure, the obtained R2/CF sample was immersed in KOH solution (pH = 14), 3.5 wt% NaCl solution and HCl solution (pH = 1) for 30 min, respectively.As we all known, the high positive E corr and low I corr value imply the excellent corrosion resistance of sample [61].From Tafel curves shown in Fig. 9a, compared to HCl (− 0.11 V and 158.9 μA) solution, the R2/ CF sample exhibits a high positive E corr and small I corr values in the NaCl (0.278 V and 2.267 μA) and KOH (0.075 V and 8.201 μA) solutions, implying its better corrosion resistance under the neutral and alkaline conditions.Additionally, electrochemical impedance spectroscopy (EIS) measurement results (as presented in Fig. 
9b) show that the R2/CF sample displays much larger radius of impedance arc under the NaCl and KOH solutions than HCl solution, indicating the strong charge transfer resistance ability and good anti-corrosion performance.As shown in Fig. 9c, it is once confirmed that the obtained sample has excellent corrosion resistance in neutral and alkaline condition.Based on the above findings, the outstanding anti-corrosion performance should be attributed to the high physical/chemical stability of carbon materials, dense heterostructures and excellent hydrophobicity.And strong hydrophobicity of R2/CF (water contact angle up to ca. 130° shown from Fig. S9) avoids the penetration of corrosion medium.Besides, good thermal insulating performance also protects microwave coating layers from high-temperature damage [62].Accordingly, we provided intuitive comparison of insulation properties among R2/CF, commercial polyurethane (PU) foam and polyvinyl chloride (PVC) plate insulations.Notably, all of materials were set as 3.0 mm thick and the heating temperature was 100 °C. Figure 9d presents the thermal infrared photos of samples collected at various time points ranging from 0 s to 20 min.Visually, the detected temperatures of PU and PVC are stabilized at ca. 66 °C, whereas R2/CF remains at ca. 58 °C even 20 min, which profits from the highly porous heterostructure [63].As compared in Fig. 9e, the thermal radiation performance of R2/CF is comparable to or even better than that of commercial material, implying the promising prospect of our designed RGO/CFs vdWs heterostructures in the practical applications.This satisfactory property is ascribed to the high porosity of 2D/3D R2/CF heterostructures, which extends the path of thermal transfer and further weakens the intensity of heat conduction and thermal radiation.More intuitively, the thermal insulation performance of R2/CF can be observed by heating a beaker containing 5 mL water without and with interlayer using a spirit lamp.As can be seen from Fig. 9f, after laying the beaker on asbestos mesh, water vapor begins to appear within 10 s and the water starts to boil at 60 s.After that, PU with 3.0 mm thick is selected as the control spacer, which is placed between the asbestos wire gauze and beaker.With the blocking effect of PU, steam emergence time and boiling time are extended to 30 s and 2 min, respectively.However, it can be seen that the PU deforms at 10 s and occurs apparently coking at 30 s. Finally, the same experiment was carried out using R2/CF as spacer.Amazingly, one can find from the enlarged images (named 3 and 4) that numerous minute bubbles have generated at the base of beaker at 4 min, it still fails to boil even after 10 min.It is evident that the R2/CF displays the much better thermal stability than the commercial PU.In general, these favorable outcomes indicate that the fabricated R2/CF owns the protruding thermal resistance performance and is suitable for aviation and space sectors and more complex environments. 
Conclusions In summary, multifunctional 2D/3D RGO/CFs vdWs heterostructures could be meticulously engineered and synthesized via simple freeze-drying, immersion absorption, secondary freeze-drying and subsequent carbonization processes. The acquired outcomes indicated that the RGO introduction greatly optimized the impedance matching characteristics of 2D/3D RGO/CFs vdWs heterostructures and improved their polarization and conduction loss capabilities. And the EM parameters of 2D/3D RGO/CFs vdWs heterostructures could be effectively modulated by regulating the RGO content and carbonization temperature. Hence, the linkage effect of the optimized impedance matching and the enhanced dielectric loss capabilities endowed the designed 2D/3D RGO/CFs vdWs heterostructures with excellent EMW absorption properties. As a result, the R2/CF displayed a low RL min (−50.58 dB) and a broad EAB value (6.2 GHz). More importantly, the reasonable component design and mixed-dimensional vdWs heterostructures contributed to the significant radar stealth properties, good corrosion resistance performances as well as outstanding thermal insulation capabilities of 2D/3D RGO/CFs vdWs heterostructures, displaying great potential in complex and variable conditions.
Fig. 1 a Experimental diagram of 2D/3D GO/CGFs and RGO/CFs vdWs heterostructures, b digital image of R2/CF standing on leaves, c FTIR spectra, d XRD patterns, e XPS spectra of CFs and RGO/CFs. f, g C 1s and N 1s XPS spectra of R2/CF
Fig. 3 a-d ε′ and ε″ values, e comparison of ε′ and ε″ values, f dielectric loss tangent values for CFs and RGO/CFs with different filling ratios
Fig. 7 a-c Impedance-f curves, ε″c and ε″p values for CFs and R2/CF, and d, e 3D RCS simulation and simulated RCS values at 0-180° incident angle for PEC, CFs and R2/CF
Fig. 9 a Tafel curves, b EIS plots, c Bode plots of R2/CF in HCl solution (pH = 1), 3.5 wt% NaCl solution and KOH solution. d, e Thermal infrared images and corresponding temperature-time curves for PU, PVC, R2/CF captured at different times (from 5 to 20 min), and f Beaker containing 5 mL water placed on asbestos mesh, PU and R2/CF for heating by a spirit lamp
2024-06-18T06:17:07.967Z
2024-06-17T00:00:00.000
{ "year": 2024, "sha1": "05fe00708aaacdc27addd8074517877fe8ca09bb", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "e8e161d8d154ce7be412d3f0c868624aa08b824a", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
10105710
pes2o/s2orc
v3-fos-license
The relationship between neighborhood empowerment and dental caries experience : a multilevel study in adolescents and adults REV BRAS EPIDEMIOL SUPPL D.S.S. 2014; 15-28 ABSTRACT: Objective: To investigate the relationship of contextual social capital (neighborhood empowerment) and individual social capital (social support and social network) with dental caries experience in adolescents and adults. Methods: A population-based multilevel study was conducted involving 573 subjects, 15-19 and 35-44 years of age, from 30 census tracts in three cities of Paraíba, Brazil. A two-stage cluster sampling was used considering census tracts and households as sampling units. Caries experience was assessed using the DMFT index (decayed, missing and filled teeth) and participants were divided into two groups according to the median of the DMFT index in low and high caries experience. Demographic, socioeconomic, behaviors, use of dental services and social capital measures were collected through interviews. Neighborhood empowerment was obtained from the mean scores of the residents in each census tract. Multilevel multivariate logistic regression was used to test the relationship between neighborhood empowerment and caries experience. Results: High caries experience was inversely associated with neighborhood empowerment (OR = 0.58; 95%CI 0.33 – 0.99). Individual social capital was not associated with caries experience. Other associated factors with caries experience were age (OR = 1.15; 95%CI 1.12 – 1.18) and being a female (OR = 1.72; 95%CI 1.08 – 2.73). Conclusion: The association between neighborhood empowerment and caries experience suggests that the perception of features of the place of residence should be taken into account in actions of oral health promotion. The relationship between neighborhood empowerment and dental caries experience: a multilevel study in adolescents and adults INTRODUCTION Reducing inequalities in oral health is one of the main challenges that health policy makers have to face 1 , with the identification of social determinants of oral health being one of the possible ways to overcome this difficulty 2,3 .Socioeconomic factors are considered determinants of health conditions in populations 4 and, more recently, there has been a growing interest in understanding how the characteristics of societies and the various forms of social organizations influence the health and well-being of individuals and groups 5,6 .There is evidence that individual health vary in different social contexts, and that many measures at the individual level are strongly conditioned by social processes that operate at the group level 7 . Epidemiological studies in Brazil show that lower income, lower education, non-white skin color and inadequate housing are individual socioeconomic determinants of dental caries [8][9][10][11][12] .Furthermore, contextual social determinants were associated with caries experience.Access to piped water, the Human Development Index (HDI) and the Gini Index, which assesses income inequality, were related to caries in children and adolescents 8,12 . 
capital" has been observed as a possible characteristic associated to health conditions, although there is no consensus on its definition and measurement.Bordieu 13 and Coleman 14 define social capital as the reciprocity in social relations, while for Putnam 15 it is the set of norms and social structure networks that empower individuals to act together and more effectively in pursuit of common goals.Therefore, the concept encompasses civic culture, trust among members of the community, involvement in community affairs and good relationship between neighbors, and concerns norms and networks that foster collective action aimed at the common good [16][17][18] . According to Kawachi et al. 19 , there are three main types of social capital: bonding, bridging and linking.Bonding is represented by horizontal close relationships between individuals or groups with similar demographic characteristics, such as relationships between family members and close friends.These bonds influence the quality of life through the promotion of mutual understanding and support.Bridging stands for the most extensive relationships networks with other individuals and communities, and are vital to connect individuals and communities to resources or opportunities that are outside of their networks of personal relationships.Finally, linking refers to alliances with individuals in positions of power, that is, those who have the necessary resources for social and economic development, and it can be characterized as political awareness while integrating with other communities. In general, social capital can be considered both at an individual and at a contextual level.The individual social capital is defined as resources and different forms of support that are within the individuals' social networks 20 .Thus, measures of social and support networks have been used to assess individual social capital 21,22 .On the other hand, contextual or collective social capital emphasizes the resources that can be built collectively by individuals who are socially interconnected aiming to achieve collective goals, and has been evaluated and studied both in local levels of aggregation, such as neighborhoods, census tracts or neighborhoods, and in broader levels, such as municipalities, states or countries 20 .The neighborhood social capital is linked to the relationship between individuals and the social groups inserted into neighborhoods, and is a product derived from the continuous interaction between neighbors 23 .Neighborhoods can be defined geographically and correspond to social structures that include, in addition to individual social networks, shared norms and mutual trust, promoting cooperation for mutual benefit 24,25 .The measurement of neighborhood social capital may be an aggregate measure obtained from individual responses.Some dimensions used include social trust, social control, empowerment, political efficacy and safety in the neighborhood 26 . Despite the growing number of studies that linked the social capital to oral health 21,22,[26][27][28][29][30] , there are few studies involving adults, and those who assess both the effect of individual and of collective social capital 18,21,22,30 .Specifically for the outcome of dental caries, Patussi et al. 
26 , in a multilevel study in the Federal District, Brazil, found a negative association between neighborhood empowerment and caries in adolescents.In an ecological study in 39 Japanese cities, it was found that the variance of the distribution of caries experience in 3 year-old children was explained in 6.6% of cases by individual variables and in 47.2% of cases for variables in the community level, suggesting that the community context affects the distribution of caries 29 .Therefore, there is a scarcity of studies on social capital and dental caries in Brazilian adults, and an absence of studies to evaluate simultaneously the individual and contextual social capitals.The aim of this study was to investigate the relationship between contextual (neighborhood empowerment) and individual (network and social support) social capital and caries experience in adolescents and adults. METHODOLOGY A cross-sectional, population-based study was conducted with individuals in the 15 -19 and 35 -44 years old age groups, between 2010 and 2011, in three randomly selected cities of the First Health Macroregion in the State of Paraíba, Northeastern Brazil.Paraiba was chosen because of the lack of studies on social determinants and oral health in the Northeast region of the country.The state is administratively divided into four Health Macroregions, and the sample sought to represent the First Macroregion, as it is the most populous (1,513,173 inhabitants, according to the 2008 census made by the Brazilian Institute of Geography and Statistics). The sampling was made by conglomerates, in two stages.First, we selected a random sample of 30 census tracts (primary sampling unit).Then we proceeded to the enrollment of blocks and households in each sector, in order to guide the selection of individuals with probability proportional to size of the census tract.Households (secondary sampling unit) were then selected based on a systematic sampling with an interval proportional to the number of households in the census tract, and within those, all individuals who met the age range of interest were invited to participate in the study. The inclusion criterion was restricted to age (birth year between 1991 and 1995 and between 1966 and 1975) and the exclusion criterion was residing outside the territorial area of the selected census tract. The variable "caries experience", as measured by the DMFT index (mean number of permanent teeth affected -decayed, missing and filled -per individual), was used for sample size calculation.The minimum sample size estimated was of 571 individuals selected proportionally from 30 census tracts, with a significance level of 5%, power of test of 80% and a design effect of 1.5 to detect a difference of 10% for the high prevalence of caries experience (DMFT > median) among census tracts with high and low empowerment. DATA COLLECTION Individual data were obtained from oral examinations conducted in the households, and from individual interviews structured for measurement of individual social capital and neighborhood empowerment, in addition to covariates. 
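The sample-size reasoning above can be illustrated with a short calculation. The sketch below is not the authors' code; it assumes the 10% difference refers to proportions of high caries experience around the median split (taken here as 50% vs. 60%), uses alpha = 0.05 and 80% power, and inflates the simple-random-sample size by the design effect of 1.5. The result is of the same order as the reported minimum of 571, but the exact baseline proportions are not given in the paper.

```python
# Hedged sketch of a two-proportion sample size calculation with a design effect.
# The 50% vs. 60% proportions are an assumption, not values reported by the authors.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = abs(proportion_effectsize(0.50, 0.60))   # Cohen's h for the assumed 10% difference
n_per_group = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                           power=0.80, alternative="two-sided")
n_total = 2 * n_per_group * 1.5                   # two groups, inflated by design effect 1.5
print(f"~{n_per_group:.0f} per group, ~{n_total:.0f} in total under these assumptions")
```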
DENTAL CARIES EXPERIENCE The outcome of interest was the experience of dental caries assessed by the DMFT index.Clinical examination was performed by three previously calibrated examiners (Kappa intra and inter-rater coefficients greater than 0.93 and 0.89, respectively, for the DMFT index).The final score was converted into a dichotomous variable using the median as the cutoff: low caries experience (DMFT ≤ 9) and high caries experience (DMFT > 9) 26 . MEASUREMENT OF INDIVIDUAL-LEVEL VARIABLES The individual variables included individual social capital, sociodemographic and behavioral characteristics and use of dental services. Social support (bonding) and social network (bridging) were used to assess the individual social capital.The social support scale used in this study consisted of 19 items representing five dimensions of functional support: material, affective, emotional, informational and positive interaction 31,32 .The social network was assessed with 5 questions regarding the individual's relationship with their family and friends and their participation in social groups 32 . The individual covariates collected included demographic and socioeconomic characteristics (age, gender, ethnicity, education, family income and health conditions), risk behaviors related to oral health (frequency of intake of sweets and tooth brushing) and use of dental services (having at least one dental appointment and time elapsed since the last one). MEASUREMENT OF NEIGHBORHOOD-LEVEL VARIABLES The neighborhood variable was the perception of empowerment in the area of residence by the participants, defining neighborhood empowerment as social interaction processes that enable people to improve their individual and collective skills and exercise better control over their lives 26,27 .The instrument used was previously developed and used in a Brazilian population with good psychometric properties 26,27 .Still, the questionnaire was pre-tested in a pilot study involving 20 individuals from the same population and not participating in the main study, in order to assess its reliability.An intraclass correlation coefficient (7 day interval) of 0.808 and Cronbach's α of 0.887 were obtained, showing a good temporal reliability and excellent internal consistency.The empowerment was measured with a scale of 5 items related to the possibility with which each individual, if deemed necessary, would sign a petition, make formal complaints, contact local authorities, participate in meetings and form groups to talk about problems plaguing their neighborhood in order to improve their area of residence 33 .Three response options were provided: "I disagree" (code = 0), "I somewhat agree" (code = 1) and "I Agree" (code = 2), allowing, by adding the items, a score that ranges from 0 (lowest empowerment) to 10 (highest empowerment) for each individual.Subsequently, the final score of each participant was added in the census tract (area) level, since the chosen items reflect the idea that empowerment is an ecological feature 19 , characterizing a representative dimension of the census tract in this study 26,27 .The thirty sectors were then categorized into low, intermediate and high neighborhood empowerment, according to the tertiles of the sectors' score distribution 22,26 . 
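As a concrete illustration of the empowerment scoring and aggregation described above, the following sketch (not the authors' code) sums the five items per respondent, averages the scores within each census tract, and splits the thirty tracts into tertiles; the column names item_1 ... item_5 and tract_id are hypothetical.

```python
# Hedged sketch of neighborhood empowerment scoring: items coded 0 ("I disagree"),
# 1 ("I somewhat agree") and 2 ("I agree"); individual scores range from 0 to 10.
import pandas as pd

def empowerment_tertiles(df: pd.DataFrame) -> pd.Series:
    items = [f"item_{k}" for k in range(1, 6)]                       # hypothetical column names
    individual_score = df[items].sum(axis=1)                         # 0 (lowest) to 10 (highest)
    tract_score = individual_score.groupby(df["tract_id"]).mean()    # aggregate per census tract
    # categorize the 30 census tracts into low / intermediate / high empowerment
    return pd.qcut(tract_score, q=3, labels=["low", "intermediate", "high"])
```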
STATISTICAL ANALYSIS A multilevel logistic model was used to estimate the association between contextual social capital, measured in this research by neighborhood empowerment (area level variable), individual social capital (social support and network) and caries, controlled for possible confounding factors. A bivariate analysis was performed, testing the crude association between covariates and caries experience.At this stage, the estimated odds ratios (OR) and confidence intervals of 95% (95%CI) were used.The variables with a p value lower than 0.10 (Wald Statistical Test) were selected to enter the multilevel modeling.The collinearity between variables was detected for: age and education; education and family income; frequency of intake of sweets and frequency of tooth brushing; frequency of intake of sweets and of dental appointments; having at least one dental appointment and time elapsed since the last one.The selection criterion to include the variable in the multivariate analysis was the degree of statistical significance in the bivariate obtained.Thus, the covariates considered in the multivariate analysis were age, gender and frequency of intake of sweets. As the caries experience was dichotomized, a multilevel logistic model, based on the logarithm of odds ("logit"), was used.Multilevel models allow the estimation of the contextual effect of a variable measured at the area level, considering the spatial distribution of individuals within the areas.The structure of a model with two levels of random intercepts and two fixed angles was adopted to group the individuals in the census tracts and estimate the cumulative distribution probability of the groups under comparison.Estimates of fixed and random parameters of the two ordered logarithm models were calculated by predictive/penalized quasi-likelihood (PQL) procedures, with a second-order Taylor expansion. The strategy adopted in the modeling consisted of first estimating the gross association between neighborhood empowerment and caries experience, and then gradually adjusting for factors that could explain this association.The unadjusted association of neighborhood empowerment (Model 1) was sequentially adjusted for individual social capital in Model 2, for demographic variables (age and gender) in Model 3 and for frequency of intake of sweets in Model 4. Significance adopted for the multilevel analysis was 5%. Statistical analyzes were performed in the following software: SPSS 17.0 (Statistical Package for Social Sciences for Windows ® , SPSS Inc., Chicago, IL, USA) and MLwiN 2.24 (Centre for Multilevel Modeling, Bristol, UK). The study was approved by the Ethics Committee on Human Research of the Health Department of the State of Paraíba, Protocol no.0001.0.349.000-09. RESULTS Initially, 685 individuals were invited to participate and 583 (response rate of 85.1%) agreed by signing the free and informed consent form.Participants without information for the outcome of interest or to any of the selected variables for multilevel modeling were excluded (n = 10), resulting in a final analysis sample of 573 individuals.The mean DMFT was 10.5 (± 7.6), ranging from 0 to 32, with a median of 9.0, with only 4.5% of individuals free of caries (DMFT = 0). 
Table 1 shows the distribution of individual characteristics of the sample and the unadjusted association between these variables (level 1 - individual) and the caries experience. Considering the significance level of 10%, an unadjusted association between age, gender, education (completed years of studies), family income and caries experience was observed. All variables of the blocks of behaviors related to oral health and use of dental services were associated with caries experience (p < 0.10).

The distribution of caries experience according to the social capital variables is arranged in Table 2. No crude association was observed between empowerment and caries experience. However, considering the purpose of this study, it was decided to introduce it in the modeling. None of the dimensions of social support reached significance for only one item selected, and the social network (frequency of sports or artistic activities in the last year) was associated with caries experience.

The results of the analysis of the multilevel logistic regression between empowerment and caries experience can be seen in Table 3. Although empowerment was not associated with caries experience in the unadjusted model (Model 1), it was maintained throughout the modeling due to the purpose of the study. In the second model (Model 2), the individual social capital (social network) was added, noting its association with the outcome. Then, the model was gradually adjusted for individual variables considered as potential confounders, such as demographic variables (Model 3) and behaviors related to oral health (Model 4).

According to the final model, individuals living in census tracts with intermediate empowerment had a 43% lower chance of having high caries experience than those from census tracts with low empowerment (OR = 0.57; 95%CI 0.33 - 0.99). Additionally, no relationship between individual social capital and caries experience was observed (OR = 1.42; 95%CI 0.87 - 2.31). Among the other individual factors, age (OR = 1.15; 95%CI 1.12 - 1.18) and female gender (OR = 1.72; 95%CI 1.08 - 2.73) remained associated with high caries experience (Table 3).

DISCUSSION

In this study, the variation in caries experience among census tracts was explained by the levels of empowerment perceived by residents. Adolescents and adults living in areas with intermediate empowerment had lower caries experience than those who lived in neighborhoods with low levels of empowerment. However, the individual social capital was not associated with caries. It was also observed that the caries experience was related to age and female gender. These findings suggest that the perception of the characteristics of the individual's context of residence is important for caries experience. These results are consistent with previous studies that consider the social capital as a possible contextual factor influencing caries 26,29,34. The relationship between social capital and oral health can be explained by three mechanisms. First, social capital generates benefits to health by influencing behaviors from the dissemination of information on health and from a greater likelihood of the population adopting these positive behaviors 27. According to Turrel et al.
35 , neighborhoods with high social capital are possibly characterized by shared norms and a general consensus on what would be "appropriate" practices, not only to the benefit of the individual, but to the benefit of the neighborhood as a whole.This "moral" dimension of social capital could influence people's behavior, since it would approve some actions such as regular check up dental appointments, and disapprove others, such as smoking in public places, thus producing a positive impact on health. A second explanation is that neighborhoods with high social capital can promote and protect the psychosocial health, as they are supposed to be communities in which greater trust, reciprocity and mutual concern among people can be observed.Therefore, to live within this context could imply lower levels of fear, anxiety and stress, as well as an increased self-esteem of individuals, with some of these as mediators of behaviors related to oral health 27,35 .As a third mechanism, there is the fact that high levels of social capital in the neighborhood are usually accompanied by a greater number of social networks between people, forming groups and organizations that rely on the participation of its residents not only in civic activities, but also in political processes related to various fields of social welfare, such as education, security, transportation and recreation 35 .Thus, social capital can influence health by creating a more participatory, humane, efficient, appropriate and better coordinated health care system 27 .There is evidence that communities with shared values and a strong sense of belonging can be better organized and are more successful in structuring processes for modifying the health care system to be consistent with local standards of behavior, shared values and community goals 36 .Pattussi et al. 27 add that, in such circumstances, the social capital would still assist communities or populations to make more efficient use of physical resources available locally.In this study, empowerment was used to measure social capital at the collective level, since it is considered a dimension of social capital 26,27 .The items used for its measurement covered a range of attitudes that could be taken by individuals to improve their neighborhood.Such actions require that communities have goals, and see them as a collective thing, as opposed to individual interests.They relate to the desire to act for the common good, a condition which, in turn, implies mutual trust and solidarity among residents.Thus, cohesive communities have more organizational capacity to decide their common interests and to demand the provision of appropriate local services 37 .From this perspective, the results in this study can be explained by the influence of neighborhood social capital on increased access and better organization of health services. The lack of association between individual behaviors related to oral health and caries experience in the adjusted analysis corroborates the mechanism that links the neighborhood social capital with the diffusion of health-promoting behaviors, i.e. a contextual effect.Unlike observed by Pattussi et al. 
26 , in a study conducted in the Federal District and restricted to adolescents, this study could not relate the experience of dental caries with the frequency of intake of sweets and with having dental appointments.These discrepancies may be due to methodological differences in the measurement of behavioral variables or variations on the age group investigated and on the study location. There are few studies that evaluated the relationship between contextual and individual social capital and oral health in adolescents and adults through a multilevel approach.The findings of this study are consistent with previous studies on the independent effect of contextual social capital in the occurrence of dental caries.However, previous studies were conducted with children and adolescents 26,29,34 .The inclusion of the simultaneous analysis of the collective and individual social capital was planned to verify if the probable influence of this social determinant in the occurrence of dental caries would be attributed to its contextual effect or its compositional effect.It is known that the place where people live influences their health, but this influence can be explained by the effect of the context itself or assigned to the individual characteristics of their residents 38,39 .Thus, this study sought to investigate what would be more important for caries experience: the empowerment of the neighborhood where the individual resides (contextual social capital) or their social links and connections in the neighborhood (individual social capital).The answer to that question is that the characteristics of the area play a more prominent role than the social network and social support.This type of evaluation is only possible through the use of multilevel analysis, still not very explored in studies on oral health inequities 35 .This approach allows the distinction of the contextual effect from the compositional, since it considers the influence of the variables measured at different levels 35,38,39 . The sectional design of this study is one of its limitations, since it is not possible to establish a causal relationship between the exposed and the outcome 26,34 .Furthermore, one must consider the possibility of reverse causality, since the concentration of individuals with more caries in areas with less social capital would occur as a result of their low socioeconomic status.While the inclusion of adolescents and adults is positive due to the lack of studies that consider these age groups, it is not possible to rule out the possibility of residual bias of age, since both caries and age were analyzed as dichotomous variables and are strongly related to each other.Another relevant aspect to be discussed is the non-investigation of the systemic exposure to fluoride as a possible confounder of the association between neighborhood empowerment and caries, as done in previous studies 26,29 .This occurred because there is no fluoridation of the water supply in municipalities selected to compose this study's sample. 
In research on social capital and health, there is no consensus on the geographical unit used to define neighborhood. In the present study, the use of census tracts as neighborhoods was based on the convenience to the sampling process, as the census tract is considered by IBGE as the smallest territorial unit, and maps and information on these areas are available. However, this definition may not match the perception that individuals have of the neighborhood, both geographically and in relation to the sense of belonging to that area 35. The average of 19 respondents per census tract in this study can be considered short of ideal, because the recommended number of individuals allocated per unit of aggregation is between 25 and 35 in multilevel studies 40. Yet, the association observed between neighborhood empowerment and caries suggests that the average number of individuals per tract in this study achieved sufficient power to test the hypothesis. The variation in the number of respondents per census tract may also explain the association of empowerment and caries only in the intermediate level of empowerment. Finally, regarding the non-response rate (14.9%), it was not possible to estimate its variation among the 30 census tracts. Pattussi et al. 26 reported that, despite areas with lower response rates showing less accurate and valid neighborhood empowerment measures, this aspect did not affect the outcome.

The findings of this study present implications for policy makers and leaders in the health field. In general, the major questions faced by these people concern the levels at which the actions and programs should be targeted. In other words, when faced with the question "At what level is it most appropriate to intervene in order to improve oral health? In individuals or in the places where they reside?", the answer should be: both. Health promotion measures should not be restricted only to individuals, because they are ineffective in modifying their health-related behaviors. New perspectives in this regard emphasize that the adoption of healthy behaviors is linked to changes in the environment where people live and work, for they allow the creation of conditions in which healthy choices are the easiest to be taken 1,2. This would be one of the viable ways to deal with the causes of the causes, called "upstream social conditions", which originate inequities in health in modern society 1. Future prospective studies are necessary to confirm the hypotheses raised, as well as to better guide public dental health policies.

Table 2. Distribution of variables related to Social Capital: Contextual (neighborhood empowerment) and Individual (Bonding and Bridging). Crude Odds Ratios, with respective 95% confidence intervals, estimated by multilevel analysis for Dental Caries Experience. *Social support: OR estimates evaluated at each 10-point increase in the scale; **n = 571; ***n = 572; & mean ± SD. Exchange rate of 1 real = 1.70 American dollars at the time of the study.

Table 3. Multilevel Logistic Regression Models for the association between Empowerment and Dental Caries Experience, controlled for confounders. a Model 1, unadjusted; b Model 2, Model 1 adjusted for individual social capital (social network); c Model 3, Model 2 adjusted for sociodemographic confounders (age and gender); d Model 4, Model 3 adjusted for frequency of intake of sweets. Exchange rate of 1 real = 1.70 American dollars at the time of the study.
Social interaction layers in complex networks for the dynamical epidemic modeling of COVID-19 in Brazil We are currently living in a state of uncertainty due to the pandemic caused by the SARS-CoV-2 virus. There are several factors involved in the epidemic spreading, such as the individual characteristics of each city/country. The true shape of the epidemic dynamics is a large, complex system, considerably hard to predict. In this context, Complex networks are a great candidate for analyzing these systems due to their ability to tackle structural and dynamic properties. Therefore, this study presents a new approach to model the COVID-19 epidemic using a multi-layer complex network, where nodes represent people, edges are social contacts, and layers represent different social activities. The model improves the traditional SIR, and it is applied to study the Brazilian epidemic considering data up to 05/26/2020, and analyzing possible future actions and their consequences. The network is characterized using statistics of infection, death, and hospitalization time. To simulate isolation, social distancing, or precautionary measures, we remove layers and reduce social contact’s intensity. Results show that even taking various optimistic assumptions, the current isolation levels in Brazil still may lead to a critical scenario for the healthcare system and a considerable death toll (average of 149,000). If all activities return to normal, the epidemic growth may suffer a steep increase, and the demand for ICU beds may surpass three times the country’s capacity. This situation would surely lead to a catastrophic scenario, as our estimation reaches an average of 212,000 deaths, even considering that all cases are effectively treated. The increase of isolation (up to a lockdown) shows to be the best option to keep the situation under the healthcare system capacity, aside from ensuring a faster decrease of new case occurrences (months of difference), and a significantly smaller death toll (average of 87,000). Introduction Although we have experienced several pandemics throughout history, COVID-19 is the first major pandemic in the Modern Era.The last critical global epidemic occurred in 1918 and became known as the Spanish flu.But, in 1918, the reality was quite different.Scientific and medical knowledge was much more limited, making it difficult to fight the disease.Furthermore, the world was not globalized, the means of transport were not as agile as the current ones and the population was much smaller.The 21st century is marked by globalization and an intricate and intense social network, which connects in one way or another to everyone on the planet.The latter fact increases the danger that a local epidemic disease will rapidly evolve into a pandemic like what happened in Wuhan, China, and now is all over the world. 
The form of propagation and contagion of the Sars-CoV-2 virus occurs by direct contact between individuals, through secretions, saliva, and especially by droplets expelled during breathing, speaking, coughing, or sneezing. The virus also spreads by indirect contact, when such secretions reach surfaces, food, and objects [41]. Besides, infected people take a few days to manifest symptoms, which can be severe or as mild as a simple cold. There is even a large proportion of infected people who remain asymptomatic [37]. This makes it practically impossible to quickly identify the infected and apply effective measures to limit the spread of the disease. Also, Sars-CoV-2 was discovered in December 2019, which makes it very recent in the face of the current epidemic. Little is known about the COVID-19 disease, which appears to be highly lethal, with no drugs to prevent or treat it. The concern is greater since direct (individual - individual) and indirect (individual - objects - individual) social relations are the means of spreading the disease. Thus, the social interaction structure is the key to create strategies and guide health organizations and governments to take appropriate actions to combat the disease.

One of the main concerns is overloading the health system. The first case in Brazil was confirmed on February 26, a 61-year-old man who had traveled to the Lombardy region in northern Italy. Now, in the middle of May, there are more than 200,000 cases and 14,000 deaths in all states of Brazil [30]. The concern is even worse due to the country's social inequality: over 80% of the population relies solely on the public health system, and this distribution is not uniform. According to [11], there are only 9 hospital beds per 100,000 people in the North region, while the Southeast accounts for 21 hospital beds. The treatment of severe cases requires the use of respirators/ventilation in intensive care units (ICU), and if simultaneous infections occur there will be no beds to meet the demand and a possibly large number of victims. Thus, it is urgent to develop models and analyses to try to predict the evolution of the virus. Also, as noted in Figure 1, Brazil is running towards being the next epicenter of the pandemic. It has already exceeded the number of cases in important countries such as Germany, China, Japan, Italy, Iran, South Korea, and France (the rates consider the population size of each country and are on a logarithmic scale).

Figure 1: Total number of cases reported in Brazil compared to other countries (May 5, 2020 [24]). It is possible to notice that Brazil is surpassing countries such as Italy, South Korea, Japan, and China, and it is reaching the relative number of cases in the United Kingdom and France. As of the date of this study, the United States is the epicenter of the pandemic.
Since COVID-19 presents a unique and unprecedented situation, this work proposes a specific model for the current pandemic. Based on the classic epidemic model SIR, also extended to SID [35], SIASD [9] and SIQR [14], we propose a more realistic model to better represent the effects of the COVID-19 disease by adding more infection states. The proposed approach also considers social structures and demographic data for complex network modeling. Each individual is represented as a node and edges represent the social interaction between them. The multi-layer structure is implemented by different edges representing specific social activities: home, work, transports, schools, religious activities, and random contacts. The probability of contagion is composed of a dynamic term, which depends on the circumstances of the social activity considered, and a global scaling factor β for controlling characteristics such as isolation, preventive measures, and social distancing.

The proposed model can be used to analyze any society given sufficient demographic data, such as medium/big cities, countries, or regions. Here we analyze in depth the Brazilian data. The SIR model is applied through the network using an agent model, and each iteration of the system is simulated using the 24-hour pattern, allowing us to understand the dynamics of the disease throughout the days. The results show the importance of social distancing recommendations to flatten the curve of infected people over time. This is currently maybe the only way to avoid a collapse of the health system in the country.

The paper is divided as follows: Section 2 presents important concepts about complex networks, the SIR model and its applications. Section 3 explains our proposed approach, and Sections 4 and 5 present the results, discussion, and conclusions of the work.

Epidemic Propagation on Complex Networks

Created from a mixture of graph theory, physics, and statistics, Complex Networks (CN) are capable of analyzing not only the elements themselves but also their environment to find patterns and obtain information about the dynamics of a system. As most of the natural structures are composed of connected elements, graphs are suitable to analyze most real-world phenomena. Over the past two decades researchers have been showing that many real networks do not present a random structure, and their emergent patterns can be used to understand and characterize a model [8,39]. Complex network analysis has then been applied to sociology, physics, nanotechnology, neuroscience, biology, among other areas [12,13].

To start with a formal definition, a graph G is a set {V, E} where V is composed of N vertices (also known as nodes or elements) {v_1, ..., v_N} and E is the set of edges (or connections) e(v_i, v_j) among its elements. Edges represent the relationships between two elements, and their value can also represent the strength or weight of a connection if e(v_i, v_j) > 0. If e(v_i, v_j) ∈ {0, 1} for all 0 < i, j ≤ N, the model is an unweighted graph. Furthermore, a network is undirected if e(v_i, v_j) = e(v_j, v_i), and directed otherwise.
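To make the notation above concrete, the following small sketch (illustrative only, not from the paper) represents an undirected weighted graph with networkx, where a positive edge value e(v_i, v_j) stores the strength of a social contact.

```python
# Illustrative undirected, weighted graph: e(v_i, v_j) = e(v_j, v_i), and a missing
# edge corresponds to e(v_i, v_j) = 0.
import networkx as nx

G = nx.Graph()
G.add_edge("v1", "v2", weight=0.8)   # strong tie, e.g. members of the same household
G.add_edge("v2", "v3", weight=0.1)   # weak tie, e.g. an occasional contact
print(G["v1"]["v2"]["weight"])       # 0.8, same as G["v2"]["v1"]["weight"]
print(G.has_edge("v1", "v3"))        # False, i.e. no direct social contact
```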
Usually, applications with complex networks consist of two main steps: i) transform the real structure into a complex network, and ii) analyze the model and extract its features or understand its dynamics. One natural phenomenon that has a straightforward connection to a complex network is society. People are connected due to several aspects, such as being members of a family, religious groups, co-workers, members of the same school or faculty, among other social relationships. Therefore CNs have been widely employed for social network analysis [38].

Extended from social interactions, the epidemic spread has also been studied by researchers in the last decades. In this context, one of the best known and widely used epidemic models in infectious diseases is the susceptible-infected-recovered (SIR) model, which is composed of three categories of individuals [4,7]:

• Susceptible: those who are not infected but can change their status to infected if in contact with a sick person, combined with a probability β of contagion.
• Infected: those that have the disease.
• Recovered: usually after some time, a person recovers from the illness and is not able to be infected again due to the immunity process (in this case, this is an assumption of the process). The recovery of infected people occurs with probability γ.

Also, the model can be described as

ds/dt = -β s i,   di/dt = β s i - γ i,   dr/dt = γ i,

where s, i and r represent the ratio of susceptible, infected and recovered people in the population, respectively. Usually, the problem is solved with differential equations; however, agent-based techniques in networks can represent the nature of the spread of viral diseases in a more complex scenario.

If a network is fully connected, meaning that e(v_i, v_j) = 1 for all 0 < i, j ≤ N, the equations above fit the structure perfectly. However, in the real world, not everyone is connected and people only contract the disease if in contact with an infected individual or object. This is why a complex network approximates the dynamics of real viruses and can help us to understand the disease behavior. There are various approaches to represent people and society as networks, named social network analysis. Small-world networks [32] can be used as a good approximation of the social connections. In 2000, Moore [32] emphasized that the use of small-world networks, where the distance between two elements is usually small in comparison to the size of the population, showed a faster spread of the viral disease than classical diffusion methods. The approximation of real social phenomena was first explained by Milgram [29]; the sociologist is the author of the well-known idea that there are up to six people separating any two individuals in the world, which reinforces the importance of analyzing the epidemic spread from a graph view. In [33], the authors used small-world networks to simulate a SIR model; however, they considered that every contact with an infected person resulted in contamination, which is not realistic. Therefore, other researchers improved the model over the years, adding new constraints to approximate the simulation to real scenarios [15].
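For reference, the classical SIR system reconstructed above can be solved numerically as follows; this is a minimal sketch with illustrative β and γ values, not the parameters used later in the paper.

```python
# Minimal SIR sketch: s, i, r are population fractions; beta and gamma are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

sol = solve_ivp(sir, (0, 300), [0.999, 0.001, 0.0], args=(0.3, 0.1),
                t_eval=np.linspace(0, 300, 301))
s, i, r = sol.y
print(f"peak infected fraction: {i.max():.3f} around day {int(i.argmax())}")
```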
The SIR model on networks works as follows: each node represents a person and, the elements are connected according to some criteria and the epidemic propagation happens through an agent-based approach.It starts from a random node, and for each time step nodes with the susceptible state can contract the disease from a linked infected node with a predefined probability.The same idea occurs with the recovered category.After a certain period, a node can recover or can be removed from the system (case of death) according to a certain probability.At the end of the evolution of a SIR model applied to a network, the number of nodes in each SIR category (susceptible, infected and recovered) can be calculated for each unit of time evaluated and then compare these data with real information, for example, the hospital capabilities of the health system.Also, the probability of infection and recovery can be adjusted over time considering social distancing, hygiene, and health conditions. 3 Proposed Model: COmplexVID-19 The proposed model extends the SIR model to a more realistic scenario to achieve a better correlation to the COVID-19 disease, since the model was created specifically for the disease, we named the model as COmplexVID-19.Our strategy is based on a multi-layer network to represent the Brazilian demography and its different characteristics of social relationships.Each layer is composed of a set of groups representing how people interact in a given social context.In the network, a node represents a person and the edges are the social relationships between persons, and they are also the means through which the disease can be transmitted.The virus spreads from an infected node to neighboring nodes at each iteration step (1 step = 1 day), according to a given infection probability.First, we describe how the layers are built based on social data from Brazil. Network Layer Structure Over Brazilian Demography To define the different social relations, the first information needed is the age distribution so that groups such as schools and work can be separated.We consider the Brazilian age distribution in relation to the total population in 2019 [20], details are given on Table 1.This distribution is used to define an age group for each node, which is then used to determine its social activities through the creation of edges on different layers.In this approach, each network-layer represents a kind of social relationship or activity that influences the transmission of the COVID-19.In this way, it is possible to evaluate and understand what is the impact of each social activity in the epidemic propagation.Basically, in this work, a network layer is represented by a set of edges connecting some nodes.The following social activities are considered, composing 6 different layers: • Home: in this layer, all people that live in the same residence are connected. • Work: connects people that work in the same environment/company. • Transport: this layer represents people that eventually take the same vehicle at public transports. • School: represents the social contact of students that belong to the same school class. • Religious activities: connects people of the same group of some religious activity. • Random: this layer represents activities of smaller intensity, such as indirect contact (through objects/surfaces). The first layer represents home interactions and is composed of a set of groups with varying size which are fully connected internally.These groups have no external connections, i.e. 
the network starts with disconnected components representing each family.To create each group, we consider the Brazilian family size distribution for 2010 [19], the year with more detailed information on family sizes from 1 up to 14 members.We consider the probability of a family having sizes from 1 to 10, therefore the probability of a family having 10 persons is the sum of the higher sizes, the details of this distribution are given in Table 1.The first layer is then created following the family size distribution and ensuring that each family has at least 1 adult.Figure 2 (a) shows the structure of such a layer built for a population of n = 100. A large fraction of the population in any country needs to work or practice some kind of economic activity, which also means interacting with other people.Thus, work represents one of the most important factors of social relations, which is also very important in an epidemic scenario.To represent the work activity we propose a generic layer to connect people with ages from 18 to 59 years, i.e. 60% of the total population in the case of Brazil.There is a wide variety of jobs and companies, therefore it is not trivial to create a connection rule that precisely reflects the real world. Here, we consider an average scenario with random groups of sizes around [5,30], uniformly distributed, and internally connected (such as the "home" layer).An example of this layer is shown on Figure 2 (b), using n = 100.Although the nodes of a group are fully connected, the transmission of the virus depends directly on the edge weights, which we discuss in-depth on Section 3.1.1. Collective transports are essential in most cities, however, it is one of the most crowded environments and plays an important role in an epidemic scenario also due to the possibility of geographical spread, as vehicles are constantly moving around.The third layer we propose represents this kind of transports, such as public transports, and includes people that do not possess or use a personal vehicle.In Brazil the number of people using public transport depends on the size of the city, with 64.98% in the capitals and 35.89% in other cities [23], with an average use of around 1.2 hours a day1 .Here we consider the average of the population between the two cases (50%), randomly sampled, to participate in the "transports" layer.Random groups are created with sizes between [10,40], uniformly sampled, and the nodes within each group are fully connected.This variation of sizes is considered to represent cases such as low and high commuting times, and also the differences between vehicle sizes.Other factors such as agglomeration and contact intensity are discussed in Section 3.1.1.This layer is illustrated on Figure 2 (c). Schools are another environment of great risk for epidemic propagation.The proposed layer considers the characteristics of schools from primary to high school and how children interact.We consider that all persons from 0 to 17 years (24% of the Brazilian population) participate in this layer, and the size of the groups, which represents different school classes, varies uniformly between [16, 30] [21].This layer is illustrated on Figure 2 (d). Brazil is a very religious country, in which by 2010 only around 16.2% of the population claimed not to belong to any religion [18].64.6% claimed to be catholic and 22.2% to be protestant, summing up to 86.8% of the total population. 
Here we consider that nearly half of these people (40% of the total population) actively participate in religious activities (weekly). The distribution of religious temple sizes is defined as a Pareto distribution in the interval [10,100]. Taking into account that the wage distribution approximately follows a Pareto distribution, we model real estate predominance according to their capacity. The assumption here is that building costs (for churches, offices, homes, etc.) have a linear relationship to their internal capacity, and thus any given capacity has a power-law relationship with the number of such buildings within a region.

We consider a random layer to represent all kinds of contacts not related to the specific previous social layers. This includes small direct contacts (person-to-person) and indirect contacts (individual - objects - individual) that may happen throughout the week, such as random friend/neighbor meetings, shopping, and other activities that involve surface contacts. For that, 5n new random edges are created, each of which can connect any pair of nodes. On the one hand, this yields an average of 5 random connections to each node. On the other hand, the impact of this layer on the epidemic is smaller than the others, as it represents rapid contacts in comparison to the other activities described, thus its infection probability is smaller. In the following section we discuss the details concerning this aspect, deriving from the edge weights of each layer. In Figure 2 (f) an example of this layer is shown. The overall structure of social interactions in our model can be compared to the statistical analysis in [16]; however, here we introduce a more detailed model of social contacts with specific layers and connection patterns to better fit the particularities of a given country or city.

Infection probabilities

Unlike the traditional SIR model, which consists of a single β term to describe the probability of infection, here we propose a dynamic strategy to better represent the real world and the new COVID-19 disease. The idea is to incorporate important characteristics in the context of epidemic propagation according to each layer. Firstly, for a given layer a fixed probability term is calculated to represent its characteristic of social interaction. For this, we considered 3 local terms: the contact time per week, the average number of people close to each other (agglomeration level), and the total number of people involved in the respective activity. Considering two nodes v_x and v_y, connected at group j of layer i, their edge weight is then defined by

w(v_x, v_y) = (t_i / 168) * (k_i / n_ij),

where t_i represents the average weekly contact time on layer i, k_i is the agglomeration level (average number of nearby people) and n_ij represents the size of the group j in which the nodes participate on layer i. The first fraction represents the contact time normalized by the total time of the week (24 * 7 = 168), and the second fraction represents the proportion of the nearby people relative to the total number of people in that activity group.
The first part of the infection probability equation is multiplied by a β term, which scales the original probability.The β term is then the only parameter to tune the infection rates for the entire network, and the other properties are specific for the studied society, based on its population characteristics and the nature of the activities (layers).Table 2 shows these specific properties that we considered for the Brazilian population, and how the infection probabilities are calculated for each layer.In the table, we have the following information: who or how many people are part of the activity represented by a layer (column "who", discussed in the previous section); contact time according to activity (column "Time of contact"); the average number of people close to each other in each activity (column "Nearest", represents the agglomeration level); the number of connections between people (column "Group size"); the probability of infection (column "Probability"). Dynamics Modeling The proposed model is a variant of the SIR approach where we include new possible states, structural and dynamic mechanisms after the new findings on COVID-19.The traditional SIR model consists of 3 states: Susceptible, Infected, and Recovered.To better represent the intrinsic dynamics of the new epidemic, we considered 7 states according to reported distributions of the clinical spectrum [27,36]: • Susceptible: Traditional case, it means that a person can be infected at any time.This is the initial state of every node.• Infected -asymptomatic: People who do not show any symptoms (30% of the total cases of infection) and remain contagious for up to 18 days (they may recover after 8 days).This is the most dangerous case for the epidemic spreading because the person is not aware of its infection.• Infected -Mild: 55% of the cases, present mild and moderated symptoms with no need for hospitalization, remain contagious for up to 20 days, and may recover after 10 days of infection.• Infected -Severe: 10% of the cases, present strong symptoms, and need hospitalization, remain contagious for up to 25 days.Has a death rate of 15% and may recover after 20 days.• Infected -Critical: Present worst symptoms and remain contagious for up to 25 days, need ICU and Ventilation, have a death rate of 50% and may recover after 21 days.• Recovered: People who went through one of the infection cases and overcame the disease, ceasing to contaminate and supposedly becoming immune.These nodes no longer interact with other nodes anymore and are therefore removed from the network.• Dead: People who went through severe or critical cases and eventually died.These nodes are also removed from the network. 
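To illustrate how the layer-specific term and the global β factor combine into a per-edge infection probability (the quantities summarized in Table 2), consider the following sketch; the numerical values are illustrative assumptions, except that β = 0.3 and roughly 3 hours of daily home contact are mentioned elsewhere in the text.

```python
# Hedged sketch of the per-edge infection probability: beta * (t_i / 168) * (k_i / n_ij).
# The example parameters are illustrative; the paper's layer values are given in Table 2.
def infection_probability(beta, weekly_hours, nearby_people, group_size):
    """Per-edge infection probability for group j of layer i."""
    return beta * (weekly_hours / 168.0) * (nearby_people / group_size)

# a 'home' edge: about 3 h of close contact per day, everyone nearby in a family of 4
p_home = infection_probability(beta=0.3, weekly_hours=21, nearby_people=4, group_size=4)
# a 'random' edge: brief direct or surface-mediated contact during the week
p_random = infection_probability(beta=0.3, weekly_hours=1, nearby_people=1, group_size=2)
print(f"home edge: {p_home:.4f}   random edge: {p_random:.6f}")
```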
Estimates for the proportion of asymptomatic cases vary from 18% (95% confidence, [15.5, 20.2%]) [31] to 34% (95% confidence, [8.3, 58.3%]) [17].Considering the confidence intervals, here we roughly approximate it to an average of 30% of the total number of infected cases.However, it is very difficult to study asymptomatic cases due to several reasons, such as the lack of available tests and the difficulty in identifying potential cases, which would include every person who had contact with known symptomatic cases.Some studies indicate that asymptomatic cases may remain contagious for up to 25 days, with an incubation period of 19 days [6], but the viral load may be smaller at the end of the infection.Here we take an optimistic approach considering that they may recover (become immune and cease to contaminate) uniformly after 8 days of infection, up to around 18 days.As for the recovered nodes, we are considering that people become immune or at least acquire a long-term resistance to the virus, up to a maximum of 300 days (limit of our simulations).However, this should be taken cautiously as these properties are not yet fully understood [26]. Dynamic Evolution The infection grows through the contact (edges) between infected and susceptible nodes, and the probability of being infected is the edge weight.If infection occurs, then one of the 4 infection cases are chosen based on the probability described above (30%, 55%, 10% and 5%).This distribution plays an important role in the structure and dynamics of the network.The node structure of asymptomatic cases does not change during the simulation, except for the time it takes to cease contamination and recover.It means that as these persons are not aware of their contamination, they will remain acting normally on the network (according to the active layers and edge weights).Their contagious time varies from 1 to 18 days after infection. Concerning the other cases (mild, severe, and critical), we consider the incubation time of the virus, the recovery time, the contagion time, the death rates of each case, and the usual action taken by the infected person or health professionals at hospitals.Various works [5,27,28] point out that the average incubation period of COVID-19 is around 5 days, but some cases may take much less or more time.The official WHO report [40] states that the average incubation time is around 5 to 6 days, with cases up to 14 days.The results in [27] show that the average shape of the incubation time follows a log-normal distribution (Weibull distribution) with an average of 6.4 days and a standard deviation of 2.3 days.In this context, we consider the day when an infected person begins to show symptoms by randomly sampling from this distribution (1000 repetitions), with cases varying from 2 to 14 days. For mild cases, the nodes are isolated at home, maintaining the connections of the first layer, and then only 20% of the cases are diagnosed.Considering the ratio of diagnosed cases, patients who are asymptomatic or with mild symptoms of COVID-19 may not seek health care, which leads to the underestimation of the burden of COVID-19 [25].Moreover, our diagnosis rule is also based on the fact that ongoing tests in Brazil are increasing more slowly than in most European countries and the USA (tests are being performed mostly on people that need hospitalization).If a given case is severe or critical, the patient goes to a hospital and is fully isolated, i.e. 
we remove all of its connections.This is a rather optimistic assumption, considering that these patients still may infect the hospital staff.Concerning the time that patients usually stay at hospitalization/ICU, the works [10,44] points to an average of 14 days for all cases.For standard hospitalization, we considered a minimum of 6 days and a maximum of 16 days of stay, and for the ICU/Ventilation, a minimum of 7 and a maximum of 17 days of stay.The time of each case will depend on the day the symptoms start and the day of recovering/death.Figure 3 illustrates all the infected states and mechanisms described here.This configuration results in an overall lethality of 4%.It is important to stress that here we consider a maximum of 25 days of infection time, which is the time frame based on most studies we have seen so far in the literature.We are still at the beginning of the pandemic and a better characterization of the long-term impact is very difficult.Nonetheless, the available information allows to represent the most obvious features of the Sars-CoV-2 virus and to evaluate its main impacts on society.To simulate the reduction or increase of social distancing/quarantine, we remove/include some layers of the network, or change their edge weights.Similarly to the approach on [16] to improve home contact when in quarantine, we increase the home layer edge weights by 20% for each removed layer.To balance that we considered a smaller number of hours of contact in the base calculation for the home layer (3 hours a day), also taking into consideration that this layer has full contact between people of the same family.When the home contacts are increased according to our approach of layer removal, the time/intensity of contacts may increase up to its double. Results For each experiment with the proposed model, we consider the average and standard deviation (error) of 100 random repetitions to extract statistics of infection, death, and hospitalization time.Due to the random nature of these networks, it is possible that extreme cases occur within the repetitions, i.e. when the infection starts at a node that is not capable of further propagation, leading the epidemic to end at few iterations.Considering the real data we know that this is not the case, at least not for Brazil, therefore we manually remove these networks and they are not considered for the average/error calculations.It is important to notice, however, that this rarely happens, in all our experiments we noticed a maximum of 4 networks of this kind.Due to time and hardware constraints, our simulation considers 100,000 nodes, and the results need to be scaled up by a factor of 57 to match the Brazilian population statistics.This factor was empirically found by approximating the model results in the number of reported cases in Brazil.It is important to stress that for better statistics it should be considered the largest possible number of nodes to represent a population, i.e. the ideal case would be n = total country/city population.However, the computational cost of the simulation grows directly proportional to the number of nodes and edges of the network, and considering the critical situation of the moment at hand, 100,000 nodes are our limit to promptly present results of the epidemic dynamics. In the experiments when varying the social distancing, the same network is considered in each iteration, i.e. 
comparisons of including/excluding layers are made in the same random network.We considered the epidemic began on February 26, which is the day the first confirmed case was officially reported.It is important to emphasize that we made various optimistic assumptions throughout the model construction and simulation, such as to consider that people are behaving with more caution by reducing direct contact, wearing masks, and doing proper home/hospital isolation when infected.It is also important to notice that we are not considering the number of available ICU/regular hospitalization beds for the death count, i.e. all the critical and severe cases are effectively treated.It is not trivial to estimate the direct impact of these numbers on the epidemic, however, this is an essential factor that directly impacts the number of deaths.Here we focus on the impacts of different actions on the overall epidemic picture, such as the increase and reduction of cases, deaths, and occupied beds in hospitals. The social network starts normally, with all its layers and the original infection probabilities.The infection starts at a node with the closest degree to the average network degree and propagates at iterations of 1 day (up to 300 days).We consider an optimistic scenario, in which people are aware of the virus since the beginning, thus the initial infection probability is β = 0.3.This represents a natural social distancing, a reduction of direct contacts that could cause infection (hugs, kisses, and handshakes), and also precautions when sneezing, coughing, etc.We empirically found that this initial value of β yields results with a higher correlation to the Brazilian pandemic.A moderated quarantine is applied after 27 days, representing the isolation measures applied on March 24 by most Brazilian states, such as São Paulo [1].To simulate this quarantine we remove the layers of religious activities and schools and reduce the contacts on transports and work down to 30% of its initial value, i.e. β = 0.09.The remaining activities on these layers represent services that could not be stopped, such as essential services, activities that are kept taking higher precautionary measures, and also those who disrespect the quarantine. Comparison to real data We compare the output of the model in the first 83 days with real data available from the Brazilian epidemic (up to May 18) [24,34,42,43].The model achieves a significant overall similarity within its standard deviation.The greatest difference in the number of diagnosed cases at the last 10 days may be related to the increase in the number of tests being performed in Brazil, or yet, the constant decrease of isolation levels in the country (below 50% for most days of the past month) [22].We considered here a fixed isolation level around what was observed in the first days after the government decrees in Brazil, but data in ref. shows that these levels are constantly changing.Therefore, the number of diagnosed cases and deaths for the remaining simulation may be greater than the reported on this paper (see the "keep isolation" scenario in the next section). 
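As an illustration (not the authors' actual code), the layer-based quarantine intervention described in the previous section — switching off the schools and religious-activities layers, scaling the work and transport weights down to 30% of their initial value, and intensifying home contacts by 20% for each removed layer — can be sketched as follows. The layer names and the dictionary-based data structure are assumptions made only for this example.

```python
# Illustrative sketch: applying the moderate-quarantine intervention to a
# multi-layer contact network stored as {layer_name: {(u, v): weight}}.
# Layer names follow the paper's description; the data layout is assumed.

def apply_quarantine(layers, removed=("schools", "religious"),
                     reduced=("work", "transport"), reduction=0.30,
                     home_boost_per_removed=0.20):
    """Return a new layer dictionary with the quarantine measures applied."""
    new_layers = {}
    for name, edges in layers.items():
        if name in removed:
            continue  # layer switched off entirely
        if name in reduced:
            # e.g. the effective contact probability drops to 30% of its value
            new_layers[name] = {e: w * reduction for e, w in edges.items()}
        else:
            new_layers[name] = dict(edges)
    # home contacts intensify by 20% per removed layer; weights are
    # probabilities, so they are capped at 1.0
    boost = 1.0 + home_boost_per_removed * len(removed)
    if "home" in new_layers:
        new_layers["home"] = {e: min(1.0, w * boost)
                              for e, w in new_layers["home"].items()}
    return new_layers
```

In the simulation described above, such a call would correspond to the day-27 intervention, with the global infection term reduced from β = 0.3 to β = 0.09 on the work and transport layers.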
Concerning the daily death toll, the average number produced by the proposed model is greater than the official numbers. This is somewhat expected, considering that underdetection rates may be higher given the relatively small number of tests being performed. To better understand this, we analyzed the number of deaths in Brazil from January 1 to April 30, comparing 2019 and 2020; the results are shown in Figure 5. It is possible to observe a clear increasing pattern after February 26, which is the day of the first officially confirmed case of COVID-19 in Brazil. This indicates that the real death toll of the disease may be significantly greater than the official numbers.

Future actions and their impacts

After the initial epidemic phase, we consider 4 possible actions that can be taken after 90 days (May 26): a) Do nothing more, maintaining the current isolation levels; b) Stop isolation, returning activities to normal (initial network layers and weights); c) Return only work activities, restoring the initial probability of that layer; or d) Increase isolation, stopping the remaining activities in the work and transports layers (only the home and random layers remain). Firstly, we analyze the impacts on the number of daily new cases and deaths; results are shown in Figure 6. As previously mentioned, at the start of the COVID-19 pandemic Brazil was performing fewer tests by an order of magnitude in comparison to other countries with similar epidemic numbers; therefore we considered as diagnosed only the severe and critical cases, which are the main candidates for testing, and 20% of the mild cases. The total infection ratio is discussed later. Keeping the current isolation levels, the peak of daily new cases occurs around 100 days after the first case (June 5), with around 11,000 confirmed cases. After 202 days (September 15), the average number of daily cases is around 500, and it goes below 100 daily cases after around 237 days (October 19). The peak of daily new deaths occurs around 118 days (June 23), with an average of 1,900 deaths, and goes below 100 new occurrences after around 210 days (September 24). It is important to stress that this is a hypothetical scenario in which the isolation level remains the same from day 27 to 300, which is hardly true in the real world, where it is constantly changing [22]. The total numbers after the last day (300) account for 946,830 (±10,507) diagnosed cases and 149,438 (±3,124) deaths. When we consider the return of all activities after 90 days, the number of cases and deaths grows significantly in an exponential fashion. The peak occurs at 108 days (June 13) with an average of 40,937 (±11,010) new cases, and at 122 days (June 27) with an average of 6,484 (±1,739) new
deaths.Although the peak of cases/deaths and the decrease of the numbers occur early, in this case, the final result is critically worse, with a total of 1,340,367 (± 18,513) diagnosed cases and 212,105 (± 4,359) deaths.Here it is important to notice that we considered that all the activities return after 90 days and remain fully operational until the last day (300).Moreover, we do not account for the overloading of hospitals, which directly impacts the final death count.Therefore, the number of deaths may be considerably higher.Another possible scenario is the return of only the work layer, keeping reduced transports and no schools and religious activities, however, the pattern is similar to returning all activities, considering the growth time, peak, and decay time.The final numbers in this case are 1,253,119 (± 26,009) diagnosed cases and 197,756 (± 5,693) deaths. If the isolation is strictly increased after 90 days (lockdown), the infection and death counts drop significantly in comparison to the other approaches.Moreover, the recovering time is much faster, as daily new cases stop earlier than the other scenarios.The peak of daily new cases happens around day 93 (June 1), and of daily new deaths around day 106 (June 11).The total numbers of diagnosed cases and deaths after day 300 are, respectively, 552,855 (± 195,802) and 87,059 (± 30,871). Considering the hospitalization time described in the scheme of Figure 3 it is possible to estimate the number of occupied beds for regular hospitalization (severe cases) and ICU/Ventilation (critical cases).We also show the difference between the cumulative growth of diagnosed and undiagnosed cases and recovered cases.The same approach as the previous experiment is considered (except for "return work") with 3 possible actions after 90 days (May 26), results are shown in Figure 7.The overall pattern of results is similar to the previously observed for the number of diagnosed cases and deaths.It is possible to notice that the number of undiagnosed cases is much higher than the diagnosed cases.This reflects the number of asymptomatic cases and the lack of tests for mild cases.In the worst scenario, which means ending the isolation, the total infected number may go above 5 million cases.The recovered rate is directly proportional to the infected rate, as one needs to be infected to either die or become resistant to the disease.If the infected rate is high, so is the recovered rate, e.g. the scenarios of keeping or ending isolation, and a high recovered rate also helps in mitigating the epidemic propagation (natural immunization).However, increasing isolation decreases the propagation much faster than natural immunization, with a considerably smaller death toll.It is also possible to observe the differences at the start of effective recovering, i.e. when the recovered rate surpasses the infected rates, this is due to the early increase in isolation levels. The peak of hospitalization occupancy occurs around a week before the death peaks, in any scenario.In this case, ICUs are very important because critical patients are treated there, which represents the cases of higher death rates.Within the "end isolation" setting, patients may occupy up to an average of 215,285 (± 48,682) regular beds and 109,520 (± 24,647) ICU beds.These numbers are by far greater than entire Brazil's capacity, as publicly-available and private ICU beds sum up to 45,848 [3].Even considering the better scenario, i.e. 
the lower bound of the standard deviation, the number of occupied ICU beds may reach around 86,000, which is also critical for Brazil's capacity (almost 2 times it's capacity).In this setting of "end isolation", the healthcare system would surely collapse. When the isolation levels are kept, the numbers are significantly lower.However, the occupancy of 66,110 (± 16,759) regular beds and 33,470 (± 7,926) ICU beds is still critical for the Brazilian health system.Considering the creation of new provisional ICU units and good patient logistics, the situation may still remain under control during the peak of hospitalization occupancy.However, the results show that the hospital occupancy is prolonged considerably in this scenario, and they may stay functioning around their maximum capacity for up to a month (with an average of occupied ICU beds above 30,000).When increasing the isolation the peak of occupied beds is smaller, with an average of 63,226 (± 20,682) regular beds and 31,816 (± 10,592) ICU beds.Moreover, the shape of the curve throughout the days is different and the final numbers are considerably smaller.The peak also occurs around a week earlier and then decreases much faster.This scenario would be preferable as it has much more chances of not overloading the Brazilian healthcare system, relieving the hospital occupancy considerably faster and, therefore, contributing to the reduction of the number of deaths. Conclusion This work presents a new approach for the modeling of the COVID-19 epidemic dynamics based on multi-layer complex networks.Each node represents a person, and edges are social interactions divided into 6 layers: home, work, transports, schools, religions, and random relations.Each layer has its own characteristics based on how people usually interact in that activity.The propagation is performed using an agent-based technique, a modification of the SIR model, where weights represent the infection probability that varies depending on the layers and the groups the node interacts, scaled by a β term that controls the chances of infection.The network structure is built based on demographic statistics of a given country, region, or city, and the propagation simulation is performed at time iterations, that represent days.Here, we studied in depth the case of the Brazilian epidemic considering its population properties and also specific events, such as when the first isolation measures were taken, and the impacts of future actions. 
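To make the propagation mechanism summarized above concrete, a minimal sketch of one daily update step is given below. The node states, layer structure, and function names are illustrative assumptions and not the authors' implementation; only the severity proportions (30% asymptomatic, 55% mild, 10% severe, 5% critical) and the β-scaled edge probabilities follow the text.

```python
import random

# Illustrative sketch of one daily step of the modified, agent-based SIR model
# described above.  Data structures are simplified assumptions.

SEVERITY = [("asymptomatic", 0.30), ("mild", 0.55), ("severe", 0.10), ("critical", 0.05)]

def choose_severity():
    labels, probs = zip(*SEVERITY)
    return random.choices(labels, weights=probs, k=1)[0]

def daily_step(layers, state, severity, beta):
    """layers: {name: {(u, v): weight}}; state: {node: 'S'/'I'/'R'};
    severity: {node: label}, filled in when a node becomes infected."""
    newly_infected = set()
    for edges in layers.values():
        for (u, v), w in edges.items():
            for src, dst in ((u, v), (v, u)):
                if state.get(src) == "I" and state.get(dst) == "S":
                    if random.random() < beta * w:
                        newly_infected.add(dst)
    for node in newly_infected:
        state[node] = "I"
        # severity then drives isolation, hospitalization, recovery and death timing
        severity[node] = choose_severity()
    return state, severity
```

One such step is executed per simulated day, with the layer dictionary and β modified whenever isolation measures change.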
Brazil is a large and populated country with a wide variety of geographical location types, climates, and it also has a lengthy border with other countries to the west.It is a challenging setting for any epidemiological study.Here we consider an average over all the country population, as we adjust the model output to match some statistics of the epidemic official reports.Brazil is performing fewer tests in comparison to other countries at the same epidemic scale, however, it is known that testing for infection is always limited, either due to the low number of tests or to the velocity of infections which the testing procedure cannot keep up to.We then considered that only hospitalization cases and 20% of the mild cases are diagnosed.Asymptomatic cases are not diagnosed and keep acting normally in the network, considering the active layers.Regarding the isolation of infected nodes, we take some optimistic assumptions: Mild cases (even those not diagnosed) are aware of its symptoms and isolate themselves at home.Severe and critical cases are eventually hospitalized, and then fully isolated from the network (removal of all its edges). Under the described scenario, the network starts with all its layers and β = 0.3, representing that people are aware of the virus since the beginning (even before isolation measures).After 27 days of the first confirmed case, the first isolation measures are taken where schools and religious activities are stopped and work and transports keep functioning at 30% of the initial scale (achieved further reducing the β term).Different actions are then considered after 90 days of the first case: keep the current isolation levels, increase isolation, end isolation returning all activities to 100%, or returning only the work activities.The results show that keeping approximately the current isolation levels results in a prolonged propagation, as we are near the estimated peak (around June 5) with an average of 11,000 daily new cases and 1900 daily new deaths, and an average of 946,830 diagnosed cases (up to 3,6 million infected) and 149,438 deaths until the end of the year.In this scenario, hospitals may exceed its maximum capacity around June 11, but the efficient implementation of new ICU beds and good logistic management of patients may still keep the situation under control.However, this is a very optimistic assumption, considering that our definition of "keep isolation" considers social isolation above 50% as registered at the beginning of the Brazilian quarantine [22].The social isolation levels in Brazil are constantly decreasing even when we are still in a state of moderated quarantine, and it is possible to observe average isolation below 50% in most days of the past month (middle of April to middle of May 2020).Moreover, the results show that this prolonged scenario may cause hospitals to keep functioning at maximum capacity for up to a month.When analyzing other possible scenarios the situation may be considerably different.Relaxing isolation measures from now on causes an abrupt increase in the daily growth of cases and deaths, up to 5 times higher in comparison to the current isolation levels.Even if only work activities return while schools, religion, and transport activities remain inactive/reduced, the impact is very similar to returning all the activities, with a possible number of above 1,34 million diagnosed cases (up to 5,2 million infected), and around 212,105 deaths until the end of the year.This is, again, a very optimistic assumption as we do 
not consider the hospital overflow to calculate the death toll.Considering this aspect, ICU beds may be fully occupied in early June, and around the middle of the month their demand may reach up to 134,000 beds, which is around 3 times higher than the entire country's capacity.The other alternative, which is the increase of isolation levels (lockdown), appears to be the only alternative to stop the healthcare system from entering a very critical situation.In this scenario, the growth in the number of daily cases and deaths would be mitigated, and faster.As we are near the peak of new cases at current isolation levels, estimated to be between the beginning and middle of June, increasing the isolation levels does not cause a significant impact on when the peak occurs or its magnitude.However, the disease spreading and the occurrences of new cases decrease much faster in this scenario in comparison to any other scenario studied here, with a difference of months.Moreover, the final numbers are considerably smaller, with an average of 552,855 diagnosed cases (up to 2.1 million infected) 87,059 deaths until the end of the year. Although the proposed method includes various demographic information for the network construction, and an improved SIR approach to COVID-19, it still does not cover all factors that impact the epidemic propagation.As future works, one may consider more information such as the correlation between the age distribution within the social organization and the clinical spectrum of the 4 infection types (e.g.severe and critical cases are mostly composed of risk groups).Another possible improvement consists of increasing n (number of nodes of the networks), e.g. using a value near the real population of the studied society, which we avoided here due to hardware and time constraints (graph processing is costly).Another important point regarding the obtained results is related to the "keep isolation" scenario, which may be underestimated as we take various optimistic assumptions and also consider a fixed isolation level based on previously observed data, while most recent data shows that these levels are decreasing [22].Therefore, during the network evolution, a possible improvement is the use of dynamic isolation levels to better represent reality.It is also possible to consider various scenarios for future actions, such as 2 or more measures of increasing/reducing isolation.This may allow the discovering of new epidemic waves if social activities return too soon after the isolation period, such as what happened in 1918 with the Spanish flu. Figure 2 : Figure 2: Each social layer of the proposed multi-layer network.The nodes are people and do not change across layers, and the weighted connections represent social contact which may lead to infection according to the edge weight (probability value between [0, 1). Figure 3 : Figure 3: Configuration considered for the dynamic evolution of each type of infected node in the proposed SIR model.Each overlapping region is treated as a combined probability distribution that defines when one phase ends for the other to begin. 
Figure 4 : Figure 4: Comparison between the proposed model output for the first 83 days (up to May 18) to 4 different data sources of the Brazilian COVID-19 numbers: EU Open Data Portal (EUODP)[34], Worldometer [43], Johns Hopkins University[24], and World Health Organization (WHO)[42].The dotted lines represent the standard deviation, in the case of the real data the curve is the average over a 5-day window, and the solid lines the real raw data.The greatest average number of deaths produced by the proposed model may be related to underdetection (See Figure5). Figure 5 : Figure5: To understand the impact of the COVID-19 underdetection in Brazil, we considered the official death records of 2019 and 2020 at the same period (January 1 to April 30)[2].Then the total death difference is compared to the COVID-19 records of the WHO[42] and the Brazilian government[2] data.The largest difference that appears right after the first confirmed case may indicate a significant underdetection of COVID-19 cases. Figure 6 : Figure 6: Daily statistics in 4 possible scenarios after 90 days (May 26): Keep isolation levels; Increase isolation (stop work and public transports); End isolation (returns work and transport to normal and return school and religion); and return work (only the work layer is returned to normal). Figure 7 : Figure 7: Total number of infected and recovered cases and evolution of hospital beds utilization in 3 possible scenarios after 90 days (May 26): (a) Keep isolation level (no schools and religion, reduced work and transports), (b) End isolation (return schools, religion, work and transport to normal) or (c) Increase isolation (stop work and public transports)). Table 2 : Specific brazilian properties considered to compose each social layer and calculate their probability of infection, i.e. the edge weights of each layer.
The Effect of Cream and Gel Vehicles on the Percutaneous Absorption and Skin Retention of a New Eugenol Derivative With Antioxidant Activity The effect of cream and gel vehicles containing clove water on skin permeability was compared for a new eugenol derivative (eugenyl dichloroacetate—EDChA) with antioxidant activity. In vitro permeation experiments were conducted in a Franz cell with porcine skin. The cumulative mass and skin accumulation of EDChA were investigated and compared. The antioxidative capacity of the studied vehicles was determined by using the diphenylpicrylhydrazyl (DPPH) free radical reduction method. The antioxidant activity (evaluated with DPPH, ABTS, and the Folin–Ciocalteu methods) of the fluid that penetrated through the pig skin and of the fluid obtained after the skin extraction, were also determined. For comparison, eugenol was also tested. The results of this work could contribute to the development of vehicles with antioxidant potential estimated after 24 h of conducting the experiment, which indicates long-term protection against reactive oxygen species (ROS) in the deeper layers of the skin. The waste water from the clove buds steam distillation -contains several valuable biologically active compounds, and its use is environmentally friendly. We observed that gel vehicles were the best enhancer of skin permeation for both eugenol and its derivative. In most cases, -similar cumulative masses of eugenol and its ester were found in the acceptor fluid. The accumulation of EDChA was higher for cream vehicles in relation to the parent eugenol when applied onto the skin. The greatest amounts of eugenol were accumulated in the skin when these compounds were used in gel vehicles. INTRODUCTION Transdermal active substances are a convenient route for administration as they allow minimization of the first-pass metabolism, avoiding gastrointestinal degradation, and providing a controlled and prolonged release of active substances into the systemic circulation. The stratum corneum (SC) protects against external toxins and water loss but also acts as a barrier for the active substance permeation into the skin, which is highly dependent on the lipophilicity and molecular size of these substances. Overcoming the lipophilic barrier, which is the skin, is possible for eugenol and the non-polar, new ester of eugenol, which have molecular weights of <600 Da (Makuch et al., 2020;Ossowicz et al., 2020;Nowak et al., 2021). Eugenol (4-allyl-2-methoxyphenol) and its ester were derivative characterized by good lipophilicity as determined by the shake-flask method: log p eugenol 2.20 ± 0.001, log p EDChA 2.65 ± 0.001. Eugenol is a terpene compound classified as an absorption promoter that is characterized by high antibacterial as well as antioxidant activity. Terpenes, which are a group of substances that are commonly considered as safe from the dermal toxicity point of view, are often used in cosmetic vehicles applied to the outer layer of the skin (Makuch et al., 2020). Eugenol has a high potential for application in transdermal systems, it is a cheap and easily available compound that has numerous applications in medicine (Rachoi et al., 2011;Jaganathan et al., 2011;Deepak et al., 2015;Bezerra et al., 2017;Hussain et al., 2011). 
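As a rough, purely illustrative complement to the lipophilicity and molecular-size considerations above (and not part of the study's methodology), the reported log P values can be plugged into an empirical relationship such as the Potts–Guy equation to gauge expected passive skin permeability. The molecular weights used below are our own estimates from the molecular formulas and should be treated as assumptions.

```python
# Illustrative only: estimating the permeability coefficient Kp (cm/h) with the
# Potts-Guy empirical equation, log10(Kp) ≈ -2.74 + 0.71*logP - 0.0061*MW.
# log P values are those reported in the text; molecular weights are estimates
# (eugenol C10H12O2 ≈ 164.2 g/mol; EDChA, the dichloroacetate ester, ≈ 275.1 g/mol).

def potts_guy_log_kp(log_p, mw):
    return -2.74 + 0.71 * log_p - 0.0061 * mw

for name, log_p, mw in [("eugenol", 2.20, 164.2), ("EDChA", 2.65, 275.1)]:
    print(f"{name}: log Kp ≈ {potts_guy_log_kp(log_p, mw):.2f} (Kp in cm/h)")
```

Such estimates only indicate that both molecules fall within the lipophilicity and size window usually considered favorable for passive permeation; the in vitro experiments described below provide the actual measurements.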
The antioxidant effect of eugenol and its esters is based on the prevention of free radical formation, oxidative damage repair, and elimination of the damaged particles (Yogalakshmi et al., 2010;Nagababu et al., 2010, Gülçin, 2011Horvathova et al., 2014;Peŕez-Roseś et al., 2016;Bezerra et al., 2017;Janus et al., 2020). Cloves were traditionally used in medical applications, due to their many health benefits, and they are rich in secondary metabolites known for antioxidant activity and biological activity (Prashar et al., 2006;Dwivedil et al., 2011;Arung et al., 2011;Legards et al., 2014;Han and Parker, 2017;Pavithra et al., 2019). Wenkers and Lippold (1999) investigated the skin penetration of 10 nonsteroidal anti-inflammatory drugs (NSAIDs), after the application in a lipophilic vehicle light mineral oil. The results of this research showed that the skin permeability of NSAIDs is a function of the hydrophilicity of the drugs, i.e., of their partition coefficients between phosphate buffer saline (pH 7.4) and the lipophilic vehicle. The skin permeabilities generally increase with increasing hydrophilicity of the NSAIDs. Wenkers and Lippold suggested that the viable epidermis provides a decisive barrier to the penetration of NSAIDs from a lipophilic vehicle, based on correlations between skin permeability and octanol-vehicle and PBS-vehicle partition coefficients Wenkers and Lippold, 1999). In vitro permeation studies of propranolol hydrochloride were (PH) performed using rat abdominal skin as the permeating membrane in a Franz diffusion cell. The oral bioavailability of PH is poor due to a high first pass metabolism. The patches containing PH of were formulated using a combination of polymers and propylene glycol (polyvinylpyrrolidone, hydroxypropylmethycellulose, and ethyl cellulose) as a plasticizer. The result indicated that the maximum release was obtained with a 2% solution of ethyl cellulose. An optimized batch was evaluated for permeation enhancement through rat skin using the natural permeation enhancer eugenol, and they concluded that permeation enhancement through eugenol was comparable to the commercially available permeation enhancer dimethyl sulfoxide 1%. All the films were found to be stable at 37°C and 45°C with respect to their physical parameters and drug content (Nirav and Rajan, 2011). There is no doubt that essential oils and their components are able to permeate human skin. But information is rare regarding the percutaneous absorption of essential oils in detail. A study investigated the in vitro skin permeation of monoterpenes and phenylpropanoids applied in pure rose oil and in the form of neat single substances. Studies have shown that the application form has an exceeding influence on the skin permeation behavior of the compounds. For substances applied in rose oil, a clear relationship between their lipophilic character, chemical structure, and skin permeation was confirmed. Regarding the P app -values, the substances are ranked in the following order: monoterpene hydrocarbons < monoterpene alcohols < monoterpene ketons < phenylpropanoids. In contrast, for neat single substances, there were no relationships between their lipophilic characters, structures, and skin permeation. Except for α-pinene and isomenthone, the P app -values of all other substances were several times higher when applied in pure native rose oil compared with their neat form. 
This suggests that co-operative interactions between essential oil components may promote skin permeation behavior of essential oils and their components (Schmitt et al., 2010). Oxidative stress is defined as "an imbalance between oxidants and antioxidants in favor of the oxidants, leading to a disruption of redox signaling and control and/or molecular damage" (Sies 2020). Oxidative stress arises when the production of reactive oxygen species overwhelms the intrinsic anti-oxidant defenses (Burton and Jauniaux, 2011) and accumulates in the body by endogenous and exogenous mechanisms (García-Sánchez et al.,). In the human body, the oxidative-antioxidative balance is crucial as it maintains the integrity and functionality of the cell membrane (Kim et al., 2020). Reactive oxygen species can cause a lot of potential damage and are continuously produced by the body's normal use of oxygen, such as in respiration and certain cell mediated immune functions. ROS, which include free radicals such as superoxide anion radicals, hydroxyl radicals (OH . ), and non-free-radical species, such as hydrogen peroxide (H 2 O 2 ) and singlet oxygen ( 1 O 2 ), are various forms of activated oxygen [Gulcin et al., 2012]. It is widely recognized that reactive oxygen species contribute to the aging of the skin, the outer barrier of our body, any tissues inside our organism could also be exposed to ROS, both endogenous and exogenous. These compounds, which cause oxidative stress, are responsible for oxidative modifications of polyunsaturated fatty acids and nucleic acids (and as a consequence, for structural changes in cell membranes and DNA damage) (Rincheval et al., 2012;Pisoschi and Pop, 2015;Dam et al., 2019;Wadhwa et al., 2019;Liguori et al., 2018). In our previous research, we presented the eugenol derivative (eugenyl dichloroacetate-EDChA) made by eugenol esterification with dichloroacetic acid, that can permeate through porcine skin from ethanol (Makuch et al., 2020). Ethanol is a promoters of trans epidermal transport, which has an effect on the effectiveness of the penetration of eugenol and EDChA into the skin. This solvent was able to reversibly transform the structure of the laminar system of the lipid matrix of the epidermis, and thus they facilitated the accelerate the diffusion of particles by the stratum corneum. In addition, ethanol can disrupt the function of the skin barrier by affecting the cells between the cellular cement. This results in loosening the lipid layer and increasing its fluidity and, consequently, increases the degree of diffusion of these compounds (Llewelyn et al., 2019;Ossowicz et al., 2020;Nowak et al., 2021). The selectivity of the conversion to EDChA as well as the conversion of eugenol were determined using gas chromatography (GC), while the molar mass of the obtained product was con-firmed based on the mass spectrum (GC-MS). The most important band associated with the presence of an ester group in the structure of the obtained ester was identified using infrared spectroscopy. The unequivocal structure of the new eugenol ester derivative was confirmed with NMR. The antioxidative activity of eugenol and its ester was evaluated by the spectrophotometric method, whereas the values of the n-octanol/water partition coefficient (P) were used to evaluate the lipophilicity (Makuch et al., 2020). In this study, we compared the effect of a cream and gel, as vehicles, on the skin permeation of eugenol and the new eugenol ester derivative (EDChA). 
The reason for the cream and gel vehicle application in this study was to evaluate these vehicles on the permeability of the eugenol and EDChA. The results of this work can contribute to the acquisition of knowledge regarding vehicles with antioxidant potential, emphasizing that the water phases are waste from the process of cloves steam distillation and are not reused. The ecological aspect of our research also has importance. The use of waste water from the clove bud steam distillation process is environmentally friendly and allows us to apply the waste, containing valuable biologically active compounds. These compounds, due to their mechanism of action, can have a beneficial effect on the balance between oxidants and antioxidants in the body, minimizing the effects of oxidative stress (Ivy and Payne, 1991;Dahham et al., 2015;Sarpietro et al., 2015;Giovannini et al., 2019;Razafindrakoto et al., 2020). The use of cream and gel vehicles for this type of research is, therefore, justified; moreover, the pH value of the acceptor phase in permeation in vitro tests was set at 7.4 for simulation of the skin surface Nowak et al., 2021). Steam Distillation of Plant Materials and Identification of Water Fractions Obtained During the Steam Distillation by Gas Chromatography-Mass Spectrum Method First of all, steam distillation of cloves (originating in Madagascar and Indonesia) was carried out with the use of Deryng apparatus. A glass flask with a capacity of 1,000 cm 3 was filled with 100 g of suitable plant material and 675 g of distilled water, and then the Deryng apparatus was applied to the glass flask. The content of the distillation flask was kept boiling. The process of distillation of plant raw materials was carried out for 5 h, and after the end of the process, the condensate collected in the receivers was separated (using a separator) to obtain the upper aqueous fractions (the clove waters obtained after steam distillation of cloves from Indonesia (water I) and Madagascar (water M). To identify the substances contained in aqueous fractions, first of all, the upper fractions from the separatory funnel or separator divider were washed in 20 cm 3 n-hexane. Then the obtained samples were analyzed by gas chromatography coupled with mass spectrometry (GC-MS). Analyses were carried out with the TRACE GC series with the VOYAGER mass detector with a DB5 chromatographic column 30 m long, 0.25 μm in diameter and 0.5 μm film thickness of the stationary phase, using helium as a carrier gas with a flow rate of 1.0 ml/min. The temperature of the dispenser was 240°C, while the volume of the dosed sample was 1 µl. The following temperature gradient was used: 50°C for 1 min, followed by a temperature rise of 8°C/min to 260°C for 5 min, followed by cooling to 50°C. The qualitative analysis was conducted based on MS spectra. The percentage of a particular compound was determined on the assumption that the sum of all identified compounds is 100%. In Vitro Measurement of the Antioxidant Capacity of Clove Water, Eugenol and Its New Ester Antioxidative activity of ethanolic solutions of eugenol and its new derivative (EDChA) were determined using spectrophotometric method based on DPPH radical reduction as described elsewhere (Brand-Williams et al., 1995;Makuch et al., 2020). The absorbance at the wavelength of 517 nm was measured using Spectroquant Pharo 300 (Merck, Germany). 
The antioxidant activity of eugenol and its ester was measured as follows: to 2,850 µl of DPPH ethanolic solution (absorbance at 517 nm 1.00 ± 0.02) 150 µl of the sample (containing one of the tested compound at a concentration of 1.000 w/w) was added. The concentration of the analyzed sample in DPPH ethanolic solution was 0.050% w/w. The tube was wrapped in aluminum foil, sealed with a stopper and incubated for 10 min at room temperature. Each sample was prepared in triplicate. After incubation, spectrophotometric measurements were carried out at 517 nm. Solvent applied to obtain extracts was used as reference. The results were expressed as radical scavenging activity (RSA) (Nowak et al., 2019). For each studied compound calibration curve of RSA vs. concentration (0.006-50.000% w/v) was prepared to calculate IC 50 , i.e. the concentration of the compound reducing 50% of free radicals. The concentration range of the analyzed samples in DPPH ethanolic solution were 0.0003-2.5000% w/v. Moreover, antioxidant activity of water I and M obtained after steam distillation (which was tested unchanged) was evaluated by DPPH method after 10-60 min of incubation (Brand-Williams et al., 1995;Makuch et al., 2020). Preparation of Vehicles Containing of Eugenol and Eugenyl Dichloroacetate To prepare cream vehicles beeswax (0.032 g), cholesterol (0.176 g) and vaseline (3.647 g) were put to glass beaker. The beaker was placed in water bath (70°C) to dissolve the contents. To the second beaker distilled water (5.882 g) and the appropriate amount of either eugenol or its ester were added and mixed using the recipe mixer at 1,375 rpm (Eprus ® U500U) to achieve a uniform consistency. Eugenol or its ester were added in an amount of 0.100 g to the aqueous phase of the cream, to obtain the concentration of these compounds (in cream vehicles) of 1.000% w/w. In the next stage, the content of the second beaker was added to the first beaker, and mixed using the recipe mixer to achieve a uniform consistency of the emulsion. The emulsion obtained is water in oil type. Moreover, for comparison purpose, cream vehicles without active substance were prepared, into which either eugenol or EDChA at a concentration of 1.000% w/w, was entered manually. Gel vehicles were prepared as follows: hydroxyethylcellulose (0.300 g), distilled water (9.600 g) and the appropriate amount of either eugenol or its ester were added (at a concentration of 1.000% w/w) to glass beaker. The beaker content was mixed manually to achieve a uniform consistency. In addition, another form (cream and gel vehicles) containing water I and M instead of distilled water was also prepared and evaluated. Cream formulation contained 60.409% w/w of clove water, while gel contained 96.970% w/w of clove water. In Vitro Evaluation of Free Radical Scavenging Activity The antioxidant activity of acceptor fluid taken after 24 h of permeation, and solutions obtained after skin extraction performed after the experiment was determined using the DPPH (according to the procedure described above) (Brand-Williams et al., 1995;Makuch et al., 2020), ABTS (Makuch et al., 2020) and Folin-Ciocalteau (Nowak et al., 2019;Makuch et al., 2020) methods. The ABTS assay is based on the generation of a blue/green ABTS radical, which is applicable to both hydrophilic and lipophilic antioxidant systems; whereas DPPH assay uses a radical dissolved in organic media and is, therefore, applicable to hydrophobic systems (Molyneux 2004). 
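The calibration-based IC50 estimation described above (building a curve of %RSA against concentration and reading off the concentration that quenches 50% of the DPPH radical) can be sketched as follows. The absorbance values are hypothetical, and the linear interpolation on the calibration curve is an assumption, since the text does not state how the curve was evaluated.

```python
import numpy as np

# Illustrative sketch of the DPPH workflow described above (hypothetical data).

def rsa_percent(a_blank, a_sample):
    """Radical scavenging activity: %RSA = (A0 - Ap) / A0 * 100."""
    return (a_blank - a_sample) / a_blank * 100.0

def ic50_from_curve(concentrations, rsa_values):
    """Concentration giving 50% RSA, by linear interpolation on the calibration
    curve of %RSA vs. concentration (both given in ascending order)."""
    return float(np.interp(50.0, rsa_values, concentrations))

# Hypothetical calibration points (% w/v in the DPPH solution) and absorbances:
conc = np.array([0.0003, 0.003, 0.03, 0.3, 2.5])
rsa = np.array([rsa_percent(1.00, a) for a in (0.98, 0.90, 0.62, 0.28, 0.05)])
print(f"IC50 ≈ {ic50_from_curve(conc, rsa):.4f} % w/v")
```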
Evaluation of Free Radical Scavenging Activity Using Diphenylpicrylhydrazyl Method

The antioxidant activity of the vehicles was evaluated using a slightly modified DPPH method. The procedure was as follows: to 2,850 µl of DPPH radical solution in acetone (absorbance at λ 517 nm of 1.00 ± 0.02), 150 µl of the acetone solution containing the tested formulation at a concentration of 10.0% w/w (0.100% w/w of active substance) was added, so that the concentration of active substance in the DPPH acetone solution was 0.005% w/w. The tube was wrapped in aluminum foil, sealed with a stopper and then incubated for 10 and up to 60 min at room temperature. Each sample was prepared in triplicate. After incubation, spectrophotometric measurements were carried out at the above-mentioned wavelength. The antioxidant activity obtained by this method is expressed in mmol TE/dm3, because trolox (TE) was used as a reference substance in the DPPH method. The antioxidant activity of the tested samples was calculated according to the following formula:

%RSA = [(A0 − Ap)/A0] × 100%

where: %RSA - antioxidant activity, A0 - mean value of the absorbance of the acetone solution of DPPH containing 150 µl of acetone, Ap - mean value of the absorbance of the acetone solution of the DPPH radical containing 150 µl of the acetone solution of the tested formulation.

Evaluation of Free Radical Scavenging Activity Using ABTS Method

First, an aqueous solution of potassium persulfate (2.45 mM) was prepared, to which an appropriate amount of ABTS reagent was introduced to obtain a 7 mM solution of ABTS in the aqueous potassium persulfate solution. The solution prepared in this way was incubated at 4°C for 24 h and then diluted with methanol (50% v/v) to obtain an absorbance of 1.000 ± 0.020 at 734 nm. The antioxidant activity of the acceptor fluid and of the solutions obtained after skin extraction was measured as follows: 2,500 µl of working ABTS solution and 25 µl of an ethanolic solution of the tested antioxidant were mixed in a spectrophotometric cuvette. The samples, prepared in triplicate, were incubated for 6 min at room temperature. After this time, the absorbance at 734 nm was measured.

Evaluation of Total Polyphenol Content Using Folin-Ciocalteu Method

To determine the total content of phenolic compounds in the tested samples, a method based on the use of the Folin-Ciocalteu reagent in alkaline medium was applied. The reaction is based on the spectrophotometrically recorded color change of the test solution from yellow to blue. Folin-Ciocalteu reagent was diluted tenfold with water in a dark bottle and incubated at room temperature for 60 min. The antioxidant activity of the acceptor fluid and of the solutions obtained after skin extraction was measured as follows: 1,350 µl of distilled water and 1,350 µl of sodium carbonate solution (0.01 mol/dm3) were mixed in a spectrophotometric cuvette with 150 µl of the diluted Folin-Ciocalteu solution and 150 µl of an ethanol solution containing the tested samples. The cuvette was sealed with a stopper and incubated for 15 min at room temperature. All the samples were prepared in triplicate. After this time, spectrophotometric measurements were carried out at 750 nm using water as a reference.

Skin Permeation Studies of Vehicles

The skin permeability of the vehicles containing eugenol and EDChA was assessed in a Franz diffusion cell consisting of a 2 ml donor chamber and an 8 ml acceptor chamber. The permeation area was 1 cm2.
The acceptor fluid, mixed with a magnetic stirrer, was a PBS (phosphate-buffered saline, pH 7.4) solution that maintained the physiological pH. The acceptor chamber was kept at a constant temperature of 37 ± 0.5°C with the VEB MLW Prüfgeräte-Werk type 3,280 thermostat. Before starting the test, Franz diffusion cells were allowed to equilibrate at 37°C for 15 min. Porcine skin was used for the study due to its similar permeability properties to human skin. The skin was from a local slaughterhouse. A fresh portion of skin from the abdomen was washed several times with a solution of PBS. Skin with a thickness of 0.5 mm was cut with a dermatome, and then it was wrapped in aluminum foil and frozen at -20°C for a maximum of 3 months. This freezing time ensured the stability of the skin barrier properties (Zhang et al., 2013). Before the study, the skin was thawed at room temperature for about 30 min, and then it was soaked in a PBS solution for 15 min to hydrate it. In the next stage, the skin was mounted in Franz diffusion cells. The integrity of skin was checked 1 h after its installation in the Franz diffusion chamber (SES GmbH Analyze Systeme, Germany). For this purpose skin impedance was measured using an LCR 4080 m (Conrad Electronic, Germany) operating in parallel mode at 120 Hz (kΩ error <0.5%). To make the measurement, the tips of the probes were immersed in the donor and acceptor chambers filled with the PBS solution. Membranes with an electrical resistance of >3 kΩ, corresponding to the resistance measured for normal human skin, were used in the study (Makuch et al., 2020;Janus et al., 2020). Preparations (1.000 g) containing one of the test compound (eugenol and EDChA) were placed in the donor chamber. All donor chambers were closed with a plastic stopper to prevent excessive evaporation of the vehicle. The described tests were carried out up to 24 h. An aliquot of 0.3 ml of the solution in the acceptor chamber was taken at specified intervals (30 min, 1, 2, 3, 4, 5, 8, and 24 h), and then supplemented with a fresh portion of buffer of the same pH (Makuch et al., 2020). The samples were analyzed by high-performance liquid chromatography (HPLC). After completion of permeation experiment, the skin was extracted to estimate the residual amount of tested active ingredients accumulated in it. The antioxidant activity of the obtained extracts was also tested using previously described methods (Brand-Williams et al., 1995). Extraction was carried out as follows: after the experiment was completed, the Franz diffusion chambers were dismantled, while the skin surface was washed three times with an aqueous solution of sodium lauryl sulfate (at a concentration of 0.5% w/w) to rinse of the excess of vehicle containing test compound. A patch (1 cm 2 diffusion surface) was cut from the skin prepared in this way, dried at room temperature, and then weighed and cut into smaller pieces. Then, 2 ml of concentrated methanol was added, and extraction was carried out for 24 h at 4°C. After 24 h of incubation, the skin was homogenized (for 3 min) using a homogenizer (IKA ® T18 digital ULTRA TURRAX, Germany). The obtained extracts were then centrifuged at 3,500 rpm for 5 min. The supernatant was analyzed by HPLC to determine the content of active ingredients, while tests on the antioxidant activity of the obtained extracts was evaluated using the DPPH, Folin-Ciocalteau, and ABTS methods. 
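Because a 0.3 ml aliquot was withdrawn at each time point and replaced with fresh buffer (as described above), the cumulative permeated mass is typically reconstructed by adding back the analyte removed with the earlier samples. A possible calculation consistent with this sampling scheme is sketched below; the correction formula is the standard one for Franz-cell sampling rather than a procedure stated explicitly in the text, and the concentration values are hypothetical.

```python
# Illustrative cumulative-mass calculation for Franz-cell sampling with
# replacement.  Volumes follow the text (8 ml acceptor chamber, 0.3 ml aliquots,
# 1 cm^2 diffusion area); HPLC concentrations below are hypothetical.

V_CELL, V_SAMPLE, AREA = 8.0, 0.3, 1.0  # ml, ml, cm^2

def cumulative_mass(concentrations_ug_per_ml):
    """Q_n = C_n * V_cell + V_sample * sum(C_i for i < n), in µg per 1 cm^2."""
    cumulative, removed = [], 0.0
    for c in concentrations_ug_per_ml:
        cumulative.append(c * V_CELL + removed)
        removed += c * V_SAMPLE
    return cumulative

# Hypothetical concentrations (µg/ml) at 0.5, 1, 2, 3, 4, 5, 8 and 24 h:
q = cumulative_mass([0.1, 0.3, 0.6, 1.0, 1.4, 1.7, 2.2, 2.5])
flux_last_interval = (q[-1] - q[-2]) / (24 - 8) / AREA  # µg/cm^2/h
```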
The cumulative mass of active substance (µg) permeating into the receptor chamber was calculated based on the concentrations of compounds in receptor fluid determined by HPLC. The permeation rate was determined based on the amount of permeated compound over a given period (μg/cm 2 /h). The accumulation of compounds in the skin was calculated by applying the amount of compound obtained after skin extraction; the results are given in μg/cm 2 of skin (Makuch et al., 2020). High-Performance Liquid Chromatography Analysis The samples were analyzed by high-performance liquid chromatography (HPLC) with a UV detector (Knauer, Berlin, Germany). The components tested were separated on a 125 × 4 mm column containing Hyperisil ODS; particle size 5 µm. The flow rate of the mobile phase, consisted of acetonitrile, water, and MeOH (28:64:8, by vol), was 1 ml/min. Twenty microliters of each analyzed sample was injected onto the column. Statistical Analysis Statistical calculations were done using Statistica 13 PL software (StatSoft, Polska). The results were evaluated using one-way analysis of variance (ANOVA). Significant differences were evaluated using Tukey post-hoc test. Probabilities p < 0.05 were considered to be statistically significant. Results are presented as the mean ± standard deviation (SD). Table 1 presents antioxidant activity of eugenol and its ester derivative, carried out by the DPPH method. Figure 1 presents the antioxidant activity of the clove water obtained after steam The studied compounds showed different antioxidant activity determined by DPPH method - Table 1. Studies have shown that the values of the parameter determining the concentration reducing 50% of free radicals (IC 50 ) for eugenol are inversely proportional to its antioxidant activity, i.e. the lower the IC 50 the higher antioxidant activity. Eugenol (IC 50 6.1 µM) had the highest activity. The value of the IC 50 parameter for eugenol was more than 8 times lower than the value described in the literature for this compound (IC 50 50.44 µM) (Floegel et al., 2011). Tables 3 and 4 presents the results for the antioxidant activity of solutions of the tested vehicles containing of eugenol and EDChA. Tables 5 presents the results of studies on the permeation and the accumulation of active substances contained in vehicles. Figures 2, 3 show of the comparison of in vitro permeation profiles for eugenol and EDChA contained in cream vehicles through the skin during the 24 h experiment. Figures 4, 5 show of the comparison of in vitro permeation profiles for eugenol and EDChA contained in gel vehicles through the skin during the 24 h experimen The study of DPPH radical scavenging capacity of the pure vehicles, containing no active substance (sample 1 and 14) and the vehicle prepared with the use of clove water as postprocessing waste (sample two and sample 11) showed that sample one did not show antioxidant activity, while samples 2 and 11 were characterized by a DPPH radical scavenging degree of: 8.2 ± 0.1% RSA 10 and 9.9 ± 0.1% RSA 60 and 16.4 ± 0.1% RSA 10 and 18.2 ± 0.1% RSA 60 , respectively. Cream vehicle containing 1.000% w/w antioxidant (eugenol and EDChA) were characterized by the capacity to react with DPPH radical. The highest efficacy was shown by the vehicles obtained in the following way: first a vehicle containing clove water was obtained, and then a suitable active substance (i.e. eugenol, a new eugenol ester derivative -EDChA) was added (manually) into the final vehicle. 
The vehicles showed the highest efficiency to react with the DPPH radical after 60 min of incubation. The %RSA of these samples decreased as follows: sample 12 (65.5 ± 0.1) > sample 9 (59.6 ± 0.1) > sample 13 (49.5 ± 0.1) - Table 2. In the case of studies carried out for the pure vehicle with distilled water instead of antioxidant solution (sample 1 - Table 3, sample 14 - Table 4), no antioxidant activity was shown (vehicle applied to the skin, acceptor fluid after 24 h of permeation, solution after skin extraction). The test results, presented in Table 3, show that solutions of acceptor fluids containing eugenol and EDChA were characterized by antioxidant activity evaluated with the DPPH, ABTS and Folin-Ciocalteu methods. The degree of reduction of the DPPH free radical (of acceptor fluid collected after 24 h of permeation) decreased in the following order: 1.6% RSA (for the vehicle containing EDChA - sample 8) > 0.8% RSA (for the vehicle containing eugenol - sample 7). The antioxidant activity of acceptor fluids after 24 h of permeation of samples 2-4, 6-7 and samples 9-12 was low, below 0.8% RSA - Table 3. The antioxidant activity (determined by the ABTS method) of acceptor fluid collected after 24 h of permeation showed that the vehicle containing eugenol had the highest antioxidant activity (8.8% RSA). Lower antioxidant activity was observed for the vehicle with EDChA (7.6% RSA). The lowest antioxidant activity was observed for samples 2 (<0.5% RSA), 11 (<2.2% RSA), 4, 6, 10 and 13 (<3.8% RSA), and 3, 5, 9 and 12 (<4.3% RSA) - Table 3. Acceptor fluid collected during 24 h of penetration of the tested vehicles containing EDChA and eugenol was characterized by the highest polyphenol content (0.1 ± 0.1 mmol GA/dm3). In contrast, the lowest concentrations were found for samples 2-6, 9-12 and 13 (<0.1 mmol GA/dm3) - Table 3. The results of studies on the antioxidant activity of the solutions of skin extracts obtained after the experiment showed that the vehicles containing eugenol (sample 7) and its ester derivative (sample 8) were also characterized by antioxidant activity - Table 4. The results obtained by the Folin-Ciocalteu method showed that the antioxidant activity of the solution obtained after skin extraction (vehicle containing EDChA) was higher still (0.3 ± 0.1 mmol GA/dm3) - Table 4.

In our in vitro research, the permeation of vehicles containing eugenol and its new ester derivative (EDChA) through pig skin was assessed. The experiment was carried out using a Franz diffusion chamber, in which the donor phase consisted of the vehicles tested. The acceptor phase was PBS solution, because it corresponds to systemic conditions, is isotonic in nature, and resembles conditions prevailing in the deeper layers of the skin (Makuch et al., 2020). As shown in Table 5, the application of the ester of eugenol in either the cream or the gel vehicle did not lead to an increase in the skin permeation of EDChA in comparison to eugenol applied in the same vehicle. After conducting the experiment for 24 h, the highest average cumulative mass was observed in the case of eugenol (20.5 ± 0.8 µg in the cream vehicle, 31.3 ± 4.3 µg in the gel vehicle). The mass was slightly lower in the case of EDChA (19.6 ± 1.8 µg in the cream vehicle, 28.8 ± 2.6 µg in the gel vehicle). The highest permeation rate to the acceptor fluid (µg/h) for cream vehicles containing eugenol and EDChA was observed between 4 and 5 h (Figure 2), while for gel vehicles it was between 3 and 4 h (Figure 4). Moreover, the average cumulative masses for vehicles containing eugenol and EDChA at 0.5, 1, 2, 3, 4, 5, 8, and 24 h are shown in Figure 3 and Figure 5. In our study, differences in the permeation of the active substances were found depending on the type of vehicle used (see Table 5). Considering the average cumulative mass of the compounds, the permeation from the vehicles used was ranked in the following order: gel vehicles containing eugenol > gel vehicles containing EDChA > cream vehicles containing eugenol > cream vehicles containing EDChA. After the experiment was carried out, the skin was extracted in order to evaluate the amount of the accumulated tested active ingredients. The obtained results showed that the concentration of substances (contained in the cream vehicles) in the analyzed extracts decreased in the following order: EDChA (173.9 ± 8.4 μg/cm2 skin) > eugenol (156.9 ± 7.0 μg/cm2 skin). Moreover, the concentration of substances (contained in the gel vehicles) in the analyzed extracts decreased in the following order: eugenol (334.4 ± 20.4 μg/cm2 skin) > EDChA (255.9 ± 20.1 μg/cm2 skin) - Table 5.

DISCUSSION

Eugenol has a hydroxyl group (-OH) attached to an aromatic ring with acidic properties, which could lead to antioxidant activity. Its free radical scavenging activity proceeds through the formation of phenolic radicals. These radicals are stable due to resonance caused by charge transfer and are not able to detach hydrogen from lipid or protein molecules (and thus decrease the oxidation). Replacement of hydrogen atoms in the aliphatic chain of EDChA by heteroatoms (in this case, chlorine atoms) enhances the anti-oxidative properties.
Eugenol esters containing chlorine atoms in the structure easily trap free radicals, giving up the H atom in the aliphatic chain. The reason is a change in the shape of the molecule, i.e. a change in length, direction, range and polarization of the bonds and a change in the symmetry of the particles. Introduction of chlorine atoms into the structure, causes polarization of bonds between carbon-chlorine atoms. The polarization of bonds between the carbon-chlorine atoms reduces the density of the electron cloud in the whole molecule and causes polarization of all close bonds present in the structure. As a result of this bond between the carbonhydrogen atoms in EDChA molecules, they change their length and polarity. In addition, the presence of chlorine atoms in the structure of EDChA changes the electroneutrality of carbon atoms. Moreover, the presence of the methoxy group (-OCH 3 ) in the eugenol and its ester increases the antioxidant properties of these compounds (Makuch et al., 2020). We demonstrated that cosmetic vehicles (both cream and gel) containing eugenol and new eugenol derivatives (EDChA), penetrated through biological membranes. The eugenol derivative (eugenyl dichloroacetate-EDChA) made by eugenol esterification with dichloroacetic acid, had a similar permeation through porcine skin compared to the starting eugenol. This study showed that newly developed eugenol modifications could be promising active ingredients into formulations applied to the skin and employed as an ideal alternative to commercial eugenol. We noticed that the type of vehicles (cream or gel) influenced the eugenol and EDChA transport through porcine skin. We observed that gel vehicles were the best enhancer of the skin permeation of both eugenol and its derivative. In most cases, a similar cumulative mass of eugenol and its ester was found in the acceptor fluid. Relationship was found between the lipophilicity of eugenol and its ester derivative in cream and gel vehicles and skin accumulation. The accumulation of EDChA was higher for cream vehicles in relation to the parent eugenol applied onto the skin. The greatest amounts of eugenol were accumulated in the skin when these compounds were used in gel vehicles. A relationship was also found between the antioxidant activity of vehicles containing clove water and vehicles containing distilled water. The highest antioxidant activity determined with the DPPH method was found for gel vehicles containing EDChA and eugenol (Table 2) as active substances and clove water (the aqueous fraction containing furfural, benzyl alcohol, methyl salicylate, 4-allilofenol, eugenol, β-caryophyllene, α-caryophyllene, eugenyl acetate, and β-caryophyllene oxide) as a water phase. These compounds, due to their mechanism of action, can have a beneficial effect on the balance between oxidants and antioxidants in the body, minimizing the effects of oxidative stress. In addition, the good permeability of vehicles containing eugenol and EDChA through the skin and their proper accumulation in the skin (Table 5; Figures 2, 3) as well as their antioxidant capacity (Tables 3, Tables 4) could also reduce the exogenous effects of free radicals. Cloves are rich in secondary metabolites known for antioxidant activity. These preliminary results highlighted, for the first time, that clove water showed antioxidant activity. Thus, these first findings support the use of clove water, in vehicles; however, more studies are needed to better clarify the antioxidant mechanisms. 
A second direction of research on the proposed vehicles, to be developed in the near future, is to investigate the correlation between the content of the additional compounds accompanying eugenol or the new eugenol derivative and the antioxidant effect of the test vehicles. Perhaps some of these additional compounds (identified in clove water) in combination with eugenol or EDChA will increase the antioxidant properties of the vehicles. Taking into consideration the possible future use of the tested vehicles, the continuation of these tests may open the way to further applications. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. AUTHOR CONTRIBUTIONS Conceptualization, EM and AN; Writing-review and editing, EM, AN, and AG; Methodology, EM, AN, WD, and AK; Reviewing, RP and AK; Formal analysis, AN, EM, and AK; Investigation, AN, AK, and EM; Writing-original draft, AN and EM; Supervision, AK and RP. All authors read and approved the manuscript.
2021-06-25T13:28:57.562Z
2021-06-25T00:00:00.000
{ "year": 2021, "sha1": "82d47033844bc0fd05a11cc94c7263fbbf4e0097", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2021.658381/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "82d47033844bc0fd05a11cc94c7263fbbf4e0097", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
18058866
pes2o/s2orc
v3-fos-license
The MOBI-Kids Study Protocol: Challenges in Assessing Childhood and Adolescent Exposure to Electromagnetic Fields from Wireless Telecommunication Technologies and Possible Association with Brain Tumor Risk The rapid increase in mobile phone use in young people has generated concern about possible health effects of exposure to radiofrequency (RF) and extremely low frequency (ELF) electromagnetic fields (EMF). MOBI-Kids, a multinational case–control study, investigates the potential effects of childhood and adolescent exposure to EMF from mobile communications technologies on brain tumor risk in 14 countries. The study, which aims to include approximately 1,000 brain tumor cases aged 10–24 years and two individually matched controls for each case, follows a common protocol and builds upon the methodological experience of the INTERPHONE study. The design and conduct of a study on EMF exposure and brain tumor risk in young people in a large number of countries is complex and poses methodological challenges. This manuscript discusses the design of MOBI-Kids and describes the challenges and approaches chosen to address them, including: (1) the choice of controls operated for suspected appendicitis, to reduce potential selection bias related to low response rates among population controls; (2) investigating a young study population spanning a relatively wide age range; (3) conducting a large, multinational epidemiological study, while adhering to increasingly stricter ethics requirements; (4) investigating a rare and potentially fatal disease; and (5) assessing exposure to EMF from communication technologies. Our experience in thus far developing and implementing the study protocol indicates that MOBI-Kids is feasible and will generate results that will contribute to the understanding of potential brain tumor risks associated with use of mobile phones and other wireless communications technologies among young people. INTRODUCTION A number of national and international organizations have reviewed potential health effects of radiofrequency (RF) field and identified research gaps (1)(2)(3)(4)(5)(6)(7). In 2011, the International Agency for Research on Cancer (IARC) of the World Health Organization (WHO) classified RF fields as "possibly carcinogenic to humans -2B" (7), a classification confirmed subsequently by the EU funded European Health Risk Assessment Network on Electromagnetic Fields Exposure (EFHRAN) (5). The rapid worldwide increase in mobile phone use in adolescents and, more recently, children has generated additional interest in the possible health effects of exposure to RF [EU funding calls ENV.2008.1.2.1.1. "Health impacts of exposure to radiofrequency fields in childhood and adolescence" and ENV.2013.6.4-2 "Closing gaps of knowledge and reducing exposure to electromagnetic fields (EMF)"]. Concern particularly relating to children and adolescents originates from the likelihood that, if an increased risk exists, it could be greater for exposure at younger ages due to: increased sensitivity of the developing neurological system to effects of RF signals; higher estimated specific absorption rate (SAR) in children (due to a thinner skull and ears compared to adults) (8); and greater lifetime cumulative exposure compared to those who began mobile phone use in adulthood. 
MOBI-Kids, a multinational case-control study, was therefore initiated to assess the potential effects of exposure to RF and of extremely low frequency (ELF) electromagnetic fields (EMF) from mobile phones on the development of central nervous system tumors among young people. This study builds upon the methodological experience of INTERPHONE, the 13-country collaborative effort investigating the possible association between mobile phone use and risk of gliomas, meningiomas, acoustic neurinomas, and parotid gland tumors among adults diagnosed during 2000-2004 (9-11). In designing MOBI-Kids, considerable effort was invested in improving the INTERPHONE design and adapting it to changing communication technologies and a younger age range. Quantitative exposure assessment is being improved by a group of researchers experienced in non-ionizing radiation, environmental, and occupational exposure assessment. MOBI-Kids is the largest study to date investigating the potential association between mobile phone use and the risk of brain tumors among young people. To date, only one study, CEFALO (12), focused specifically on the possible association between mobile phone use and brain tumors among the young. No evidence of an increased brain tumor risk in association with years of use of mobile phones or cumulative call time was found among 352 cases diagnosed between 2004 and 2008. However, subjects in CEFALO were young (the median age at diagnosis was 13 years), and were not long-term or heavy users (the median period of use was 2.7 years). Large-scale ongoing studies of mobile phones are under way, notably the COSMOS study (13), but are restricted to adults. This paper describes the study design of MOBI-Kids and the challenges encountered and solutions sought while developing the protocol with respect to: (1) choosing a representative control group while ensuring a high compliance rate under the chosen case-control design; (2) investigating a young study population spanning a relatively wide age range (10-24 years), with heterogeneous distributions of tumors and patterns of mobile phone use; (3) conducting a large, multinational epidemiological study while following increasingly strict ethics requirements; (4) investigating a rare and potentially fatal disease; and (5) assessing exposure to RF and ELF fields from changing communication technologies. Where useful, data collected in the study up until June 2014 are used to illustrate the study methods and associated challenges.
STUDY DESIGN MOBI-Kids is a prospective case-control study conducted in 14 countries: Australia, Austria, Canada, France, Germany, Greece, India, Israel, Italy, Japan, Korea, New Zealand, Spain, and The Netherlands. As brain tumors in young people are rare, and because the effect of EMF from mobile phones, if any, is probably weak, MOBI-Kids was designed as a multinational collaboration spanning a target population of almost 40 million individuals (Table 1). The case-control design was chosen, as a cohort study with similar statistical power would be extremely expensive, requiring a similarly sized population with many years of follow-up. STUDY POPULATION The target study population consists of all males and females aged 10-24 years residing in the study region with a confirmed diagnosis of an eligible first primary brain tumor diagnosed during the study period. In some countries, the study region encompasses the entire country, while in others it is restricted to defined areas (usually the major metropolitan areas) (Table 1). The period of case ascertainment varies by country, first beginning in mid-2010 and continuing through 2014. The age range was an important consideration in defining the study population. The most likely mechanism by which exposure to ELF and RF-EMF may increase the risk of cancer is through a tumor promotion or progression effect (14,15). Therefore, in the case of exposure to ELF and RF-EMF, a relatively short latency period may be a reasonable assumption, particularly for tumors in young people. However, we had to ensure that the prevalence of mobile phone use in the past would be sufficient for the study to have adequate statistical power. Given the historically low use of mobile phones in children below the age of 12 years and the comparatively high use in teenagers and young adults, a study of brain tumors in subjects aged 15-24 years would have the most power to evaluate tumor risk from mobile phone use in young adults. However, as mobile phones have become increasingly popular among 8-10 year olds since 2005 (16), and because of the expanding number of other sources of RF signals in the home (e.g., Wi-Fi), it is also of interest to study tumors in children aged 10-14 years. There appears to be little benefit in including younger subjects in this study due to their limited use of mobile phones.
The relatively wide age range of the study population (encompassing both children and adults) raises issues, however, such as the need to design a questionnaire that is clear to the entire age range and to properly separate questions to be answered by parents and by the subjects themselves. Further, covering the ages of 10-24 requires including both adult and pediatric services, complicating ethics board approvals, and study logistics. CASE DEFINITION Eligible diagnoses include only tumors originating in parts of the brain likely to experience the highest exposure to RF-EMF from mobile phones, which mainly comprises tumors not located in the midline (Supplementary Material) (17). Both benign and malignant brain tumors (not only gliomas and meningiomas but also many other tumor types) are included in the study. This histological heterogeneity of tumors could dilute a carcinogenic effect, if it exists, for a specific type of brain tumor. However, considering the rarity of these tumors among children, a separate analysis for each tumor type (apart from glioma and by grade of malignancy) will likely be unfeasible. A case is excluded if s/he has insufficient knowledge of the study language(s) and/or a known genetic syndrome related to brain tumors (e.g., neurofibromatosis). The original expected number of cases in the target age range was of the order of 2,000. With the implementation of the study, however, it became apparent that the number of eligible cases is, in fact, much lower, in large part due to an underestimation of the number of midline tumors in the study population and, to a lesser extent, the failure of busy medical staff to notify eligible patients in some centers. In most centers, it is difficult to know exactly how many cases are ineligible as doctors/hospital staff will generally not inform study staff of ineligible cases. However, centers with access to detailed, reliable registry information or hospital records have excluded from one-third to more than one-half of cases due to an ineligible (midline) diagnosis. Table 1 indicates the revised expected number of eligible cases per year; the revised expected total number of cases to be included in MOBI-Kids is around 1,000, based on each center's length of time in the field and other factors such as number of participating hospitals and accessibility to eligible cases. Fortunately, the MOBI-Kids study still has sufficient statistical power despite the reduced number of cases (see Study Power below). SELECTION OF CONTROLS Two hospital-based controls (who underwent an appendectomy for suspected diagnosis of appendicitis) are selected for each case, and matched on: sex; age (±1 year for cases younger than 17 years and ±2 years for cases 17 years and older); date of surgery/interview (±3 months); and geographic area of residence. In centers experiencing difficulties recruiting controls under the above criteria, the protocol was modified to allow more flexibility: date of surgery (±4 months); expanded area of residence (at the center's discretion); and broader age range (an additional 6 months). In addition to the cases' exclusion criteria, controls are excluded if the interviewer decides they are mentally unable to understand and answer the questions. Care is taken to select controls from the same population base as the cases. Since cases are identified from tertiary centers, many more hospitals must participate in the identification of controls to cover the catchment area from which cases may arise. 
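As an illustration of how the individual matching criteria listed above can be operationalized, the following sketch checks whether a candidate appendicitis control is an eligible match for a given case under the core protocol (same sex, age within ±1 year for cases younger than 17 and ±2 years otherwise, surgery/interview date within ±3 months, and the same geographic area). The data structure, the example values, and the month-to-day conversion are assumptions made for illustration, not part of the study protocol.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Subject:
    sex: str           # "M" or "F"
    age_years: float   # age at the reference/surgery date
    ref_date: date     # case reference date or control surgery date
    region: str        # geographic area of residence

def is_eligible_match(case: Subject, control: Subject,
                      months_tolerance: int = 3) -> bool:
    """Core-protocol matching check for one case/control pair (illustrative)."""
    if case.sex != control.sex:
        return False
    # age tolerance depends on the case's age
    age_tol = 1.0 if case.age_years < 17 else 2.0
    if abs(case.age_years - control.age_years) > age_tol:
        return False
    # surgery/interview dates within the allowed window (~30.4 days per month)
    max_days = round(months_tolerance * 30.4)
    if abs((case.ref_date - control.ref_date).days) > max_days:
        return False
    return case.region == control.region

case = Subject("F", 15.2, date(2013, 4, 2), "Barcelona metro")
control = Subject("F", 15.9, date(2013, 6, 20), "Barcelona metro")
print(is_eligible_match(case, control))                       # True under the core criteria
print(is_eligible_match(case, control, months_tolerance=4))   # relaxed protocol variant
```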
The rationale for appendicitis patients was the inherent difficulty in case-control studies of recruiting representative controls, which is essential to prevent selection biases that could jeopardize the validity of the study results. Recent studies have shown a considerable decline in participation rates among population controls. In the INTERPHONE study, only 54% of controls participated (9); participation was further shown to be selective with respect to phone use, complicating the interpretation of study results (10,18). Given the age range in the current study, we expected selective participation to be an issue since young people have distractions that may prevent their participation. Young adults' participation is further complicated by ethics board requirements that require parental approval to participate (generally at ages 10-18, depending on local ethics legislation). Germany, the only center to recruit both hospital- and population-based controls, has much higher participation rates among hospital-based controls compared to population-based controls. This indicates that compliance rates are indeed much higher when using hospital-based controls (as opposed to population-based controls) and that our choice of hospital-based controls may reduce selection bias caused by low control participation rates. Appendicitis patients were ultimately chosen as controls as this is a common disease among subjects in the age range of the study, neither related to mobile phone use nor to socioeconomic status. Known risk factors for acute appendicitis include age (peak: 10-19 years), sex, ethnicity/race, family history of appendicitis, infection, seasonal variation, and having cystic fibrosis (19,20). This limited number of risk factors, unrelated to any exposure of interest in our study, ensures that, unlike other hospital-based controls, appendicitis controls are the most likely to represent the general population from which the cases arise. RECRUITMENT OF CASES AND CONTROLS Because of the severity of brain tumors, rapid case ascertainment is critical. Active identification of eligible cases and controls is accomplished through contact with neurosurgery, radiology, and oncology units for cases and general surgery for controls (both adult and pediatric units). Completeness of case ascertainment is assessed by periodically reviewing cancer registries and/or hospital discharge records (where available). Cases are ascertained rapidly and every effort is made to interview them as soon after diagnosis as possible to minimize non-participation and recall bias that may occur due to deteriorating cognitive abilities among cases. Controls are identified and interviewed as soon after identifying a case as possible. For logistical reasons, however, some time may lapse between identifying and interviewing a subject. To ensure strict data quality, we permit a maximum of 12 months between a case interview and his/her reference date (date of first image showing a suspicion of a space-occupying lesion) and between a case and a matched control interview (Figure 1). As of June 2014, 2,990 eligible participants (878 cases and 2,112 controls) had been identified. Participation rates range from 78 to 83% and 60 to 69% among cases and controls, respectively (range based on best and worst case scenarios regarding pending subjects' final decisions) (Table 2). Of the 566 cases who have been interviewed, 73% have at least one identified control and 53% have two interviewed controls (Table 3).
Seventy-nine percent of controls' interviews were performed within 6 months (26% within 1 month) of the case's interview. Three-quarters of cases were interviewed within 6 months of their diagnosis, with 55% being interviewed within 3 months (Table 3). The study population has slightly more males than females, and more participants in the youngest age range (Table 4). PROXIES The core protocol specifies that proxies (preferably the parents) will be approached if a case has passed away or is too ill to respond to questions. Conversely, no proxies are approached for controls since in this age group the number of controls deceased or too ill to respond is expected to be minimal, therefore not introducing any selection bias. However, if the study subject is young and/or their parents prefer to be present, the parents may help answer the questionnaire (for both cases and controls). Only 3% of cases needed a proxy interview because they were too ill or had passed away; however, an additional 40% of cases were interviewed with a parent or guardian either because they were young or the family preferred to be present (results not shown). ETHICS COMMITTEES Ethics approvals for conducting the study were obtained in each country, usually in each participating hospital (Table 1). As MOBI-Kids involves both adults and children, consent is given by the subject, parent/guardian, or both according to age and local ethics committee requirements. All subjects (and/or parents/guardians) are asked to sign an informed consent form before participating in the study. In recent years, ethics approvals have become more complex. Seven centers had to obtain ethics approvals from each individual hospital (median number of ethics approvals per country: 16; range: 1 national ethics committee in France to 69 individual approvals in Spain) (Table 1). In Austria, ethics requirements changed during the study period, requiring study staff to stop recruiting participants and to submit applications at county-level ethics committees, resulting in a loss of over a year of fieldwork. Ethics requirements have become more restrictive. Several centers are not allowed to recruit or even contact patients until they have signed the informed consent form, placing the responsibility for recruitment on already overworked doctors/hospital staff. Besides making logistical aspects of MOBI-Kids more difficult, burdensome ethics requirements could have significant implications for other epidemiological studies as it means the denominator is uncertain: we have to rely on busy clinicians to carefully record all eligible cases who were recruited to join the study and to follow up on their recruitment. Furthermore, recruitment of controls by doctors or hospital staff is especially onerous as appendicitis patients are only in the hospital for a short time, and further contact with them is limited. It is impractical to expect busy hospital staff to recruit controls following a rigorous epidemiological study protocol, but, given the strict ethics requirements, several countries are left with no choice in this regard. As mentioned above, these issues contributed to an appreciable reduction in the number of cases recruited relative to the originally projected number.
QUESTIONNAIRES AND STUDY INSTRUMENTS Trained interviewers administer either an electronic or paper version of a detailed questionnaire developed from INTERPHONE and other recent brain tumor and/or mobile phone studies (21)(22)(23)(24), modified to include technological advancements, simplified for younger subjects, and optimized based on pilot testing in several countries. The main questionnaire includes demographic variables; use of communication technologies (mobile phones, cordless telephones, and Wi-Fi); exposure to non-communication sources of ELF and RF-EMF; occupational history including occupational exposures to EMF; and other possible risk factors for brain tumors (e.g., medical history and radiation exposures). The detailed section on mobile phone use is administered only to subjects answering "yes" to the screening question about ever having been a "regular mobile phone user", defined as making on average at least one call per week for 3 months or more. Questions are asked about initial and current use, including number and duration of voice calls; use of hands-free kits, speaker phone, and/or Bluetooth headsets; laterality of use; proportion of time using the phone in urban/suburban/rural areas; and other phone usage [i.e., number of SMS and other messaging apps, and time spent using email, Internet, and voice over Internet Protocol (VoIP) (e.g., Skype)]. Subjects are also asked about changes in phone use to further characterize their phone use history. All makes and models of phones used are identified with the assistance of a custom-made searchable database containing over 6,500 phones. Preliminary analyses show that <2% of the questions in the main questionnaire have 5% or greater "do not know" responses, indicating that the questionnaire is generally clear to the entire study population (results not shown). In addition to the subject's questionnaire, parents (preferably the mother) are asked about maternal smoking history and other exposures before conception, during the pregnancy and the first trimester of life of the child, as well as about the pregnancy itself, the child's delivery, and her/his school history. Parental occupational histories are collected for both parents. Clinical data regarding the disease status, surgery, pathology, imaging needed for diagnosis verification, and tumor classification are collected from all available medical files. NON-RESPONDENTS In case of refusals, where access to study subjects is allowed by ethics committees, subjects are asked to complete a short non-response questionnaire about mobile phone use and maternal education level. This questionnaire will be used to evaluate possible selection bias among participants. VALIDATION STUDIES Validation of self-reported phone use is conducted by comparing responses of consenting subjects to network operator records. In addition, Mobi-Expo, a separate but complementary study of volunteers as well as a group of MOBI-Kids controls using software-modified smartphones (SMPS), collects self-reported phone use as well as use patterns, including laterality and data use recorded by the SMPS. Mobi-Expo will provide important information about mobile phone usage patterns in young people recorded by the SMPS as well as a means of validating self-reported mobile phone use. TUMOR LOCALIZATION Neuroradiologists will locate each case's tumor on a generic 3D head model using cases' MRI or CT scans, similar to what was done in INTERPHONE (25).
Unlike INTERPHONE, however, there are four head models corresponding to three child-sized and one adult-sized head (according to age) and much more sophisticated exposure estimation. EXPOSURE ASSESSMENT Collecting reliable, valid data on complex and rapidly changing patterns of exposures, while minimizing recall bias and errors, has been a significant challenge in MOBI-Kids. Therefore, considerable effort was invested in developing the questionnaire's exposure sections based on an extensive ELF and RF-EMF measurement and modeling campaign (26) and experience gained from INTERPHONE and expert opinions. Much thought was given to minimizing recall biases (e.g., use of prompts and a searchable mobile phone database to facilitate identification of phones used; first asking questions about present use and then previous use). In addition, we allow a maximum of 12 months between the case and control interviews (and also between the case's reference date and interview) to minimize differential recall bias between cases and controls. MOBI-Kids includes numerous validation checks to ensure the accuracy of the questionnaire responses. The development of an electronic mobile phone database containing details on several thousand mobile phones has resulted in a substantial reduction in unknown phone models. Before launching the electronic mobile phone database in June 2011, 36% of phones were unknown (that is, subjects could not identify the make and/or model), whereas only 16% of phones since June 2011 are unknown, clearly demonstrating the benefit of a searchable mobile phone database to assist in identifying mobile phones, an important factor in determining the exposure from the phones. Due to the rapid increase in the use of intermediate frequency (IF) technologies, some questions about sources of IF were added to the questionnaire in early 2013; the job histories will also be coded for IF-exposed jobs. This capability to address emerging issues in non-ionizing radiation highlights the flexibility of exposure assessment in the case-control approach, to address changes in types of exposures in a rapidly evolving technological field. STUDY POWER As discussed above, despite our best efforts to reach the original expected sample size of approximately 2,000 cases, the revised projected number of cases is just under 1,000. However, preliminary results on mobile phone use among controls indicate that 77 and 83% of males and females, respectively, were defined as ever using a mobile phone regularly (data not shown). In keeping with the INTERPHONE study, subjects who had used a mobile phone for <1 year were considered "never" regular users. Further, approximately 14% of all subjects in MOBI-Kids have used a mobile phone for 10 years or longer, the threshold for long-term use in INTERPHONE. As this was a higher proportion than originally expected, our power calculations were revised based on the updated expected number of subjects and updated exposure indicators. Assuming that 971 cases are included in matched analyses, the study has 79% power to detect an increased risk of 40% [the estimated increase in the risk of glioma seen in the highest decile of phone use in INTERPHONE (10)], assuming 10% have used a mobile phone for 10 years or longer; power increases to 90% assuming 15% are "long-term" mobile phone users. STATISTICAL ANALYSIS The main analysis will be based on standard statistical methods for the analysis of case-control studies (18).
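To make the reference to standard matched case-control methods concrete, the sketch below fits a conditional logistic regression for 1:2 matched sets by directly maximizing the conditional likelihood, in which each set contributes exp(beta*x_case) / sum_j exp(beta*x_j) over the case and its matched controls. This is a toy illustration on simulated data with a single binary exposure indicator, not the MOBI-Kids analysis code; in practice dedicated epidemiology or survival-analysis packages would be used.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Simulated 1:2 matched sets with one binary exposure x
# (1 = "long-term mobile phone user"); values are illustrative only.
n_sets, true_or, prev = 971, 1.4, 0.10
p_case = true_or * prev / (true_or * prev + (1.0 - prev))  # case exposure probability

sets = [(rng.binomial(1, p_case), rng.binomial(1, prev, size=2)) for _ in range(n_sets)]

def neg_log_conditional_likelihood(beta):
    """Conditional likelihood for 1:M matched sets:
    each set contributes exp(beta*x_case) / sum over members of exp(beta*x_j)."""
    nll = 0.0
    for case_x, control_xs in sets:
        denom = np.exp(beta * case_x) + np.exp(beta * control_xs).sum()
        nll -= beta * case_x - np.log(denom)
    return nll

fit = minimize_scalar(neg_log_conditional_likelihood, bounds=(-3.0, 3.0), method="bounded")
print(f"estimated log-OR = {fit.x:.3f}  (OR = {np.exp(fit.x):.2f}, simulated truth = {true_or})")
```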
As in INTERPHONE, this study will use two approaches to characterize exposure from mobile phones. The first will be based on self-reported history of use (including number of years of use, duration of calls, laterality, and other exposure metrics) calibrated against objective data, while the second will expand on INTERPHONE's methods to estimate the amount of RF energy absorbed in the brain at the tumor location (25,27) (as well as brain exposure to IF and ELF when possible). CONCLUSION In spite of its challenges, the advantages of MOBI-Kids include its large sample size -it will be the largest study to date on this topic in young people -covering 14 participating countries. Subjects are being identified and recruited in a time period in which mobile phone use in young people has become more prevalent, thus, increasing the statistical power and overall representativeness and generalizability of the results. In addition, MOBI-Kids includes extensive exposure assessment work and validation studies using both historical provider records and SMPS to counteract potential recall bias. Despite the various challenges faced by the study team (which have implications for other epidemiological studies), our experience thus far in developing and implementing the study protocol indicates that MOBI-Kids is feasible and will generate results contributing to the understanding of potential brain tumor risks associated with use of mobile phones and other wireless communication technologies among young people. that the study was carried out with care and with due consideration for the participants. All the hospital services (neurosurgery, neurology, oncology, pediatric oncology, general surgery, etc.) are gratefully acknowledged for their cooperation and collaboration in recruiting participants. We would also like to thank mobile network operators for their collaboration in this study. Finally, we would like to thank all participants and their relatives who took part in this study. We appreciate it very much. France: we would like to thank Dr. Martine Hours for her valuable advice and assistance in the implementation of the study in France. We are grateful to Dr. Luc Bauchet for supporting the implementation of the study and for helping us in the solicitation of neurosurgeons. Germany: we are very grateful to Vanessa Kiessling, Jenny Schlichtiger, and Iven-Alex Heim for their dedication to the conduct of the study in Germany. Also, we would like to thank Susanne Brilmayer and Silke Thomas for their contribution to the planning and implementation of MOBI-Kids in Germany. India: we are deeply thankful to Dr. Rakesh Jalali for helping us with the recruitment of study participants. Italy: we thank all cooperating professionals who made this study possible, Antonio Argentino, Anna Maria Badiali, Emma Borghetti, Valentina Cacciarini, Laura Davico, Laura Fiorini, Francesco Marinelli, Sara Piro, and Caterina Salce. Japan: we appreciate generous support and advice offered by the Japanese Epidemiological Committee. Spain: we would like to thank Carmen Caban for setting up, and Alex Albert for establishing and maintaining the complex electronic database. We are grateful to Patricia de Llobet for her valuable contribution in testing the electronic database as well as creating and updating the mobile phone database. We would also like to acknowledge Laura Argenté as project manager. 
Finally, we appreciate the enormous time and effort of the regional coordinators (Irene Gavidia, Eva Ferreras, Marta Cervantes, Angeles Sierra, and Angela Zumel). The Netherlands: we would like to acknowledge the following contributions: the Dutch Society for Pediatric Oncology (SKION), represented by A. Antoinette Y. N. Schouten-van Meeteren, for the support and collaboration in setting up the case-control study in The Netherlands, and the Dutch Childhood Cancer Parent Organization (VOKK), represented by M. C. Naafs-Wilstra, for the collaboration in patient recruitment. Funding: the research leading to these results has received funding from the European Community's Seventh
2016-05-12T22:15:10.714Z
2014-09-23T00:00:00.000
{ "year": 2014, "sha1": "0b2833fc89b04be343098203b58f1a25f6cba0a5", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2014.00124/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0b7bccfbf610910ca7e36aa63238646ce5ce6c35", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
118898784
pes2o/s2orc
v3-fos-license
Surface and Image-Potential States on the MgB_2(0001) Surfaces We present a self-consistent pseudopotential calculation of surface and image-potential states on $MgB_2(0001)$ for both $B$-terminated ($B-t$) and $Mg$-terminated ($Mg-t$) surfaces. We find a variety of very clear surface and subsurface states as well as resonance image-potential states n=1,2 on both surfaces. The surface layer DOS at $E_F$ is increased by 55% at $B-t$ and by 90% at the $Mg-t$ surface compared to DOS in the corresponding bulk layers. The discovery of superconductivity in a simple metal polycrystalline compound MgB2 with a critical temperature Tc ~ 39 K (Ref. 1) has generated an explosion of research activity in studying the mechanism of the superconductivity and properties of this compound. For instance, the superconductivity gap has been measured by both bulk sensitive methods (4,19) and surface sensitive techniques. Compared to bulk measurements the surface sensitive experiments, namely scanning tunneling spectroscopy [5-9] (STS) and point-contact experiments (10), give generally a smaller energy gap varying in the surface region (7,10). This may be caused by two effects: surface contamination and/or disorder and by change of electronic structure at the surface. Qualitatively different STS spectra obtained by different groups on polycrystalline MgB2 pellets and films reflect different surface contamination and microstructure of the sample surfaces. However, for a single crystal a possible change of a high density of states (DOS) at the Fermi level, E_F, high phonon frequencies and strong electron-phonon interactions can also lead to a change of the energy gap and Tc at the surface. Very recently two groups have announced the preparation of single crystals of MgB2 with edge angles of 120° (26,27). These studies open up new prospects for experimental investigations of surface properties in MgB2 including the surface superconductivity. Due to strong covalent interactions within B planes (11,12,22,23) the (0001) termination of MgB2 is supposed to be more favorable. However nothing is known about the atoms which form the topmost layer of MgB2(0001). The study of the (0001) termination of other metal diborides which also have crystal structures of the AlB2 type have shown that some metal diborides (TiB2, HfB2) are terminated by metal atoms (28,29) while the topmost layer of TaB2 is formed by a graphitic boron layer (30). Here we report ab initio calculations of the electronic structure of the MgB2(0001) surface for both types of termination. In order to assess the effect of surface relaxation on surface states we have computed the surface electronic structure for the ideally bulk terminated crystal as well as for surfaces with the first interlayer spacing contracted and expanded by 6%. The bulk electronic structure of MgB2 (Refs. 11, 12, 22, and 23) leads to an unconventional bulk states projection with very wide absolute and symmetry energy gaps (Fig. 1) which support a variety of surface states and give an additional contribution to crystal reflectivity in an energy interval just below the vacuum level where resonance image-potential states arise. The surface and image states are of crucial importance for the description of the surface dynamical screening, electron (hole) excitations, and superconductivity at MgB2 surfaces. We show that for the Mg-terminated (Mg-t) surface the surface states contribution nearly doubles the surface DOS at E_F compared to the bulk Mg layer DOS.
For the B-terminated (B-t) surface the surface state contribution increases the surface DOS at E_F by 55% compared to that in the bulk. Special attention is focused in this paper on image potential states. We find that Mg and B layers possess distinct reflectivity that leads to different localization of image state wave functions in the bulk region. Very recently the layer density of states for a 9-layer slab of MgB2(0001) has been calculated by using the full-potential LAPW method (31). Kim et al. (31) discussed in detail the influence of the enhancement of the DOS near E_F on the superconductivity of surface layers. In contrast to Ref. 31, in the present work we mostly address the surface band structure of MgB2(0001) including binding energies and dispersion of surface, subsurface, and image potential states. The calculations of charge density have been performed within the self-consistent local density-functional plane-wave pseudopotential method by using a supercell of 17 atomic layers and 7 layers of vacuum (32). This supercell is big enough to ensure a good description of both surface and bulk states. Experimental values of the lattice constants a = 5.8317 a.u. and c = 6.6216 a.u. used in the evaluation have been taken from Ref. 1. The 17-layer slab representing the B-t (Mg-t) surface consists of 9 B (Mg) layers alternating with 8 Mg (B) layers. As the LDA potential does not describe the correct asymptotic potential behavior in the vacuum region, we modify it by retaining the self-consistent LDA form for z < z_im, where z_im is the image plane position, and replacing it in the vacuum region for z > z_im by V(z) = {exp[-λ(z - z_im)] - 1}/[4(z - z_im)]. The damping parameter λ is a function of (x,y) and is fixed by the requirement of continuity of the potential at z = z_im for each pair of values (x,y). With the use of the self-consistent charge density obtained for a 17-layer slab we have constructed the charge density for a 35-layer slab by inserting 18 bulk layers into the center of the slab. The vacuum space was increased from 7 to 21 layers. This vacuum interval is enough to accurately describe the n = 1 and 2 image states. Finally the LDA potential was generated for this new supercell with a correct image tail in the vacuum. In Figs. 1(a) and 1(b) we show the calculated projection of the bulk band structure onto the surface Brillouin zone together with surface states for B-t and Mg-t surfaces, respectively. The light gray areas show the projected bulk states and the gray ones indicate the σ states. A remarkable feature of the bulk states projection is the presence of two wide absolute energy gaps. The lower gap separates the s-bulk bands and p_z bands of boron, the upper gap crossed by the p_x,y bulk bands of B is located in the vicinity of the Fermi level, E_F. The B-t surface [Fig. 1(a)] has 4 surface states strongly localized in the topmost boron layer and 3 subsurface states. All these states show energy dispersion which repeats that of the bulk bands. Two surface states degenerated at the Γ point are of p_x,y symmetry (σ states). They split off from bulk states of the same symmetry by 0.45 eV and have an energy of 1.23 eV relative to E_F. Their charge density is completely localized in the topmost layer (Fig. 2). One can consider these states as two-dimensional quantum-well states due to their extremely strong localization in the z direction: they decay into the bulk much faster than do conventional surface states which are characterized by a smooth exponential decay.
Another surface state with energy of -2.74 eV is of p_z symmetry (π state), 75% of this state being concentrated in the three surface atomic layers and in the vacuum region (Fig. 3). The lower surface state is of s symmetry and splits off from bulk states by 0.4 eV, 70% of the state being localized in the topmost layer. Two subsurface states, degenerate at the Γ point, are localized in a few subsurface B layers, with 40% of the state concentrated in the second B layer. The third subsurface state is located at the bottom of the s bulk boron states. The Mg-t surface shows a distinct electronic structure compared to the B-t one. In particular, the occupied Mg surface state of s-p_z symmetry with energy of -1.94 eV appears at the Γ point. Its charge density distribution is localized mostly (65%) in the Mg surface layer and in the vacuum region, as shown in Fig. 4. The origin of this state can be understood from a simple charge transfer picture. In the bulk the Mg atom donates two valence electrons to the adjacent B planes, thus moving all the Mg bands up to E > E_F. In the surface layer the Mg atom donates one electron to the subsurface B plane while another electron forms an occupied dangling bond (s-p_z) surface state. Unoccupied subsurface states with energy of 0.36 eV, degenerate at Γ, are formed by the subsurface B layer, 70% of the state being concentrated in the layer. At energy of about -12.3 eV there also exists a subsurface resonance state generated by the B layers. In Fig. 5 we show the calculated surface layer DOS for both the B-t and Mg-t surfaces and compare them with the corresponding central layer DOS. In the B-t surface the surface DOS at E_F, which also includes the vacuum region, is higher by 55% than the central B layer DOS. In the Mg-t surface the surface DOS at E_F is higher than the central Mg layer DOS by a factor of 2. Both these results favor a higher surface critical temperature Tc^s compared to that in the bulk. Less is known about phonons on the MgB2(0001) surface. There normally exist surface phonon modes on metal surfaces with slightly smaller frequencies compared to those in the bulk (33). In bulk MgB2 the in-plane boron mode E_2g is responsible for the strong electron-phonon interaction (13,25). Because of its in-plane character one can expect that the vibrational frequencies and atomic displacements of this mode in the surface or subsurface boron layer will be similar to those in the bulk. Therefore one can expect a very similar or even higher Tc^s compared to Tc in bulk specimens. Image states fall in a group of surface states which are linked to the vacuum level and located relatively far from the surface. The calculated work function, which fixes the vacuum level relative to E_F, was obtained as 6.1 eV for the B-t surface and 4.2 eV for the Mg-t one. Similar to simple and noble metal surfaces (34), the wave function maximum of the n = 1 image state on MgB2 is located at about 6 a.u. beyond the surface atomic layer for both surfaces. In Figs. 1(a) and 1(b) we show the calculated n = 1,2 resonance image states. Nothing is known about the image plane position on MgB2 and we varied z_im for both terminations within the 2.0-3.5 a.u. interval beyond the surface layer. This variation leads to E_1 = -0.9 ± 0.15 eV and E_2 = -0.25 ± 0.05 eV for the B-t surface as well as to E_1 = -1.1 ± 0.15 eV and E_2 = -0.30 ± 0.05 eV for the Mg-t surface, the error bar including the energy dependence on the z_im position. The energies obtained are rather similar to those for the n = 1,2 resonance image states on simple metal surfaces (34).
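To illustrate the vacuum-tail construction described above, the short sketch below evaluates V(z) = {exp[-λ(z - z_im)] - 1}/[4(z - z_im)] (Hartree atomic units) for a single (x, y) point, fixing the damping parameter λ by continuity with the LDA potential at z = z_im; since V(z) tends to -λ/4 as z approaches z_im, continuity gives λ = -4 V_LDA(z_im). The numerical values of z_im and of the LDA potential used here are arbitrary placeholders, not numbers from the calculation.

```python
import numpy as np

# Model vacuum tail of the one-electron potential (Hartree atomic units),
# matched continuously to the LDA potential at the image-plane position z_im.
z_im = 2.5               # image-plane position beyond the surface layer (a.u.), assumed
V_lda_at_zim = -0.20     # LDA potential value at z = z_im (Hartree), placeholder

# Continuity: lim_{z -> z_im} V(z) = -lam / 4  =>  lam = -4 * V_lda(z_im)
lam = -4.0 * V_lda_at_zim

def V_tail(z):
    """V(z) = {exp[-lam*(z - z_im)] - 1} / [4*(z - z_im)] for z >= z_im."""
    dz = np.asarray(z, dtype=float) - z_im
    dz_safe = np.where(np.abs(dz) < 1e-12, 1.0, dz)   # avoid 0/0 at the matching point
    v = (np.exp(-lam * dz_safe) - 1.0) / (4.0 * dz_safe)
    return np.where(np.abs(dz) < 1e-12, -lam / 4.0, v)

z = np.linspace(z_im, z_im + 40.0, 9)
for zi, vi in zip(z, V_tail(z)):
    # far from the surface the tail approaches the classical image form -1/[4(z - z_im)]
    print(f"z - z_im = {zi - z_im:5.1f} a.u.   V = {vi: .4f} Ha")
```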
The resonance image states are mostly degenerate with magnesium bulk states. The interaction between the image states and the Mg bulk states results in a different reflectivity of B and Mg layers and a different behavior of the image state wave functions in the bulk. The amplitude of these wave functions is significantly larger in magnesium layers than in boron ones. This behavior of image states is specific for MgB2 due to its peculiar bulk electronic structure and was not found for simple and noble metals (34). It is known that for the relaxation of close-packed simple metal surfaces (35) the contraction or expansion of the first interlayer spacing is ≤6%. We have inspected the dependence of the surface electronic structure on relaxation by computing with slabs having the first interlayer spacing contracted and expanded by 6% for both terminations. We have found that these relaxations lead to a change of the surface state energies within 0.1 eV and to small changes of the surface DOS at E_F. The change of the n = 1,2 image state energies is significantly smaller than the error bar. In conclusion, we have performed self-consistent pseudopotential calculations of the surface electronic structure for the B-t and Mg-t surfaces of MgB2. We have found a variety of surface and subsurface states as well as two resonance image states on both surfaces, including an unoccupied quantum-well state of p_x,y symmetry on the B-t surface. Due to the very clear surface character of these states, the MgB2(0001) surfaces provide a good opportunity to test the theoretical results by measuring the surface electronic structure by different spectroscopies such as photoemission, including inverse and time-resolved two-photon processes, and scanning tunneling spectroscopy. The higher surface layer DOS at E_F favours a higher critical temperature compared to that in the bulk. This is inconsistent with recent STS experiments which have shown an opposite trend [5-9]. We attribute this discrepancy to contamination and disorder on polycrystalline sample surfaces. We thank N.H. March for fruitful discussions. Partial support by the Basque Country University, Basque Hezkuntza Saila, and Iberdrola is acknowledged.
2019-04-14T01:57:39.405Z
2001-05-31T00:00:00.000
{ "year": 2001, "sha1": "fb9725196092a7fb5fd521fec6b5c1e25444fa17", "oa_license": "CCBY", "oa_url": "https://digital.csic.es/bitstream/10261/226083/1/surfsurf.pdf", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "e2e451983aa76b8bc001150f39c07270536fe331", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
255337594
pes2o/s2orc
v3-fos-license
A Novel Ferroptosis Inhibitor UAMC-3203, a Potential Treatment for Corneal Epithelial Wound Corneal wound, associated with pain, impaired vision, and even blindness, is the most common ocular injury. In this study, we investigated the effect of a novel ferroptosis inhibitor, UAMC-3203 (10 nM–50 µM), in corneal epithelial wound healing in vitro in human corneal epithelial (HCE) cells and ex vivo using alkali-induced corneal wounded mice eye model. We evaluated in vivo acute tolerability of the compound by visual inspection, optical coherence tomography (OCT), and stereomicroscope imaging in rats after its application (100 µM drug solution in phosphate buffer pH 7.4) twice a day for 5 days. In addition, we studied the partitioning of UAMC-3203 in corneal epithelium and corneal stroma using excised porcine cornea. Our study demonstrated that UAMC-3203 had a positive corneal epithelial wound healing effect at the optimal concentration of 10 nM (IC50 value for ferroptosis) in vitro and at 10 µM in the ex vivo study. UAMC-3203 solution (100 µM) was well tolerated after topical administration with no signs of toxicity and inflammation in rats. Ex-vivo distribution study revealed significantly higher concentration (~12–38-fold) and partition coefficient (Kp) (~52 times) in corneal epithelium than corneal stroma. The UAMC-3203 solution (100 µM) was stable for up to 30 days at 4 °C, 37 °C, and room temperature. Overall, UAMC-3203 provides a new prospect for safe and effective therapy for corneal wounds. Introduction The cornea is an avascular, transparent tissue covering the ocular surface. It is highly susceptible to damage by the external environment, such as allergic conjunctivitis due to allergens, injuries, and oxidative stress caused by chemical and thermal burns, thereby further leading to dry eye disease and optic nerve neuropathy [1][2][3]. Moreover, these damages may lead to a corneal wound, including impaired corneal nerves and nociceptors, that is characterized by impaired re-epithelialization of the corneal epithelium. The corneal wound is associated with intense pain, discomfort, and disability that can drastically affect visual function [4]. The corneal-wound-induced changes in corneal shape and structure may lead to corneal scarring resulting even in corneal blindness [5]. Several therapies, including conventional artificial tears and ointments for lubrication, prophylactic antibiotics, pressure patching, therapeutic contact lenses, amniotic membrane transplantation [6], topical growth factors [7,8], and human serum-derived and plasma-derived therapies [9,10], are used for the treatment of corneal wounds [11,12]. However, such therapies may provide only symptomatic relief and delayed healing [13][14][15]. Additionally, serum and plasma-derived therapeutics are cost intensive, time-consuming, and not accepted in several countries due to a lack of prospective randomized trials [16]. Therefore, safe, and effective drugs for corneal wound treatment are needed. Ferroptosis is an iron-dependent form of regulated cell death that is mainly characterized by the accumulation of lipid reactive oxygen species (ROS) [17]. Ferroptosis has been studied in association with ischemia-reperfusion injury, kidney injury, cardiac diseases, and neurodegenerative diseases [18,19]. Its role in ocular diseases, such as corneal epithelial disease [20], dry eye [21], retinal pigment epithelial diseases [22], and glaucoma [23], has been recently investigated [24]. 
The association of ferroptosis with alkali corneal wound healing has also been studied. In the alkali burn-induced corneal injury mouse model, accumulation of ROS resulted in elevated levels of the lipid peroxide 4-hydroxynonenal (4-HNE), a lipid peroxidation by-product that can alter cell membrane permeability. Furthermore, decreased expression of glutathione peroxidase 4 (GPX4), an enzyme catalyzing the reduction of lipid peroxides, was seen [25]. Another study showed delayed wound healing in GPX4 +/− mouse models after n-heptanol-induced corneal wounds, indicating the vital role of GPX4 [20]. Elevated ROS and lipid peroxidation with subsequent corneal ferroptosis has also been associated with exposure to heated tobacco products [26], aging [27], and ocular disease [28]. In the present study, we investigated the efficacy of UAMC-3203 as a corneal epithelial wound healing compound in mouse eyes with alkali-induced corneal wounds. We also studied the mechanism of the wound healing effects (migration or proliferation) with in vitro scratch assays in human corneal epithelial (HCE) cells. We evaluated the acute tolerability in rats in vivo. Furthermore, the physicochemical properties of UAMC-3203 and its corneal distribution into excised porcine cornea were investigated. Synthesis, Solubility, and Chemical Stability Compound UAMC-3203 was synthesized in the laboratory of Medicinal Chemistry at the University of Antwerp, as reported by Devisscher L. et al. [30]. Solubility of UAMC-3203 was determined in 0.1 M citrate buffer (pH 5 and pH 6) and phosphate-buffered saline (PBS, Gibco, Life Technologies Limited, Paisley, UK) at pH 7.4. Excess of UAMC-3203 (10-20 mg) was added to 500 µL of buffer in glass vials that were kept at room temperature and mixed (150 rpm) for 72 h. The pH of the solutions was measured daily with a calibrated pH meter (Orion Research Incorporated, Boston, MA, USA) and adjusted if needed (with 0.1 N sodium hydroxide or 0.1 N hydrochloric acid). After 72 h, samples were withdrawn and centrifuged at 13,000 rpm for 15 min. The supernatant was collected and analyzed by high-pressure liquid chromatography (HPLC) for UAMC-3203 concentration. The chemical stability of 100 µM UAMC-3203 in PBS (pH 7.4) was investigated for 30 days as described previously [32]. Briefly, four batches with triplicates were stored at 4, 25, or 37 °C with light protection and at 25 °C without light protection. Separate solutions were used for pH-dependent stability studies. Samples were collected, and pH was measured at various times. The samples were stored at −20 °C until HPLC analysis. Drug Distribution in Porcine Cornea Ex Vivo Fresh porcine (a crossbreed between Matias and Yorkside) eyes were collected in the laboratory animal center at the University of Eastern Finland. The eyes were stored overnight in a keratinocyte serum-free medium (Gibco, Life Technologies Corporation, Grand Island, NY, USA) at +4 °C. The cornea was excised using an incision along the limbus, as described earlier [33]. The isolated corneas were rinsed with PBS and mounted in vertical Franz diffusion cells (PermeGear, Inc., Hellertown, PA, USA). Small magnetic stir bars were added to each receiver chamber, and 5 mL of 35 °C BSS Plus supplemented with 10 mM of 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES; Lonza Bioscience, Walkersville, MD, USA) was added to the receiver compartment of each Franz cell. The experiment was initiated by adding 50 µM UAMC-3203 in 10 mM HEPES-BSS Plus solution to the donor compartment.
Samples of 500 µL from the donor compartments and the corneas were collected at the end of the experiments (at 5, 30, 60, and 120 min). The cornea samples were rinsed with PBS, the epithelial layer was scraped off from the corneas, and the remaining stroma-endothelium samples were cut into pieces. The experiments were carried out in triplicate. The samples were stored at −80 °C until liquid chromatography-tandem mass spectrometry (LC-MS/MS) analyses were performed. The values of apparent partition coefficients (K_p) were calculated using the drug concentrations in the tissue samples (at 120 min) and in the donor compartment buffer at time zero (ng/mL). In Vitro Scratch Assay The in vitro scratch assay was performed in confluent HCE monolayers as previously reported [34]. HCE cells were seeded on 24-well plates at 1 × 10^5 cells/well and incubated for 24 h. The monolayer was scratched with sterile 10 µL pipette tips perpendicular to the lines to create a consistent cell-free area. The cells were washed three times with pre-warmed PBS to remove detached cells. The cells were then exposed to UAMC-3203 at different concentrations (10 nM, 1 µM, 10 µM, 50 µM) in serum-free DMEM/F12 (Dulbecco's Modified Eagle Medium: F12) medium supplemented with 1% penicillin-streptomycin solution (1000 U/mL penicillin, 0.1 mg/mL streptomycin; Corning, Ref 30-002-CI, Mediatech Inc., Discovery Boulevard, Manassas, VA, USA). Serum-free medium was used as a negative control, and 1.5% fetal bovine serum (FBS 10270, Gibco by Life Technologies, Carlsbad, CA, USA) supplemented medium was used as a positive control. The well plates were then transferred to Cell-IQ (CM Technologies Limited, Tampere, Finland) and maintained in standard tissue culture conditions (37 °C, 5% CO2 atmosphere). Images were taken immediately after the scratch (less than 1 h) and at 6, 12, 24, 30, 42, 48, and 72 h and analyzed with the Cell-IQ analyzer. Three independent experiments were performed, using two to three wells for each stimulating condition. Cell migration (A%) was calculated as follows: A% = (A_0 − A_t)/A_0 × 100 (1), where A_0 and A_t represent the wound areas at 0 h and at each time point, respectively. In Vitro Cytotoxicity The cytotoxicity of UAMC-3203 was assessed by MTT assay. The HCE cells were seeded on a 96-well plate at a density of 100,000 cells/well in 200 µL of supplemented growth medium. The next day the cells were washed with PBS (pH 7.2). The cells were then exposed to UAMC-3203 at concentrations of 10 nM, 1 µM, 10 µM, and 50 µM for 3 h. As a control, cells not exposed to UAMC-3203 and cultured in serum-free medium were used. The cells were then washed twice with PBS and incubated for 2 h with 100 µL of 10% thiazolyl blue tetrazolium bromide (MTT, Sigma-Aldrich, St. Louis, MO, USA) in serum-free medium. An amount of 100 µL of 20% (w/v) sodium dodecyl sulfate (SDS) in N,N-dimethylformamide (DMF) lysis buffer was added to each well and incubated overnight. The following day, cell viability was evaluated by measuring the absorbance at 570 nm with a Victor 2 multilabel plate reader (PerkinElmer, Wallac, St. Paul, MN, USA). The cell viability % was calculated as follows: % of cell viability = [(Abs exposed cells − Abs blank)/(Abs non-exposed cells − Abs blank)] × 100 (2), where Abs exposed cells = absorbance of cells exposed to UAMC-3203; Abs blank = absorbance of well plates without cells; and Abs non-exposed cells = absorbance of cells exposed to serum-free medium (i.e., control).
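The three simple quantities defined in this section, the apparent partition coefficient K_p, the scratch-closure percentage of Equation (1), and the MTT viability percentage of Equation (2), can be computed directly from the raw readings. The short sketch below shows the bookkeeping on hypothetical numbers; all input values are placeholders, not study data.

```python
def partition_coefficient(c_tissue_end: float, c_donor_t0: float) -> float:
    """Apparent K_p = C_tissue(end of experiment) / C_donor(t = 0), same units."""
    return c_tissue_end / c_donor_t0

def migration_percent(area_t0: float, area_t: float) -> float:
    """Scratch closure, Eq. (1): A% = (A_0 - A_t) / A_0 * 100."""
    return (area_t0 - area_t) / area_t0 * 100.0

def viability_percent(abs_exposed: float, abs_control: float, abs_blank: float) -> float:
    """MTT viability, Eq. (2): (Abs_exposed - Abs_blank) / (Abs_control - Abs_blank) * 100."""
    return (abs_exposed - abs_blank) / (abs_control - abs_blank) * 100.0

# Hypothetical corneal distribution data (same units), 120 min tissue vs. donor at t = 0
print(f"K_p (epithelium/donor)         = {partition_coefficient(97.5, 50.0):.2f}")
print(f"K_p (stroma-endothelium/donor) = {partition_coefficient(2.0, 50.0):.2f}")

# Hypothetical wound areas (arbitrary units) from image analysis of one well
areas = {0: 1.00, 24: 0.62, 48: 0.35, 72: 0.15}
for t, a in areas.items():
    print(f"{t:>2} h: migration = {migration_percent(areas[0], a):5.1f} %")

# Hypothetical MTT absorbances at 570 nm
print(f"viability = {viability_percent(abs_exposed=0.62, abs_control=0.78, abs_blank=0.05):.1f} %")
```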
Corneal Epithelial Wound Healing in Ex Vivo Mouse Eyes For the study, C57BL/6 mice (Laboratory Animal Centre, the University of Eastern Finland) were euthanized, and the eyes were excised carefully with sterile forceps and micro-scissors to avoid any damage to the cornea. The eyes were transported in ice-cold Dulbecco's modified Eagle's Medium/Ham's F12 (DMEM/F-12, Gibco, Life Technologies, Carlsbad, CA, USA). The eyes were then adhered to a Petri dish with tissue adhesive (3M Vetbond, St. Paul, MN, USA). The corneal epithelial wound was produced by placing an alkali-soaked (0.5% sodium hydroxide) Whatman filter paper of 2 mm diameter on the corneal surface for 2 min, as reported earlier [35]. After removal of the filter paper, the remaining epithelium on the alkali-exposed surface was carefully removed with a micro-scalpel, and the eyes were rinsed with PBS. Pre-warmed UAMC-3203 (10 mL of 10 nM, 1 µM, 10 µM, and 50 µM solutions) in serum-free medium supplemented with 1% penicillin-streptomycin solution (1000 U/mL penicillin, 0.1 mg/mL streptomycin; Corning, Ref 30-002-CI, Mediatech Inc., Discovery Boulevard, Manassas, VA, USA) was added to the Petri dishes so that the eyes were submerged. Serum-free medium without UAMC-3203 was used as the negative control, and medium with 10% FBS was used as the positive control. The samples were cultivated under standard tissue culture conditions (37 °C, 5% CO2 atmosphere, and 95% relative humidity) for 48 h. In order to measure the wound area, images were taken with a Zeiss Axio Scope A1 microscope (Carl Zeiss Microscopy GmbH, Oberkochen, Germany) at different time points (initial (less than 2 h after wounding), 6, 24, 30, and 48 h). For imaging, the medium was removed from the dish, and 0.1% fluorescein (2 µL) was added to the corneal surface, followed by PBS (10 µL) to rinse excess fluorescein. A clean tissue was then used to soak up excess fluorescein and PBS, and the medium was added back to the dish after imaging. The medium was changed after 24 h. The wound area was quantified with Fiji software [36]. In Vivo Acute Tolerability Study in Rats Three Lister-hooded rats (Envigo Laboratories, Melderslo, Limburg, The Netherlands) were used in the study. The animals were maintained under standard laboratory conditions of 12 h dark-light cycles with food and water available ad libitum. Animal studies met the requirements set by EU directive 2010/63/EU, and all animal experiments were approved by the national Project Authorization Board (ESAVI/27769/2020). The in vivo tolerability of 100 µM UAMC-3203 solution in PBS was determined after topical application by visual inspection, optical coherence tomography (OCT) (Phoenix MICRON™ MICRON IV/OCT, Pleasanton, CA, USA), and microscope imaging (Leica stereomicroscope with fluorescence; Immuno Diagnostic Ltd., LMS, Espoo, Finland). During the study, 10 µL of PBS (pH 7.4) was used as the control, and 10 µL of UAMC-3203 solution was applied to the lower fornix of the left and right eyes, respectively, twice a day (8 a.m. and 3 p.m.) for five consecutive days. The animals were allowed to move their heads freely during the visual inspection, while before OCT and stereomicroscope imaging the rats were anesthetized with a medetomidine (0.4 mg kg−1) and ketamine (60 mg kg−1) mixture. Baseline measurements for OCT and microscope imaging were taken two days prior to the study. Visual inspection was performed after each treatment, while OCT and stereomicroscope imaging were performed on the second and fifth days.
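In the ex vivo model above, the fluorescein-stained wound area was quantified with Fiji. Purely as an illustration of how such a measurement could be automated, the Python sketch below uses scikit-image to threshold the fluorescent (green) channel and convert the largest stained region into an area; the Otsu threshold, minimum object size, pixel calibration, and file name are assumptions for the example, not settings used in the study.

```python
from skimage import io, filters, morphology, measure

def wound_area_mm2(image_path, mm_per_pixel):
    """Estimate the fluorescein-stained wound area (mm^2) from a corneal image.

    A rough, hypothetical stand-in for the manual Fiji measurement: threshold
    the green channel, remove small specks, and report the area of the largest
    connected bright region.
    """
    img = io.imread(image_path)
    green = img[..., 1].astype(float) if img.ndim == 3 else img.astype(float)
    mask = green > filters.threshold_otsu(green)        # fluorescein = bright pixels
    mask = morphology.remove_small_objects(mask, min_size=200)
    labels = measure.label(mask)
    if labels.max() == 0:
        return 0.0
    largest = max(measure.regionprops(labels), key=lambda r: r.area)
    return largest.area * mm_per_pixel ** 2

# Example call with a hypothetical file name and pixel calibration:
# area = wound_area_mm2("cornea_24h.tif", mm_per_pixel=0.005)
```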
The follow-up tests with OCT and stereomicroscope imaging were performed three days after the completion of the study. Visual inspection-The eyes were checked visually for any symptoms of redness and eyelid swelling before every treatment. Then, after the application of PBS and UAMC-3203 solution, the number of times the rats blinked their eyes and moved their head within one minute were recorded. OCT-The cornea was observed for any changes in its thickness and any signs of corneal edema during and after the treatment. Stereomicroscope imaging-The cornea and sclera were evaluated for any signs of inflammation or neovascularization during and after the treatment. Solubility and Chemical Stability Studies The water solubility of UAMC-3203 was about 3.5 times higher at pH 6.0 (127.9 ± 16.1 µM) and 7.4 (127.3 ± 17.3 µM) than at pH 5.0 (36.7 ± 5.7 µM). UAMC-3203 shows relatively high stability for 30 days in PBS pH 7.4 at various temperatures and light exposure conditions (about 90% of initial concentration at day 30). During the study period of 30 days, the pH of the UAMC 3203 solution slightly increased (from 7.4 to 7.5-7.6) in all conditions ( Figure S1). Drug Distribution in Porcine Cornea Ex Vivo We studied the distribution of the UAMC-3203 (50 µM) in the cornea by measuring the concentration in corneal epithelial cells and in stroma-endothelium after compound exposure to the epithelial side of the cornea. The drug is distributed rapidly to the cornea, and the levels in the epithelium were 1-2 orders of magnitude higher than in the stromaendothelium (Figure 1). The K p value of epithelium/donor (1.95 ± 0.45) was ≈52 times higher than the K p value of stroma/epithelium (0.04 ± 0.02). Figure 1. The concentrations of UAMC-3203 (50 µM) in corneal epithelium and stroma with endothelium at various time points. Statistical analysis was performed by t-test: ** p < 0.01 and *** p < 0.001 for concentration µmol/L epithelium (compared to that at 5 min), and # p < 0.05 for concentration µmol/L and endothelium (compared to that at 5 min). The results are expressed as mean ± standard deviation, n = 5-6. Scratch Wound-Healing Assay In Vitro In vitro scratch assay was used to assess the effect of UAMC-3203 in cell migration. The cell migration % was calculated by measuring cell coverage in the scratch (cell-free area) at defined time points. The results showed higher scratch closure in the presence of UAMC-3203 at concentrations 10 nM and 1 µM than at higher concentrations (10 µM and 50 µM) (Figure 2). At 72 h, the HCE cell migration (%) was significantly higher in the presence of UAMC-3203 at 10 nM (84.9 ± 12.5%) (p < 0.001) and 1 µM (76.6 ± 15.2%) (p = 0.003) compared to the negative control (55.6 ± 17.6%). Whereas at 10 µM concentration, there was no notable difference at any time point. The highest drug concentration (50 µM) did not cause any changes in the scratch area. In Vitro Cytotoxicity Evaluation We evaluated the effect of UAMC-3203 on HCE cell viability using an MTT assay. UAMC-3203 exposure of 3 h showed concentration-dependent cytotoxicity, as shown in Figure 3. The HCE cell viability was significantly reduced to 75 ± 6.7% and 39.2 ± 5.6% at 10 µM and 50 µM drug concentrations, respectively. Moreover, lower concentrations of 10 nM and 1 µM did not cause any toxicity. Moreover, the optimal concentration for wound healing was 10 µM (significant difference in wound area at 24-48 h as compared to the negative control (p ≤ 0.005)). 
With the positive control, a significant difference compared to the negative control was obtained only at 30 h (p = 0.01), and it showed an effect equal to that of 10 µM UAMC-3203 at 48 h. Compared to the negative control, UAMC-3203 at 10 nM had a significantly lower wound area at 48 h (p = 0.03), whereas at 1 µM there was no notable difference at any time point. The highest drug concentration (50 µM) did not cause any changes in the corneal epithelial wound area. Figure 4 caption (beginning truncated): . . . serum-free medium (negative control), and UAMC-3203 at concentrations of 10 nM, 1 µM, 10 µM, and 50 µM at five time points during wound healing (decrease in wound area). Statistical analysis was performed by two-way ANOVA compared to the negative control with Bonferroni t-test: * p < 0.05, ** p < 0.01, and *** p < 0.001 (n = 8-9). In Vivo Acute Tolerability Study in Rats We studied the ocular safety of UAMC-3203 solution (in PBS) at a concentration of 100 µM. The rats were treated with topical administration (10 µL) of UAMC-3203 or PBS (used as the control) twice a day for five consecutive days. The eyes were observed for clinical signs of toxicity and inflammation by visual inspection, OCT, and stereomicroscope imaging on day 2, day 5, and post-treatment on day 8, as shown in Figure 5A. During the study, the eyes of both groups were free from any signs of irritation, such as redness and eyelid swelling. No appreciable difference in the number of blinks and headshakes was observed between the control and treated groups (Figure S2). No significant difference in corneal thickness was observed between PBS- and UAMC-3203-treated eyes (Figure 5B). Similarly, no signs of corneal opacity, neovascularization, or inflammation, such as redness, were observed in the cornea and sclera of treated rats (Figure S3). Discussion Previously, the involvement of lipid-peroxidation-dependent ferroptosis was shown in alkali burn-induced corneal injury [25]. Moreover, other studies have shown links between corneal wounds and ferroptosis [21,25]. Therefore, we investigated UAMC-3203, a new ferroptosis inhibitor, as a potential treatment for corneal epithelial wounds. In order to gain more insight into corneal wound healing and to evaluate the potential therapeutic or toxic effects of compounds, various ex vivo animal models have been developed with wounds induced by either chemical burn [35] or physical injury [37-40]. Ex vivo models allow investigations of corneal wound healing using experiments with controlled drug concentrations and exposure times. These models are in line with the 3R principle of laboratory animal use, providing valuable information on epithelial repair with a lower number of animals, thereby augmenting the design of in vivo experiments in drug development [41]. The models may involve submerged tissue cultures [35,39], tissue cultures at the air-liquid interface [37,38,40], free-floating and agar-mounted corneal disks [39], or whole eyes fixed to the culture plates [35]. Our model is modified from a combined in vivo/in vitro model that allowed initial partial healing for 6 h in vivo before excision and adhesion of the bulbi in well plates afterward [35]. In our model, we adhered the whole eye to the dish and induced the wound by alkali burn, as a previous study [25] has shown that such a severe wound results in ferroptosis in vivo. It is a simple model that enables the evaluation of the specific effects of compounds on a corneal epithelial wound. In our study, we saw fast and effective wound healing (>85%) at 10 nM-10 µM of UAMC-3203.
Interestingly, we observed a positive wound-healing effect at the in vitro IC 50 value for ferroptosis inhibition (10 nM) of UAMC-3203 [30]. Healing was also shown in in vitro scratch model at 10 nM and 1 µM of UAMC-3203, while the effects were less beneficial at higher concentrations. At 50 µM, UAMC-3203 was probably toxic, and no wound healing was seen during 48 h. These results were consistent with our in vitro scratch assay and MTT assay. Moreover, when tested in rats in vivo as a twice-daily topical application for 5 days, UAMC-3203 at 100 µM concentration did not result in any signs of toxicity (visual inspection, OCT, stereomicroscope imaging). High-resolution in vivo imaging with OCT [42,43] did not show differences in corneal thickness between the control group and UAMC-3203 treated animals. The stereomicroscope images did not show any corneal opacity or neovascularization, which is the most common nonspecific response to corneal wounds, inflammation, and hypoxia [44]. The differences in toxicity of UAMC-3203 may be attributed to different experimental setups, as in the ex vivo model, the eyes are submerged in the solution for 48 h, whereas the instilled eyedrops are eliminated from the ocular surface within 5 min [45], resulting in a rapid decrease in drug concentration on the cornea [46,47]. Based on these results, the corneal epithelial wound healing effects of the compound could be investigated in follow-up in vivo studies along with its role in corneal scarring. Corneal wound healing involves four continuous yet distinct processes. The initial latent phase (4-6 h) without visible changes in wound size ( Figure 4A) is characterized by an increased intracellular synthesis of proteins. During cell migration, the movement of cells from the limbus toward the central cornea is observed. Then, in the cell proliferation step, mitosis and differentiation of cells occur, followed by the final step involving cell attachment to the basal cell layers. As shown in our in vitro study, UAMC-3203 seems to accelerate corneal epithelial wound healing via cell migration. In our ex vivo drug distribution study in the porcine cornea, we showed a preferential distribution of UAMC-3203 to the corneal epithelium. At 5 min, we observed rapid distribution of UAMC-3203 into the cornea at levels that were three orders of magnitude higher (≈20 µM) than the IC 50 value of UAMC-3203 (10 nM). Lipophilic epithelium concentrations of the drug were ≈28 times higher than in the hydrophilic stroma at 120 min. Such difference in the distribution between corneal layers may be explained by the lipophilic (Log D 7.4 , 0.95) nature of the compound. Furthermore, a 52-fold higher K p epithelium/donor value was seen compared to K p stroma/epithelium, supporting the preferential distribution of UAMC-3203 corneal epithelium. Conclusions Corneal epithelial wound healing effects of new ferroptosis inhibitor UAMC-3203 were investigated. In this study, we demonstrated that UAMC-3203 is involved in the wound healing response in vitro and ex vivo. It can be an effective drug for corneal epithelial wound healing. Mechanisms of wound healing effects and long-term efficacy of UAMC-3203 should be further explored to determine the potential for further translation. Supplementary Materials: The following supporting information can be downloaded at: https: //www.mdpi.com/article/10.3390/pharmaceutics15010118/s1, Figure S1. Stability study of UAMC-3203 (100 µM) (A) concentration %, (B) pH as a function of time. 
The results are expressed as mean ± standard deviation (SD), n = 3. Figure S2. Comparison of the number of (A) eye blinks and (B) head shakes in control and UAMC-3203-treated eyes during the safety studies. The results are expressed as mean ± SD, n = 3. Figure S3. Representative images of the cornea and sclera observed during the in vivo safety study of the (A) control and (B) treated groups at (i) baseline, (ii) day 2, (iii) day 5, and (iv) post study (day 8). Table S1. MS/MS parameters for UAMC-3203 and diclofenac. Institutional Review Board Statement: The study was conducted in accordance with EU directive 2010/63/EU, as implemented by the current national legislation (law 497/2013 and decree 564/2013). All animal experiments were approved by the national Project Authorization Board. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
2023-01-01T16:02:14.136Z
2022-12-29T00:00:00.000
{ "year": 2022, "sha1": "b09b467d2c9495c7e7f9368b1d5acea8d942959d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1999-4923/15/1/118/pdf?version=1672307687", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2e2af5a97de8965fc85a0e46c3d6897bf03fc874", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
16357338
pes2o/s2orc
v3-fos-license
Human Leukocytes Kill Brugia malayi Microfilariae Independently of DNA-Based Extracellular Trap Release Background Wuchereria bancrofti, Brugia malayi and Brugia timori infect over 100 million people worldwide and are the causative agents of lymphatic filariasis. Some parasite carriers are amicrofilaremic whilst others facilitate mosquito-based disease transmission through blood-circulating microfilariae (Mf). Recent findings, obtained largely from animal model systems, suggest that polymorphonuclear leukocytes (PMNs) contribute to parasitic nematode-directed type 2 immune responses. When exposed to certain pathogens PMNs release extracellular traps (NETs) in the form of chromatin loaded with various antimicrobial molecules and proteases. Principal findings In vitro, PMNs expel large amounts of NETs that capture but do not kill B. malayi Mf. NET morphology was confirmed by fluorescence imaging of worm-NET aggregates labelled with DAPI and antibodies to human neutrophil elastase, myeloperoxidase and citrullinated histone H4. A fluorescent, extracellular DNA release assay was used to quantify and observe Mf induced NETosis over time. Blinded video analyses of PMN-to-worm attachment and worm survival during Mf-leukocyte co-culture demonstrated that DNase treatment eliminates PMN attachment in the absence of serum, autologous serum bolsters both PMN attachment and PMN plus peripheral blood mononuclear cell (PBMC) mediated Mf killing, and serum heat inactivation inhibits both PMN attachment and Mf killing. Despite the effects of heat inactivation, the complement inhibitor compstatin did not impede Mf killing and had little effect on PMN attachment. Both human PMNs and monocytes, but not lymphocytes, are able to kill B. malayi Mf in vitro and NETosis does not significantly contribute to this killing. Leukocytes derived from presumably parasite-naïve U.S. resident donors vary in their ability to kill Mf in vitro, which may reflect the pathological heterogeneity associated with filarial parasitic infections. Conclusions/Significance Human innate immune cells are able to recognize, attach to and kill B. malayi microfilariae in an in vitro system. This suggests that, in vivo, the parasites can evade this ability, or that only some human hosts support an infection with circulating Mf. Introduction Throughout history, parasitic nematode infections have had a major impact on human development, especially of the poorest and most disadvantaged populations. Human diseases associated with these infections include lymphatic filariasis (LF) and onchocerciasis. A hallmark of these infections is that they are very long-lasting, with the production of very large numbers of microfilariae (Mf) that are able to survive within the host. In order to do this, parasitic nematodes have evolved the ability to modulate and suppress the host immune response via the secretion of a cocktail of proteins, micro RNAs and small molecules [1][2][3][4]. The current strategy for the elimination of LF and onchocerciasis as public health problems centers on the prevention of transmission by eliminating the Mf from infected hosts, thus preventing any new infections of the insect vectors and hence, more human hosts [5][6][7]. At present, this is achieved by mass administration of the effective anthelmintic drugs, albendazole, ivermectin and diethylcarbamizine (DEC). 
Studies on DEC have indicated that the drug interacts with the host immune system in order to be effective [8][9][10][11][12][13] and some experiments with animal parasites have suggested that ivermectin has an impact on the ability of polymorphonuclear leukocytes (PMNs) and monocytes to attach to and kill Mf [14,15]. These results imply that host granulocytes and monocytes have the ability to recognize Mf and possibly to kill them. The innate immune response to parasitic nematodes involves many different cell populations, which include granulocytes such as eosinophils, mast cells and PMNs, as well as monocytes. PMNs are critical for controlling a large variety of pathogens including nematodes. They have previously been implicated in the killing of nematode larvae, including Onchocerca volvulus Mf and L3 [16,17], and have been reported to be a key component of the host innate immune response to nematode infections [18]. For example, increased numbers of PMNs in the skin and blood of infected mice reduced the success of invading L3 of the filarial nematode, Litomosoides sigmodontis [19]. A characteristic feature of PMN responses is the production of DNA-containing neutrophil extracellular traps (NETs) [20]. These structures are formed by a unique type of cell death, NETosis, and are characterized by large, extracellular concentrations of expelled cytosolic, granular and nuclear material including DNA, histones, neutrophil elastase and myeloperoxidase [21]. NETosis is frequently, but not always, mediated by NADPH oxidase [21][22]. NET formation is induced by parasitic nematodes but whether these are required for nematode killing is uncertain and may depend on the parasite under study. Despite being trapped by NETs in vitro, the L3 larvae of both Strongyloides stercoralis and Haemonchus contortus were not killed by NETs alone [23][24] although treatment with DNase to destroy NETs did reduce PMN plus macrophage mediated killing of S. stercoralis L3 [23]. In several studies PMNs have been shown to co-operate with monocytes or macrophages in immunity against parasites, including helminths [18,[24][25][26][27]. We have previously shown that PMNs and peripheral blood mononuclear cells (PBMCs) from uninfected dogs attach to Dirofilaria immitis Mf in vitro and that this attachment was increased by the addition of ivermectin [14]. We have extended these studies to the human parasite Brugia malayi and investigated the ability of leukocytes purified from presumably parasite-naïve North American human donors to recognize and kill B. malayi Mf isolated from the peritoneal cavity of infected Mongolian gerbils, Meriones unguiculatus. B. malayi is the causative agent of a minority (roughly 10%) of cases of LF, however it is the only filarial nematode of humans that can be maintained in a convenient laboratory animal host. Our results provide evidence that PMNs and monocytes of many, but not all, human donors were able to both adhere to and kill B. malayi Mf. Human PMNs release NETs that entangle B. malayi Mf in vitro Although NET formation was initially characterized in response to bacteria and protozoa [20,27], it has also been reported for larger multi-cellular pathogens including fungi and three species of parasitic nematodes [19,24,25]. Our initial experiments co-culturing B. malayi Mf with human neutrophils in the presence of the membrane-impermeable DNA-binding dye SYTOX Orange resulted in Mf becoming tethered in a manner consistent with entanglement in NETs (S1 Video). 
These observations prompted us to confirm that the structures generated possessed typical NET characteristics. Live Mf were co-cultured with human PMNs and the generation of NETs observed by confocal microscopy after staining with DAPI and antibodies to characteristic NET-associated proteins. Worm-NET aggregates stained positive for citrullinated histone H4 and the granular proteins, neutrophil elastase and myeloperoxidase (Fig 1). All the NET-associated proteins examined co-localized with extracellular DNA, confirming typical NET morphology. Mf induce the release of PMN-derived extracellular DNA To further characterize Mf-induced NETosis, we developed a confocal microscopy-based extracellular DNA release assay. Live cell fluorescence imaging of PMN and Mf interactions in the presence of SYTOX Orange allowed us to observe extracellular DNA and NET formation over time (Fig 2A-2C and S2 Video). Mean SYTOX Orange intensity values derived from these images were used to quantify total extracellular DNA release (Fig 2D-2H) in wells where PMNs were stimulated with either Mf or 25nM phorbol myristate acetate (PMA) as a positive control [28]. These data demonstrate that in the absence of serum, B. malayi Mf significantly increased the release of extracellular DNA when compared to the zero Mf controls (Fig 2D; P = 0.041). Both DNase I and the NADPH oxidase inhibitor diphenyleneiodonium (DPI) significantly reduced the mean SYTOX Orange intensities derived from Mf treated wells (Fig 2E; P<0.001 for both treatments), presumably via the enzymatic breakdown of NET structure and the inhibition of NETosis respectively. Interestingly, in the presence of 5% autologous serum, we did not detect any significant increase in extracellular DNA release within Mf treated wells compared to the zero Mf controls (Fig 2F). This may be due to the inhibitory effects of the autologous sera which impeded DNA release in Mf (P<0.001) containing wells (Fig 2G). The autologous sera also reduced the extracellular DNA release induced by PMA (S1 Fig). Heat treatment of the autologous sera had no significant effect (Fig 2G), suggesting that the complement system was not responsible for the inhibition of extracellular DNA release. We have previously shown that canine neutrophils isolated from uninfected dogs can attach to D. immitis Mf in vitro [14]. Therefore, we examined whether a similar phenomenon was observed when we incubated PMNs from uninfected humans with Mf of the human parasite, B. malayi. Video analysis of 96-well plate-based co-cultures allowed us to confirm PMN to Mf attachment in vitro and to count the number of individual Mf with at least one cell attached. After 24 hours, the addition of 5% autologous serum increased the number of worms with attached PMNs from 19.3% to 31.1% (P = 0.035). DNase I treatment virtually abolished PMN attachment in the absence of serum (0.4 ± 0.3% of Mf had !1 PMN attached at 24 hours post experimental set up (p<0.001),), suggesting that NETs were required for PMNs to attach to the worms (Fig 3A). In contrast, DPI had no significant impact on PMN attachment under these conditions (Fig 3A), suggesting that attachment took place via an NADPH oxidase-independent mechanism. Heat treatment (55˚C for 30 min) of the sera inhibited attachment, reducing it to levels less than in the zero serum controls, with only 7.9% of the Mf having at least one cell attached (P = 0.015) (Fig 3A). The effect of heat treatment suggested that heat-assay set up. These wells did not contain serum. 
D-G: Changes in mean SYTOX Orange intensity were used to monitor the release of extracellular DNA from PMNs incubated for 7 hours in vitro. In each panel, the dotted lines indicate the Standard Error of the mean fluorescence intensity. D: 25nM phorbol myristate acetate (PMA)-and Mf-induced DNA release in the absence of serum was measured as described in Materials and Methods (n = 7). E: 10μM diphenyleneiodonium (DPI) and 30μg/ml DNAse I inhibited Mf-induced DNA release in the absence of serum (n!4). F: Extracellular DNA release in Mf-and PMA-treated wells that contained 5% autologous serum (n = 6). G: 5% autologous serum and 5% autologous heat treated serum (HTS) inhibited Mf induced DNA release (n!6). . White, yellow, red, blue, and black bars represent no serum, 5% autologous serum, 5% autologous heat-treated (55˚C, 30 mins) serum, 10μM diphenyleneiodonium (DPI) and 30μg/ml DNase I treated wells respectively. Error bars represent standard error of the mean; *P<0.05, **P<0.01, ***P<0.001. B: Image of PMNs adhered to a Mf. Image taken after 24 hours of incubation in 5% autologous serum at 37˚C and 5% CO 2 . The preparation was stained with modified Wrights' stain. labile components of the autologous serum promoted PMN to worm attachment. The number of PMNs attached to individual Mf varied, and we often observed Mf that had large numbers of PMNs adhered to their surface ( Fig 3B); however, only a single cell was required to be attached in these assays for the Mf to be scored. Human leukocytes can kill B. malayi Mf in vitro The previous experiments clearly showed that human PMNs can recognize and attach to B. malayi Mf. Bonne-Année and colleagues reported that PMNs and peripheral blood mononuclear cells (PBMCs) collaborate to kill S. stercoralis larvae when incubated in 25% human serum for 48 hours [29], so we tested whether or not the formation of NETS and cell attachment we observed also resulted in Mf killing. Worm survival was monitored over 5 days in culture in the presence and absence of peripheral blood leukocytes (PMNs and PBMCs) isolated from uninfected people. Approximately 85% of Mf survived for 5 days in the absence of human leukocytes (Fig 4C). Exposure to PBMCs alone did not significantly alter Mf survival in vitro (Fig 4), but when Mf were maintained in the presence of either PMNs alone or both PMNs and PBMCs, worm survival was significantly inhibited compared to the zero cell controls (Fig 4). Interestingly, when Mf were incubated with both PMNs and PBMCs, significantly fewer worms survived to day 5 (36.9 ± 14.7%, see Fig 4C; P<0.001) when compared to the Mf exposed to PMNs alone (60.6 ± 14.7%, see Fig 4C; P<0.001). DNase I treatment of the cultures had no significant effect on Mf survival in the presence of PMNs, PBMCs or both, indicating that NET formation was not required for Mf killing, whereas heat treatment of the serum effectively blocked all leukocyte-mediated killing (Fig 4A-4C). Over the course of these experiments we noticed that the levels of both cell to worm attachment and leukocyte mediated Mf killing varied greatly between individual experiments. In an attempt to better understand this variation we re-analyzed the videos of co-culture wells that contained both PMNs and PBMCs to score leukocyte to worm attachment and compare this to Mf survival. 
These data highlight an obvious negative correlation between the levels of leukocyte attachment at one hour post experimental set up and the percentage of worms that survived to day 5 ( Fig 4D; Spearman's rho = -0.85, P<0.0001). Each experiment was carried out using cells from a single donor and this donor was different for each experiment. The isolated cells appeared to split largely into two distinct phenotypes: "Mf killers" who displayed both rapid leukocyte attachment (>90% Mf with !1 leukocyte attached at 1 hour) and near complete Mf killing (<10% of the Mf survived to day 5) and "non-killers" who displayed relatively low levels of leukocyte attachment (<42% Mf with !1 leukocyte attached at 1 hour) and little to no leukocyte mediated Mf killing compared to zero cell controls (60-80% Mf survival at 5 days post set up), though there were 2 donors whose cells had an intermediate phenotype. This explains the relatively high variation (reflected in the error bars) in attachment and killing seen between experiments. This analysis also reveals that when killing occurs the leukocytes recognize and attach to the Mf very rapidly and that if this attachment does not take place within one hour, the parasites survive quite well in these culture conditions. The complement system does not promote PMN attachment or leukocyte-mediated Mf killing Serum heat inactivation has been shown to prevent the killing of S. stercoralis L3 larvae mediated by human PMNs and PBMCs [29]. These observations led to the conclusion that complement was required for larval killing. Given the effects of serum heat treatment on PMN attachment (Fig 3) and subsequent Mf survival (Fig 4), we directly investigated the possibility that the complement system is involved in PMN to worm attachment and leukocyte-mediated Mf killing. We repeated our PMN attachment and Mf survival assays whilst blocking complement activation via the complement specific inhibitor, compstatin [30]. In these experiments we only analyzed data obtained from those experiments where substantial Mf killing was observed, since this is the phenomenon we were seeking to study. Pre-treatment of 25% serum with 100μM compstatin did not significantly affect either attachment (Fig 5A, red panels) or killing ( Fig 5B). These data suggest that the complement system does not significantly contribute to PMN plus PBMC mediated Mf killing in vitro. However, since the attachment experiments were conducted in 5% serum we also tested the effect of compstatin on attachment under these conditions. Although there was a small reduction between the control peptide (~90% attachment) and the compstatin treated samples (~65% attachment) (Fig 5A, light blue panels, P = 0.045), there was no significant difference in PMN to worm attachment between the no peptide control and compstatin treated wells. To further investigate the role of complement we also used blocking antibodies to two complement-and adhesion-associated proteins, Cd11b (complement receptor 3, CR3) and ICAM-1. Neither blocking antibody had an effect on Mf survival (Fig 5C).Complement therefore plays only a minor role, if any, in attachment and killing of Mf, though a heat-labile component of normal human serum is required. Monocyte-mediated Mf killing Addition of PBMCs to PMNs increased the amount of Mf killing, however, PBMCs alone were not sufficient to affect survival. 
Monocytes and neutrophils are both crucial in immune responses to infection [31] and interactions important for helminth clearance have recently been described [18]. Monocytes make up~10% of the PBMC preparation isolated for our survival experiments (see Materials and Methods) so we hypothesized that monocytes may represent the microfilaricidal cells present within the PBMC population. To test this, we repeated our Mf survival assays but replaced the PBMC population with 1500 monocytes/Mf. In these experiments, incubation with either PMNs or monocytes reduced Mf survival after 5 days to about 40%; incubation with both cell types did not significantly reduce survival any further ( Fig 6A) although attachment of both PMN and monocytes could be detected in co-cultures at 120 hours post incubation ( Fig 6B). These data highlight that both PMNs and human monocytes are independently capable of killing B. malayi Mf in vitro (P = 0.005). Immunofluorescence staining showed that both CD14 + /CD16monocytes and CD14 + /16 + PMNs were attached to the Mf at 5 days post-infection by which time significant levels of killing had occurred (Fig 6B). In contrast, purified lymphocytes were unable to kill Mf, nor did their addition to PMN reduce survival any further (Fig 6C). Taken together, our data show that PMNs and monocytes, but not lymphocytes, isolated from North American human donors, who have presumably never been exposed to B. malayi, can recognize and kill Mf in vitro. There is some variation in the extent of parasite killing between experiments, which may represent differences between the cells isolated from individual human donors, or between batches of Mf. Killing is preceded by rapid attachment (<1 hr) of the leukocytes, but is rather slow, taking up to 5 days. Discussion Filarial nematodes, including Mf, survive for months and years in their hosts without provoking an effective immune response. Nonetheless, in this paper we confirm that leukocytes taken from uninfected people can recognize, attach to and kill the Mf stage in vitro [15][16][17]. As a starting point we wished to determine if the results we previously obtained using the animal parasite, D. immitis, and canine leukocytes [14], could be reproduced in vitro using human cells and a human filarial parasite. In particular, we wanted to extend these observations and determine if human PMN-derived DNA-based extracellular traps (NETs) [32] could ensnare and kill the Mf of B. malayi. NETosis remains poorly characterized with respect to immunity to parasitic nematodes. Recent studies have confirmed that NETs are released from human, bovine and mouse PMNs exposed to the L3 larval stages of S. stercoralis, H. contortus and L. sigmodontis respectively [19,24,25]. Both H. contortus and S. stercoralis L3 larvae were trapped but not killed by NETs alone [24,25], though DNase I-mediated extracellular trap destruction prevented human PMN-, macrophage-and autologous serum-mediated killing of S. stercoralis larvae. Extracellular traps may therefore contribute to broader killing mechanisms that require multiple immune components. Mouse neutrophils release NETs when exposed to larvae of the human parasite, S. stercoralis [24], though DNase I treatment did not block killing of the larvae by mouse leukocytes in vitro [24]. Our confocal microscopy-based assays confirmed the presence of classical NET markers when human PMNs were incubated with B. 
malayi Mf, demonstrating that Mf can induce NET release in vitro (Fig 1 and Fig 2A-2D), and that these structures contain all of the components reported from other systems [20,21]. DNase I treatment effectively destroyed NET structure ( Fig 2E) and blocked PMN to Mf attachment in the absence of serum (Fig 3A), as predicted, but did not inhibit human leukocyte mediated Mf killing ( Fig 4C). DNA-containing NET formation is therefore not essential for human leukocytes to kill B. malayi Mf in vitro. This suggests that the importance of NETosis to nematode parasite killing varies with both host and parasite species. It is also possible that in vivo NETs do contribute to Mf killing but that additional components were missing from our in vitro survival assay. An increase in NET-like structures was correlated with reduced S. stercoralis L3 survival within cell impermeable diffusion chambers implanted into the mouse model [24]. DPI significantly inhibited B. malayi Mf-induced DNA release in the absence of autologous serum ( Fig 2E), mirroring the results associated with H. contortus [25] and indicating a role for NADPH oxidases in parasitic nematode driven NETosis. Despite this, DPI appeared to have no significant impact on PMN to Mf attachment ( Fig 3A) and obvious NET aggregates could be seen within DPI treated wells. We were not able to examine the effect of DPI on Mf survival due to the negative effects of long term DPI exposure on worm health (the worm appeared sluggish but not dead at day 5); these effects are presumably due to DPI inhibiting the nematodes' NADPH oxidase. Human PMNs can attach to and kill B. malayi Mf in vitro, and Mf survival is further reduced if PBMCs are added to the cultures (Fig 4). The PBMC population failed to kill Mf in the absence of other cell types, yet significantly increased the level of parasite killing when cocultured with PMNs (Fig 4), suggesting some cross-talk between PMNs and a component of the PBMC fraction, presumably monocytes since these cells are also able to kill Mf (Fig 6). We noted a large amount of variation in both leukocyte attachment and Mf survival between individual experiments, and the two measurements-attachment at 1 hour and survival at 5 daysare clearly negatively correlated (Fig 4D). This could arise from differences between the cells Fig 5. Compstatin-mediated inhibition of complement activation has little effect on Mf survival. A: The percentage of Mf that had at least one PMN adhered to their surface or indirectly fastened by extracellular DNA at 24 hours post-experimental set up (n = 3). Orange and red bars represent 5% autologous serum and 25% autologous serum treated wells respectively. The autologous serum was either untreated (no peptide control), treated with 100μM inactive compstatin analogue (control peptide) or treated with 100μM active compstatin. Error bars represent standard error of the mean. B: The percentage of Mf that survived 5 days in the presence of both PMNs and PBMCs after treatment of serum with control peptide or compstatin (n = 3). C: PMNs were incubated with blocking antibodies for 2 hours prior to incubation with Mf for 5 days. PMNmediated killing occurred to the same extent in control (no antibody) wells and those treated with anti-ICAM-1 and anti-Cd11b (p = <0.0001). No significant differences were observed between PMN incubated with Mf with or without the anti-ICAM-1 or anti-Cd11b (n = 3). and/or serum isolated from individual human donors, or between different batches of Mf. 
It is impossible to distinguish between the two using our current protocols as we do not know the identity of the human donors, and so cannot examine HLA genotypes for example, and since cells and sera were used immediately after isolation, we could not test them on different batches of Mf. In endemic regions the majority of infected individuals are tolerant of high parasite loads and microfilaremia. In contrast, individuals with pathological manifestations (e.g. lymphedema and hydrocele) show stronger immune reactions [33]. The genetic factors that regulate susceptibility to parasitic infections and the pathological heterogeneity associated with filarial nematode infection are not entirely understood [34], but it is possible that these are reflected in the ability of the innate immune system to rapidly recognize and kill Mf, as shown here. In contrast, the nematodes used in this study are an inbred population that would not be expected to exhibit much genetic variation, but non-genetic factors may account for the differences observed between experiments. For example, differences in the amounts of immunomodulatory ES products present in the various batches of Mf may explain the variation in rapid attachment and subsequent killing that we observed. PMN attachment and Mf killing were both promoted by the addition of autologous serum, and heat treatment of the serum inhibited both PMN attachment and PMN plus PBMC-mediated Mf killing (Fig 3 and Fig 4). This was not due to complement as the complement specific inhibitor compstatin failed to have any biologically significant effects on either PMN attachment or leukocyte mediated Mf killing (Fig 5). Compstatin binds to C3 to prevent C3 cleavage and competitively inhibit all three complement activation pathways [30]. These data suggest that complement does not contribute much, if at all, to PMN attachment or Mf killing, but that another unidentified heat-labile component of human serum is involved. Antibodies against CD11b and ICAM-1, two molecules previously implicated in the interactions between Mf and the immune system [18,33], also failed to inhibit killing, suggesting that they are not required for this process to take place (Fig 5C). In the presence of autologous serum both PMNs and monocytes are capable of killing Mf alone (Fig 4 and Fig 6). Bonne-Année and colleagues have shown that human PMNs can collaborate with either PBMCs or macrophages to kill S. stercoralis larvae [24,29], however, despite using relatively high numbers of leukocytes, they did not observe reduced L3 larvae survival on exposure to individual leukocyte populations [24,29]. This could reflect differences in the nematode species or life-stages employed, particularly worm size which varies considerably between life-stages and perhaps influences the number of leukocytes required to kill the parasite. Perhaps surprisingly, we observed no increased killing when PMNs and monocytes were incubated together with the Mf. Neutrophils and monocytes are well known to communicate with each other and it is has been reported that neutrophils induce an anti-nematode immunity in monocytic cells [18,35], however this was not reflected in our in vitro experimental system. These data describe the first example of Mf induced NETosis and contribute significantly to the growing body of evidence that suggest an important role for neutrophils in regulating parasitic nematode infections [18,36]. 
We show that human peripheral blood innate immune cell populations can recognize, trap and kill the blood circulating life-cycle stage of the human filarial parasitic nematode, B malayi. Variation in worm killing and leukocyte attachment between human blood donors suggest that the innate immune system could significantly contribute to the regulation of host tolerance and susceptibility to infection and more specifically the regulation of microfilaremia which is a key determinant of the transmission of lymphatic filariasis. Ethics statement All experiments and informed consent procedures were approved by the Institutional Review Boards of the University of Georgia (permit number 2012-10769), and the studies were conducted in accordance with the ethical guidelines of Declaration of Helsinki. Human subjects recruited under the guidelines of IRB-approved protocols provided written informed consent for participation in the studies described below. Microfilariae preparation Live B. malayi Mf isolated from the peritoneal cavity of infected Mongolian gerbils were provided by the Filarial Research Reagent Resource Center (FR3: Athens, GA, USA). Mf were washed three times in phosphate buffered saline (PBS; centrifuged at 1500 x g for 8 min) and re-suspended in RPMI-1640 (Gibco, Life Technologies, Grand Island, NY, USA). Note that all RPMI-1640 used in this study was supplemented with 100 U/ml penicillin-streptomycin (Life Technologies, Grand Island, NY, USA) and 0.1 mg/ml gentamicin (Sigma, St. Louis, MO, USA). Re-suspended Mf samples were then filtered through a 5μm Isopore membrane (Merck Millipore Ltd., Carrigtwohill, Cork, Ireland) to capture the Mf and exclude contaminating small particles. Membranes were socked in RPMI-1640 at 37˚C and 5% CO 2 for 20-30 min to facilitate the migration of viable Mf from the membrane. Viable Mf were incubated overnight in RPMI-1640 at 37˚C and 5% CO 2 . Mf samples were washed for a second time by 5μm Isopore membrane filtration just before use. Isolation of human neutrophils, PBMCs, monocytes and lymphocytes Leukocytes were isolated from freshly donated peripheral blood drawn from healthy U.S. residents at the Health Center of the University of Georgia. 40ml of blood was anticoagulated by heparin. PMNs were isolated using the EasySep Direct Human Neutrophil Isolation Kit (Stemcell Technologies, Vancouver, BC, Canada) according to manufacturer's instructions. PBMCs were isolated using SepMate-50 Tubes (Stemcell Technologies, Vancouver, BC, Canada) according to manufacturer's instructions. The optional extended wash step (120 x g for 10 min) of the SepMate protocol was included to remove contaminating platelets. Monocytes were isolated from the PBMC samples using the EasySep Human Monocyte Enrichment Kit (Stemcell Technologies, Vancouver, BC, Canada) according to manufacturer's instructions. Isolated PMNs, PBMCs and monocytes were washed in PBS (centrifuged at 300 x g for 5 min), re-suspended in a 1:1 mixture of RPMI-1640 and autologous serum, stored at room temperature and used within 6 hours post-isolation as previously described [37]. The concentration of cells per population was estimated in a sample of cells stained with 0.4% trypan blue (Gibco, Life Technologies, Grand Island, NY, USA) using a hemocytometer. All populations used were estimated at 95% or greater viability. 
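The cell concentrations above were estimated with a hemocytometer after trypan blue staining. As a generic illustration of that arithmetic (mean count per 1 mm × 1 mm square × dilution factor × 10^4 gives cells/mL, and unstained/total gives % viability), the short Python sketch below uses made-up counts and assumes a 1:2 dilution in trypan blue; neither the counts nor the dilution factor come from this study.

```python
def cells_per_ml(counts_per_square, dilution_factor):
    """Hemocytometer estimate: mean count per large (1 mm x 1 mm, 0.1 µL) square
    multiplied by the dilution factor and by 1e4 gives cells per mL."""
    mean_count = sum(counts_per_square) / len(counts_per_square)
    return mean_count * dilution_factor * 1e4

def trypan_blue_viability(unstained, stained):
    """Trypan blue exclusion: unstained (viable) cells / total counted cells x 100."""
    return unstained / (unstained + stained) * 100.0

# Hypothetical counts from four squares of a sample diluted 1:2 in 0.4% trypan blue.
concentration = cells_per_ml([152, 148, 160, 155], dilution_factor=2)
viability = trypan_blue_viability(unstained=385, stained=12)
print(f"{concentration:.2e} cells/mL, {viability:.1f}% viable")
```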
To estimate the purity of isolated cell populations, cell slides were prepared using the Cytospin 3 CellPreparation System (Shandon Scientific Limited, Astmoor, Runcorn, Cheshire, England) and stained with Modified Wright's stain (Hema 3 Stat pack, Fisher Scientific, Kalamazoo, MI). The average purity of the PMN population was 97% and the monocyte population~92%. The PBMC population contained predominantly lymphocytes but included~10% monocytes and~10% PMNs. Leukocytes were washed once in Hanks-balanced salt solution (HBSS; Gibco, Life Technologies, Grand Island, NY, USA; centrifuged at 300 x g for 5 min) and re-suspended in RPMI-1640 before use. Lymphocytes were isolated using the EasySep Direct Human Total Lymphocyte Isolation Kit (Stemcell Technologies, Vancouver, BC, Canada) according to manufacturer's instructions. Isolated lymphocytes were washed in PBS (centrifuged at 300 x g for 5 min) and re-suspended in a 1:1 mixture of RPMI-1640 and autologous serum. Autologous human sera 10 ml of autologous blood was collected as described above but allowed to clot in the absence of heparin (incubated for~2 hours at room temperature) [38]. Briefly, the liquid fraction of the blood sample was aspirated and centrifuged at 10,000 x g for 5 min. The supernatant was aspirated and filter sterilized. Where necessary the serum was heat treated (55˚C for 30 min) and/or diluted in RPMI-1640 to the desired concentration before use. Immunostaining and confocal microscopy of NETs and Mf-associated leukocytes For detection of NET-associated proteins, isolated PMNs, 1.5 x 10 5 cells in 100 μl RPMI-1640, were seeded onto 12 mm #1 round coverslips in 24-well flat bottom dishes (Costar, Corning, NY, USA) and left to adhere for 1 hour at 37˚C and 5% CO 2 prior to adding 100 B. malayi Mf in 100 μl RPMI-1640. The coverslips were incubated a further 18 hours at 37˚C and 5% CO 2 prior to immunostaining. Cells and Mf were fixed by adding 200 μl of 4% paraformaldehyde to the well and incubated for 20 min at room temperature. Supernatants were carefully removed and coverslips washed twice with PBS prior to blocking with PBS plus 5% FBS and 1% BSA for 20 min at room temperature. Primary antibodies were diluted in blocking solution as follows: (1) Immuno-labelling of Mf-associated monocytes and PMNs was performed on samples of Mf incubated with purified PMNs and monocytes for 120 hours in 96-well round bottomed plates (Corning Glass Works, NY). Samples were centrifuged onto microscope slides at 1000 rpm for 5 min in a Cytospin 3 cytocentrifuge (Shandon Scientific Limited, Astmoor, Runcorn, Cheshire, England), fixed in methanol for 50 second blocked with 5% FBS, 1% BSA in PBS for 30 min. The slides were incubated with a 1:100 dilution of mouse anti-human CD16-Alexa Fluor 488, clone 3GB (Stemcell Technologies, Vancouver, CA) and 1:100 dilution of mouse anti-human CD14 Alexa Fluor 594 clone HCD14 (Biolegend, San Diego, CA) in blocking solution for 2 hours RT in a humidified chamber in the dark. DAPI was added to the primary antibody solution for the last 25 min of incubation. Following rinsing for 2 x 5 min in PBS, the slides were coverslipped, mounted in Mowiol and set overnight prior to viewing. Z-stack images were collected using a Nikon A1R confocal microscope and NIS Elements software (Nikon Instruments Company, Melville, NY, USA). Images were prepared using Adobe Photoshop software (Adobe, San Jose, California, USA). 
Quantification of extracellular DNA release Assays were set up in Nunc 384-Well optical bottom tissue culture plates (THERMO Scientific, Rochester, NY, USA). There were four components added to each well in 12.5μl volumes, giving a total volume of 50μl.~18,750 PMNs were added to each well. PMNs were suspended in RPMI-1640 containing 12.5μM SYTOX Orange Nucleic Acid Stain (Life Technologies, Eugene, OR, USA) to give a final concentration of 3.125μM of SYTOX Orange stain and enable the quantification of extracellular DNA [37,38]. The other three well components varied between treatment groups and included: 5% autologous serum, 5% autologous heat treated serum,~25 B. malayi Mf, 10 μM diphenyleneiodonium (DPI; Sigma, St. Louis, MO, USA), 30μg/ml DNase I (Roche, Indianapolis, IN, USA) and 25nM phorbol myristate acetate (PMA; Sigma, St. Louis, MO, USA), which was employed as a positive control. The concentrations stated here represent the final concentrations obtained once all components had been added to the well. To create the negative controls, 12.5μl of RPMI-1640 was substituted for the relevant component. The tissue culture plates were incubated at 37˚C and 5% CO 2 throughout the experiment. Both transmitted light and fluorescence images were captured on a Nikon A1R confocal microscope system equipped with a 60X 1.4NA lens. A single field of view with was taken at a random position within each well. Images were taken every 30 min for 7 hours using automated capture software. The first images were taken 1 hour post-experimental set up to allow the worms and cells to settle to the bottom the wells. Mean SYTOX Orange intensities of the fluorescent images were quantified using the measure region of interest (ROI) feature of the Nikon A1 software. The entire image was highlighted as the ROI. These measurements were used to calculate changes in SYTOX Orange intensities over time (ΔMean SYTOX Orange intensity). Each biological replicate represents the mean of three technical replicates. PMN attachment assay Assays were set up in Nunc 96-Well optical bottom tissue culture plates (THERMO Scientific, Rochester, NY, USA). There were four components added to each well in 50μl volumes, giving a total volume of 200μl.~100 B. malayi Mf,~75,000 PMNs and 50μl of RPMI-1640 were added to each well. The worm to cell ratio selected (1:750) was sufficiently high so that the number of available cells did not limit attachment [14]. The final component added to each well varied between treatment groups and included: 5% autologous serum, 5% autologous heat treated serum, 10μM DPI and 30μg/ml DNase I. To create the respective controls, 50μl of RPMI-1640 was substituted for the relevant component. The tissue culture plates were incubated at 37˚C and 5% CO 2 throughout the experiment. Videos of each well were taken on an inverted microscope (40X magnification) at 2.5, 5, 16 and 24 hours post-experimental set up. Videos were blinded and all Mf were scored for attachment. Mf that had at least 1 PMN adhered to their surface or indirectly fastened by extracellular DNA were scored as attached. Each biological replicate represents the mean of two or three technical replicates. PMN attachment to Mf was confirmed by cytocentrifugation and Wright stain as described above. Mf survival assay Assays were set up in Nunc 96-Well optical bottom tissue culture plates. There were four components added to each well in 50μl volumes, giving a total volume of 200μl.~100 B. malayi Mf were added to each well. 
The other three components varied between treatment groups and included: 25% autologous serum or 25% autologous heat-treated serum, ~150,000 PMNs, ~150,000 PBMCs, ~150,000 monocytes, ~150,000 lymphocytes and 30 μg/ml DNase I. To create the respective controls, 50 μl of RPMI-1640 was substituted for the relevant component. The tissue culture plates were incubated at 37˚C and 5% CO 2 throughout the experiment. Videos of each well were taken on an inverted microscope (40X magnification) at 1, 24, 48 and 120 hours post-experimental set up. Videos were blinded and the numbers of moving Mf present within each well were counted. Mf that were not moving were considered dead. The number of surviving Mf was normalized relative to the number of moving Mf scored 1 hour post-set up (= 100%), and expressed as a relative percentage. Each biological replicate represents the mean of two or three technical replicates. Compstatin- and antibody-treated serum For the compstatin inhibition studies, autologous serum was pretreated with either 100 μM compstatin (Tocris Bioscience, Avonmouth, Bristol, U.K.) or 100 μM compstatin control peptide (Tocris Bioscience, Avonmouth, Bristol, U.K.) for 30 min at 37˚C. The compstatin-treated serum and antibody-treated leukocytes were added directly to the PMN attachment and Mf survival assays described above. Inhibition of Cd11b activity was implemented by incubating 2 μg of the monoclonal antibody clone M1/70, purified specifically for use with live cells (BioLegend, San Diego, CA, cat #101248), with PMNs for 2 hours at room temperature before adding B. malayi Mf; the cultures were then incubated for 5 days at 37˚C before survival was assessed as described above. Blocking of ICAM-1 function was carried out as for Cd11b using 2 μg of anti-ICAM-1 antibody [MEM-11] (Abcam, Cambridge, MA). Data analysis Normality of the data was assessed based on examination of histograms and normal Q-Q plots of the residuals. Constant variance of the data was assessed by plotting residuals against predicted values. Data were analyzed using linear mixed-effects modeling with independent experiment modeled as a random effect and treatment group and time (when applicable) modeled as fixed nominal effects. Two-way interactions were also included in the model when applicable. Model fit was assessed using Akaike information criterion values. When indicated, adjustments for multiple relevant comparisons were done using the method of Bonferroni. For all analyses, adjusted P<0.05 was considered significant.
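For the extracellular DNA release assay described earlier, each well's readout is the change in mean SYTOX Orange intensity relative to the first image, averaged over technical replicates. The Python sketch below reproduces that bookkeeping with made-up intensity values; the numbers, the 30 min imaging interval starting at 1 h, and the choice of the first image as the baseline are illustrative assumptions only.

```python
import numpy as np

# Hypothetical mean SYTOX Orange intensities (arbitrary units) for one treatment,
# three technical replicate wells, imaged every 30 min from 1 h to 7 h post set-up.
timepoints_h = np.arange(1.0, 7.5, 0.5)
replicate_wells = np.array([
    [101, 104, 110, 118, 127, 138, 150, 161, 172, 181, 190, 197, 203],
    [ 99, 103, 108, 115, 125, 136, 147, 158, 170, 180, 188, 195, 201],
    [102, 105, 112, 120, 129, 140, 152, 163, 175, 183, 191, 199, 205],
], dtype=float)

# Change in mean intensity relative to the first image of each well,
# then averaged across the technical replicates (one biological replicate).
delta = replicate_wells - replicate_wells[:, :1]
biological_replicate = delta.mean(axis=0)
sem = delta.std(axis=0, ddof=1) / np.sqrt(delta.shape[0])

for t, d, e in zip(timepoints_h, biological_replicate, sem):
    print(f"{t:4.1f} h  delta intensity = {d:6.1f} +/- {e:.1f}")
```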
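As a sketch of the survival normalization and the statistical approach just described, the Python example below expresses each well's 120 h count as a percentage of its 1 h count and fits a linear mixed-effects model with treatment as a fixed effect and independent experiment as a random intercept (statsmodels). The data frame is entirely hypothetical, and the model is a simplified stand-in for the full analysis, which also included time, two-way interactions, and Bonferroni-adjusted comparisons.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical well-level data: moving Mf counted at 1 h and 120 h for two
# treatment groups across three independent experiments (donors).
df = pd.DataFrame({
    "experiment": ["A", "A", "A", "A", "B", "B", "B", "B", "C", "C", "C", "C"],
    "treatment":  ["no_cells", "no_cells", "PMN_PBMC", "PMN_PBMC"] * 3,
    "mf_1h":      [102, 98, 100, 97, 95, 101, 99, 103, 100, 96, 104, 98],
    "mf_120h":    [ 88, 84,  35, 40, 80,  86, 35,  35,  85, 82,  35, 38],
})

# Survival expressed relative to the 1 h count (= 100%), as in the text.
df["survival_pct"] = df["mf_120h"] / df["mf_1h"] * 100

# Linear mixed-effects model: treatment as a fixed effect,
# independent experiment as a random intercept.
model = smf.mixedlm("survival_pct ~ treatment", data=df, groups=df["experiment"])
result = model.fit()
print(result.summary())
```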
2018-04-03T05:51:12.306Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "2b26c69c41a5ab784e3c42ada813ac0ecf3db990", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosntds/article/file?id=10.1371/journal.pntd.0005279&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2b26c69c41a5ab784e3c42ada813ac0ecf3db990", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
13551782
pes2o/s2orc
v3-fos-license
Resolution enhancement of field asymmetric waveform ion mobility spectrometry (FAIMS) by ion focusing Background Field Asymmetric Waveform Ion Mobility Spectrometry (FAIMS) is a material analysis technology that has developed very rapidly in recent years. Resolution is an important factor used to evaluate the performance of this technology: with greater resolution, complex mixtures are easier to separate. Results A method to increase the resolution of FAIMS is put forward in which ions are focused before they enter the drift tube. By adding several pairs of focus electrodes loaded with DC or RF voltage in front of the FAIMS drift tube, the height of the ion beam flowing into the drift tube is decreased, which improves the resolution of the FAIMS spectrum. The effectiveness of this method is verified through SIMION simulations and experiments. Both the DC focusing mode and the AC focusing mode can improve the resolution of the FAIMS system, with the largest increase being 37%. Conclusions Compared with other methods of improving FAIMS resolution, this method requires neither additional special gases nor additional auxiliary equipment. It is easy to miniaturize and can work at atmospheric pressure. General background With social and economic development, the demands placed on analytical technology have increased. The detection and analysis of Volatile Organic Compounds (VOCs) are urgently needed in fields such as chemical weapons, explosives, drugs, and pollutants [1]. VOC detection is expected to be fast, highly sensitive, and portable. Gas Chromatography and Mass Spectrometry are the most commonly applied technologies for VOC detection [2,3]. Traditional Gas Chromatography achieves VOC detection at the ppm level with separation times on the order of ten minutes [4]. Multidimensional Gas Chromatography improves the separation capacity for complex VOCs, but the separation time is somewhat longer [5]. The Mass Spectrometry technologies used in VOC detection mainly include the selected ion flow tube mass spectrometer (SIFT-MS) [6], the proton-transfer reaction linear ion trap mass spectrometer (PTR-MS) [7,8] and the time-of-flight mass spectrometer (TOF-MS) [9], which show detection limits at the ppb level and separation times ranging from seconds to several minutes. However, neither Gas Chromatography nor Mass Spectrometry can achieve on-site detection because of their instrument volume and power consumption. Ion Mobility Spectrometry (IMS) is a very effective material analysis technology, built from several metal rings arranged in a line, and shows great potential to meet the requirements of VOC detection. However, IMS separates substances according to the differences in ion mobility between different materials, so when dealing with materials with similar ion mobility its performance degrades substantially. Field Asymmetric Waveform IMS (FAIMS) separates ions by exploiting the change of ion mobility from low to high electric fields, and thus overcomes this shortcoming of IMS (substances with similar ion mobility cannot be separated). At the same time, the structure of FAIMS is simpler than that of IMS, so it is a promising material analysis technology for VOC detection. However, FAIMS technology is not yet mature, even after years of development, and one of its major problems is its lower resolution compared with IMS. Raising the resolution of FAIMS has therefore become an active area of FAIMS research. Currently, a number of ways to improve resolution have been developed.
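One common way to quantify FAIMS resolution (resolving power) is to divide the compensation voltage at which a peak appears by the peak's full width at half maximum (FWHM). The Python sketch below computes this for a synthetic Gaussian peak standing in for a measured spectrum; the peak position, width, and scan range are arbitrary illustrative values and are not data from this work.

```python
import numpy as np

def fwhm(cv, signal):
    """Full width at half maximum of a single peak, by linear interpolation."""
    half = signal.max() / 2.0
    above = np.where(signal >= half)[0]
    i_lo, i_hi = above[0], above[-1]

    def cross(i, j):
        # linearly interpolate the compensation voltage where the signal crosses 'half'
        return cv[i] + (half - signal[i]) * (cv[j] - cv[i]) / (signal[j] - signal[i])

    left = cv[i_lo] if i_lo == 0 else cross(i_lo - 1, i_lo)
    right = cv[i_hi] if i_hi == len(cv) - 1 else cross(i_hi + 1, i_hi)
    return abs(right - left)

# Synthetic Gaussian peak standing in for a measured FAIMS spectrum.
cv = np.linspace(-2.0, 10.0, 601)                  # compensation voltage sweep (V)
signal = np.exp(-((cv - 4.0) ** 2) / (2 * 0.45 ** 2))

peak_cv = cv[np.argmax(signal)]
width = fwhm(cv, signal)
print(f"FWHM = {width:.2f} V, resolving power = {peak_cv / width:.1f}")
```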
A cylindrical drift tube takes advantage of the inhomogeneity of the radial electric field, which tends to bring specific ions together at a specific location [10,11]; but the difficulties in machining the cylindrical structure, ensuring concentricity during installation, and reducing the volume of the drift tube (a structure of cylindrical or rectangular shape in which ions fly under the effect of the electric field) have limited its further development. The company Owlstone Nanotech in Great Britain has utilized MEMS technology to produce a drift tube with a minimal spacing of 35 μm, achieving high analysis speed [12,13]. If Owlstone brought the drift time back to the normal level, high resolution would be obtained. However, owing to the tiny spacing, the power frequency needs to be over 20 MHz, which is a tough requirement for the power supply. Besides, because of the deep etching in the silicon wafer, it is difficult to control the roughness of the etched surface, which cannot be ignored given the 35 μm spacing, while the electrode flatness is a vital factor for the operation of the drift tube. Therefore, it is difficult to improve the separation results with this tiny-spacing design. Reducing the air pressure can also increase the resolution. With a lower pressure, the number of molecules per unit volume drops, which means an increase of E/N and thus an improvement of the resolution [14]. But an additional pump and other equipment increase the system cost, which is inconsistent with the role of FAIMS as a "mass spectrum under atmospheric pressure". A longer drift tube can also improve the resolution [15], but due to the long drift time (the time for ions to pass through the drift tube), the ions suffer severe losses and the signal intensity is low. Besides, with a long drift time the diffusion can no longer be ignored, whereas a short drift tube does not have this problem [16]. Another way to improve resolution, put forward by Shvartsburg, is to inject He or H 2 into the N 2 carrier gas, and it has a great effect on resolution improvement [17,18]. But He and H 2 gases are expensive, and this approach needs extra gas cylinders and piping for the experiments, so it is not suitable for practical use in the future. In this paper, we put forward a novel method to improve resolution by adding several pairs of focus electrodes in front of the planar FAIMS drift tube, as shown in Figure 1. This method combines FAIMS with ion focusing methods such as the ion lens and the ion funnel. The carrier gas 1 brings the sample into the ion mobility spectrometer, and the sample is ionized by UV lamp 2. When the ions are brought into the focus area 3 by the carrier gas, DC or AC voltages are applied on the focus electrode pairs 4, 5, 6, 7 and 8. The focus area forms an electric field that concentrates the ions towards the center, so that the height of the ion beam entering the drift tube 14 is reduced, bringing about the improvement of the FAIMS resolution. On the upper electrode 11A, a DC scan compensation voltage 12 and an asymmetrical waveform RF voltage 13 are applied, and the lower electrode 11B is connected to ground. The asymmetric waveform RF voltage 13 is a high-field asymmetric waveform with an average value of zero. The DC scan compensation voltage 12 is scanned over a specific range in certain steps at a specific scanning frequency. After being filtered in drift tube 14, the ions move to the right, pulled by the carrier gas, into the detection unit 15.
The ions are deflected to the detection electrode by the deflection voltage, so the ion signals are converted into an electric current, which can be measured. Currently, focusing of ions is mainly conducted in vacuum or low pressure environments, and research mainly focuses on the ion lens [19,20] and the ion funnel [21][22][23][24][25]. In this paper, we extend the application of these two focusing methods to atmospheric conditions. In an ion lens, various electrodes or rings are arranged in a certain configuration, with different electrostatic voltages applied, to form a transitional electric field in space which can control ion movement. Under vacuum conditions, the ion trajectory through an ion lens is similar to the trajectory of light through an optical lens, so the "ion lens" is analogous to an optical lens. But under atmospheric conditions, due to the interaction between the gas and the ions, the electric field is no longer a conservative field, and the law of ion movement does not agree with that under vacuum conditions. Through the SIMION simulation (in Results and Discussion), the focusing effect of the ion lens under atmospheric conditions is verified, and the ion lens is therefore applied to atmospheric conditions in this paper. The structure of an ion funnel is similar to that of an ion lens, but the voltage loaded on the electrode pairs is a high frequency sine wave voltage, and the difference between the voltage phases of adjacent electrode pairs is 180°. An ion funnel can also control the movement of ions and thereby create a focusing effect. Generally, an ion funnel works under low pressure conditions. In this paper, the use of the ion funnel is extended to atmospheric conditions; the SIMION simulation results below also show the focusing effect. Because the pressure conditions and focus structure are different, the simulation results for the ion lens and ion funnel in this paper differ from those in the references. Theoretical background for ion focusing with FAIMS The basic principle of FAIMS is ion separation based on the differences of ion mobility under high and low electric fields. The relationship between ion mobility and the electric field under the high field condition is K(E/N) = K(0)·[1 + a(E/N)]. The mobility of different ions under high electric field conditions shows different non-linear changing trends, namely, a(E/N) differs for different ions, which allows ions with the same or similar mobility under low electric field conditions to be separated under high electric field conditions. A planar FAIMS drift tube consists of two parallel electrodes. An asymmetrical square wave voltage is applied to one electrode. As ion mobility under high and low electric fields is different, ions will have a net displacement in the vertical direction in a single RF period, which differs according to the type of ion. With a DC compensation voltage added on the upper electrode at the same time, the net displacement of a particular ion in the vertical direction can be brought to zero, which allows that particular ion to pass through the drift tube and be detected by the ion detector, while the other ions are annihilated when hitting the electrodes. By scanning the compensation voltage within a certain range, and recording the current values corresponding to the compensation voltage values, we can obtain a FAIMS spectrum.
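As an illustration of the filtering principle just described, the following minimal Python sketch computes the net vertical displacement of an ion per RF period for a zero-mean rectangular asymmetric waveform and scans the compensation voltage for the value that nulls it. The waveform settings (850 V peak-to-peak, 30% duty cycle, 1 MHz) loosely follow the experiments reported later; the mobility K0, the simple high-field correction alpha, and the function names are illustrative assumptions rather than values or code from the paper.

```python
# Minimal sketch of the FAIMS filtering principle described above.
# Waveform settings loosely follow the experiments (850 V peak-to-peak,
# 30% duty cycle, 1 MHz); K0 and the high-field correction `alpha` are
# illustrative assumptions, not values from the paper.
import numpy as np

def net_displacement_per_period(Vc, Vpp=850.0, duty=0.3, gap=0.5e-3,
                                K0=1.9e-4, alpha=0.046, freq=1e6):
    """Net vertical displacement [m] of an ion over one RF period.

    Vc  : DC compensation voltage [V] added to the upper electrode
    Vpp : peak-to-peak value of the zero-mean rectangular asymmetric waveform [V]
    """
    T = 1.0 / freq
    V_high = Vpp * (1.0 - duty)        # high-field segment amplitude (zero-mean waveform)
    V_low = -Vpp * duty                # low-field segment amplitude
    K_high = K0 * (1.0 + alpha)        # mobility during the high-field segment
    drift_high = K_high * (V_high + Vc) / gap * duty * T
    drift_low = K0 * (V_low + Vc) / gap * (1.0 - duty) * T
    return drift_high + drift_low

# Scan the compensation voltage and pick the value that nulls the net drift;
# only ions whose net drift is nulled traverse the whole drift tube.
Vc_scan = np.linspace(-15, 15, 601)
drift = np.array([net_displacement_per_period(v) for v in Vc_scan])
print(f"compensation voltage with ~zero net drift: {Vc_scan[np.argmin(np.abs(drift))]:.2f} V")
```

In this simplified single-ion picture, each ion species nulls at a different compensation voltage through its own a(E/N), which is why scanning the compensation voltage yields a spectrum that separates the ions.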
In FAIMS, the resolution is defined as the ratio of the compensation voltage U M corresponding to I max in the FAIMS spectrum to the FWHM (full width at half maximum), that is, R = U M / FWHM. The compensation voltage U M corresponding to the maximum current I max in the FAIMS spectrum is the DC compensation voltage value which makes the ion's net displacement zero in an RF cycle. FWHM is the difference between the compensation voltages at half of the peak value (when the intensity is 0.5 I max ). It can be seen that the FWHM has a large influence on the FAIMS resolution: the greater the FWHM, the lower the resolution. The reason why the FWHM is non-zero is that when the compensation voltage deviates from U M , there are still ions that can fly through the drift tube and be detected, which is related to ions filling up the drift tube along the vertical direction. Assume that the drift tube spacing is g, that ions are uniformly distributed along the vertical direction of the drift tube, that the vibration amplitude of ions under the asymmetric RF voltage is s, and that the maximum current intensity for a specific ion is measured when the compensation voltage is U M . At U M , the net displacement of the ions in an RF cycle in the vertical direction of the drift tube is exactly zero, as shown in Figure 2A. In most cases of a short drift tube, the characteristic time of diffusion t dif = g²/(π²·D) (where D is the diffusion coefficient) is always much larger than the drift time t res . That means the diffusion can be neglected due to the short drift time, which simplifies the problem [16]. The height of the ion beam at the entrance is g. However, due to the amplitude generated by ion vibration, the height of the ion beam that actually arrives at the exit of the drift tube is g − 2s. Ions within a distance of s from the upper or lower electrodes are neutralized when hitting the drift tube. In other words, the regions within this distance of the upper and lower electrodes are ion annihilation regions. Now we consider the situation when the compensation voltage deviates from U M , as shown in Figure 2B. Assume that the compensation voltage is U. Denoting the net displacement of ions perpendicular to the electrodes in the drift tube by Δx, the drift time by t res , and the mobility of ions in the low electric field by K 0 , then Δx = K 0 ·|U − U M |·t res /g. The height of the ion beam passing through the drift tube is g − s − Δx. Assume that at some point the current intensity is half of its maximum value, so this point can be referred to as a "half peak position" point, and the corresponding net displacement is set as Δx 01 ; then g − s − Δx 01 = 0.5·(g − 2s), so Δx 01 = 0.5·g. Thus Δx 01 can reflect the value of the FWHM, and also the resolution. If ion focusing is applied, then the height of the ion beam entering the drift tube is smaller than the spacing of the drift tube, and is denoted by g x . First, the peak position will not change: when the compensation voltage is U M , the intensity will be the strongest, and the height of the ion beam passing through is g − 2s, as shown in Figure 3A. When the compensation voltage deviates from U M , the net displacement of ions perpendicular to the electrodes in the drift tube is Δx, as shown in Figure 3B, and the height of the ion beam passing through is reduced accordingly. At the "half peak position" point, denote the net displacement by Δx 02 . At this time, Δx 02 is less than the Δx 01 without focusing, and the FWHM will be smaller.
In this case, the peak position will again not change: when the compensation voltage is U M , the signal intensity will be the strongest, and the height of the ion beam passing through is g x , as shown in Figure 4A. When the compensation voltage deviates from U M , as shown in Figure 4B, the width of the ion beam capable of passing through is reduced. At the "half peak position" point, denote the net displacement by Δx 03 ; it is again smaller than Δx 01 without focusing. SIMION simulation of ion focusing effect After the above theoretical analysis, SIMION was used to simulate the FAIMS resolution with ion focusing. For a drift tube with a spacing of 0.5 mm, the height of the entering ion beam is reduced from 0.48 mm to 0.40 mm. In every simulation, the ion intensity at the entrance is set to 100, and the ion intensity at the exit of the drift tube is recorded at different compensation voltages. The simulation results are shown in Table 1. When the height of the entering ion beam is 0.48 mm, the maximum intensity is 45, and the corresponding peak position is −6.9. After interpolation, it can be calculated that the half peak positions are −7.0875 and −6.55278, and the resolution is 6.9/(7.0875-6.55278) = 12.9. On the other hand, when the height of the entering ion beam is 0.4 mm, the maximum intensity is 44, the corresponding peak position is −6.95, the half peak positions are −7.09722 and −6.70556, and the resolution is 6.95/(7.09722-6.70556) = 17.7. It can be seen that after the height of the entering ion beam is constrained, the resolution is improved. The improvement in the resolution is mainly due to the decrease in the FWHM. Therefore, focusing the ion beam entering the drift tube and reducing its height can help to improve the resolution of the system and enhance the performance of the instrument. SIMION simulation of ion focusing methods Through the above theoretical analysis and simulations, it can be seen that focusing the ion beam entering the drift tube can help to improve the resolution. In order to focus the ion beam, we extend focusing methods such as the ion lens and the ion funnel to atmospheric conditions. The focus structure built in SIMION is designed as shown in Figure 5. In front of the drift tube, five parallel electrode pairs are added to form a focus area, and voltages are applied to focus the ions. Simulation results of ion lens Under atmospheric conditions, we use the collision_sds database in the SIMION program to simulate ion trajectories. Figure 6 shows the trajectories of ions moving through the focusing electrode pairs under different static voltages. Table 2 shows the relevant data of the ion movement under different voltages. The positions of the upper and lower ion beam represent the highest and lowest locations of the ions after they pass through the focusing area, and the zero point of the vertical coordinate is set at the plane of the lower electrode in the focusing area. It can be seen that with higher voltage, the ion beam gets more focused and narrowed. Simulation results of ion funnel Here, we also use the collision_sds database of the SIMION program to simulate ion trajectories in an ion funnel under atmospheric conditions. The simulation was conducted under two conditions: one under the same frequency with different peak-to-peak voltages, and the other under the same peak-to-peak voltage with different frequencies. a) Simulation under the same frequency with different peak-to-peak voltages.
Figure 7 shows the trajectories of ions under the same frequency with different voltages, and Table 3 shows the relevant data of ion movement. Under the same frequency conditions, the higher the AC focus voltage, the smaller the width of the ion beam. b) Simulation under the same peak-to-peak voltage with different frequencies. Figure 8 shows the trajectories of ion movement under the same peak-to-peak voltage with different frequencies. Table 4 shows the relevant data of ion movement. At first sight, it seems from the simulation results that frequency does not have a large impact on the focusing effect. However, when we look at the details of ions leaving the focus area, as shown in Figure 9, we can see that with decreasing frequency, the direction of ions moving into the drift tube is no longer horizontal. The ions tend to diverge, which means a poorer focusing effect. This leads to an increase of the FWHM and is verified in the later experiment. Compared with an ion lens, an ion funnel can achieve a better focusing effect, and the ions are more concentrated. Overall, both the ion lens and the ion funnel have an obvious focusing effect on ions going through the focus area. For simplicity, we will call the ion lens method the "DC focus" mode, and the ion funnel method the "AC focus" mode. Experiment of FAIMS chip with ion focusing After the simulation of the ion focusing effect and the ion focusing methods, the experiment with FAIMS chips with ion focusing was conducted. In the experiments, nitrogen containing 10 ppm of acetone is adopted as the sample gas, the flow rate is 0.96 L/min, the power supply frequency of the asymmetric square wave is 1 MHz, and the duty cycle is 30%. The detection mode for positive ions is adopted. The DC focus mode Figure 10 shows the spectrum of acetone under different DC focus voltages when the peak-to-peak asymmetric waveform RF voltage is 850 V, and the red dashed line shows the FWHM. The spectra under different focus voltages are separated to be seen more clearly. Figure 11 shows the change of the FWHM with the focus voltage. When the peak-to-peak asymmetric waveform RF voltage is 850 V, DC focus can improve the resolution. The best focusing effect is achieved when the DC focus voltage is about 15 V: the FWHM is reduced by 2.124-1.9942 = 0.1298 V, and the resolution is enhanced by 2.124/1.9942-1 = 6.5%. In general, the FWHM decreases with higher voltage. The focusing effect is not very strong, and there are points that do not obey this rule. A possible reason is that the positive and negative ions of acetone are both generated by the UV lamp. In DC focus mode, focusing the positive ions means a divergence of the negative ions, so the interaction between the positive and negative ions can lessen the focusing effect. AC focus mode a) Same focus frequency with different focus voltages. In AC focus mode, the spectrum is shown in Figure 12, and it can be seen that the focusing effect is larger than that under DC focus mode, but the signal intensity is lower. This is mainly because ions vibrate more in AC focus mode than in DC focus mode, so the ion losses are bigger. When the asymmetric waveform RF voltage is 850 V, an RF focus voltage at a peak-to-peak value of 30 V achieves the best focusing effect: the FWHM is reduced by 2.2892-1.9706 = 0.3186 V, and the resolution is enhanced by 2.2892/1.9706-1 = 16.17%, as shown in Figure 13.
The "ion funnel" has focusing effects on both the positive and negative ions, so the resolution enhancement is bigger than that under DC focus mode. But the recombination of and collisions between positive and negative ions lead to more complex conditions than when there is only a single kind of ion, so there are still experimental points with an ineffective focusing effect. b) Same focus voltage with different focus frequencies. The spectra of the frequency-adjustment experiment are shown in Figure 14 and Figure 15, and the change of the FWHM with focus frequency is summarized in Figure 16A: under an asymmetric waveform RF voltage of 850 V, the FWHM is reduced by 2.2892-1.6638 = 0.6254 V, and the resolution is enhanced by 2.2892/1.6638-1 = 37.59%, as shown in Figure 16B. The experiment shows that the FWHM tends to decrease with increasing frequency. In the section "Simulation under the same peak-to-peak voltage with different frequencies" of this paper, the simulation results show that with a smaller focus frequency the ions tend to diverge and the focusing effect becomes poorer, which is consistent with this experimental trend. Parameters of ion focusing effect simulation In the simulation of the ion focusing effect, the peak-to-peak asymmetric waveform RF voltage is 690 V, the duty cycle is 30%, the frequency is 1 MHz, the gas flow speed is 6.667 m/s, and a(E/N) of the ion is 0.046. Parameters of ion focusing methods simulation In the simulation of ion focusing methods, the focus structure is shown in Figure 5. In front of the drift tube, five parallel electrode pairs are added to form a focus area. The spacing of the drift tube is 0.5 mm and the length is 10 mm. The spacing between the upper and lower focusing electrodes is also 0.5 mm. The length of each focusing electrode is 0.5 mm and the interval between pairs of electrodes is 0.5 mm. The gas flow speed is 3.6 m/s and the ions are acetone ions. Instrumentation of FAIMS chip experiment To verify the focusing effect, the FAIMS chip with ion focusing used in the experiment was designed and manufactured, as shown in Figure 1. A PCB board is adopted as the substrate plate of the device; rectangular pads are placed on the circuit board as electrodes. The PCB layout of the upper substrate plate is shown in Figure 17 and the PCB layout of the lower substrate plate is shown in Figure 18. The width of all electrodes is 10 mm. The length of the focus electrodes is 0.5 mm and the interval between them is also 0.5 mm. The length of the drift tube is 10 mm. The length of the detection unit electrodes is 5 mm. All geometries were confirmed with a slide caliper. PTFE plates with a height of 0.5 mm are placed between the PCB boards to define the spacing of the drift tube, and the device is sealed with silicone rubber. A UV lamp is used as the ion source and is positioned above the through-hole in the middle of the upper substrate plate. The other instruments used in the experiment are as follows: a bottle of nitrogen containing 10 ppm acetone (volume of 8 L, pressure of 10 MPa, Beijing Hua Yuan Gas Chemical Co., Ltd) is used to bring the sample into the FAIMS chip. A D08-1F-type flow indicator (Beijing Seven Star Electronics Co., Ltd.) is used to control the gas flow. The DC scan compensation voltage source and the asymmetric waveform RF voltage source are homemade. The DC scan compensation voltage source can provide a voltage changing from -15 V to +15 V with a step of 0.1 V. The asymmetric RF waveform source can provide a rectangular asymmetric waveform voltage with a peak-to-peak value of 2000 V, a frequency of 1 MHz, and a duty cycle of 30%.
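Throughout the simulations and experiments above, the resolution is obtained as R = U M /FWHM, with the half-maximum crossings of the spectrum found by linear interpolation. The Python sketch below illustrates that extraction; the compensation-voltage and intensity values are synthetic placeholders, not data from Table 1 or from the measured spectra.

```python
# Minimal sketch: compute FAIMS resolution R = |U_M| / FWHM from a spectrum
# by linear interpolation of the half-maximum crossings.
# The compensation-voltage/intensity values below are synthetic placeholders.
import numpy as np

def faims_resolution(cv, intensity):
    """cv: compensation voltages (monotonic), intensity: detected ion counts."""
    i_peak = int(np.argmax(intensity))
    u_m = cv[i_peak]
    half = intensity[i_peak] / 2.0

    def crossing(i_a, i_b):
        # Linear interpolation of the compensation voltage where the
        # intensity crosses `half` between samples i_a and i_b.
        x0, x1 = cv[i_a], cv[i_b]
        y0, y1 = intensity[i_a], intensity[i_b]
        return x0 + (half - y0) * (x1 - x0) / (y1 - y0)

    # Walk outwards from the peak to locate the half-maximum crossings
    left = next(i for i in range(i_peak, 0, -1) if intensity[i - 1] < half)
    right = next(i for i in range(i_peak, len(cv) - 1) if intensity[i + 1] < half)
    fwhm = crossing(right, right + 1) - crossing(left, left - 1)
    return abs(u_m) / fwhm

cv = np.array([-7.4, -7.2, -7.0, -6.9, -6.8, -6.6, -6.4])
intensity = np.array([5.0, 18.0, 40.0, 45.0, 41.0, 20.0, 6.0])
print(f"resolution ~ {faims_resolution(cv, intensity):.1f}")
```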
Conclusions After adding a focusing structure loaded with a DC or AC voltage in front of the drift tube, the height of the ion beam entering the drift tube is narrowed, leading to a decrease of the FWHM and an improvement of the resolution. Both the DC focus mode and the AC focus mode can improve the resolution; the resolution is increased by at most 37% under the AC mode. Further study to improve the resolution and stability can be carried out from the perspective of removing anisotropic ions. Overall, this focusing innovation is simple in structure and easy to miniaturize. The DC and AC voltages are easy to obtain, without increasing the complexity of the system. In addition, this method has no special requirements on the sample material or carrier gas, and can be used under atmospheric pressure, so it has the potential for a wide range of applications. Besides, this ion focusing method broadens the ways of enhancing FAIMS resolution and might be combined with other methods of increasing resolution. FAIMS works on the basis of ion behavior under an electric field. The design of the electric field plays an important role in FAIMS performance and will receive more and more attention in later FAIMS research.
Influence of national culture on the adoption of integrated medical curricula Integrated curricula have been implemented in medical schools all over the world. However, among countries different relative numbers of schools with integrated curricula are found. This study aims to explore the possible correlation between the percentage of medical schools with integrated curricula in a country and that country’s cultural characteristics. Curricula were defined as not integrated if in the first 2 years of the program at least two out of the three monodisciplinary courses Anatomy, Physiology and Biochemistry were identified. Culture was defined using Hofstede’s dimensions Power distance, Uncertainty avoidance, Masculinity/Femininity, and Individualism/Collectivism. Consequently, this study had to be restricted to the 63 countries included in Hofstede’s studies which harbored 1,195 medical schools. From each country we randomly sampled a maximum of 15 schools yielding 484 schools to be investigated. In total 91% (446) of the curricula were found. Correlation of percent integrated curricula and each dimension of culture was determined by calculating Spearman’s Rho. A high score on the Power distance index and a high score on the Uncertainty avoidance index correlated with a low percent integrated curricula; a high score on the Individualism index correlated with a high percent integrated curricula. The percentage integrated curricula in a country did not correlate with its score on the Masculinity index. National culture is associated with the propensity of medical schools to adopt integrated medical curricula. Consequently, medical schools considering introduction of integrated and problem-based medical curricula should take into account dimensions of national culture which may hinder the innovation process. Introduction Globalization has confronted higher education with cross-cultural issues. For instance, currently students from developing countries may enroll in higher education in industrialized countries. Apart from adapting their daily lives to another culture, these students also may have to adapt to a pedagogical approach which may be different from that encountered in their secondary school education (Charlesworth 2008). One may also wonder whether new didactic approaches, like problem-based learning (PBL) developed in industrialized countries, can also be applied in different cultural settings (Gwee 2008). The current interest of universities in developed countries to incept satellite institutions in developing countries has added cross-cultural management of education to the issues pertaining to cross-cultural teaching and learning (Eldridge and Cranston 2009). In this study we aim to investigate the impact of national culture on the propensity of educational institutions to adopt educational innovations. We explored whether national culture is related with the relative number of medical schools in a country that adopted integrated medical curricula. As of the middle of the previous century medical curricula based on monodisciplinary courses in basic and (pre-)clinical sciences have been challenged. 
Major disadvantages identified for this discipline-based curriculum model were (1) exclusion of contacts of students with patients in the pre-clinical phase; (2) the haphazard sequence of presentation of basic sciences courses frustrating integration in a knowledge-base relevant for clinical contexts; and (3) departmental autonomy over the courses yielding programs to educate mini-scientists (Papa and Harasym 1999). In response, innovative curricula were constructed built from educational units focusing on organ systems or clinical problem areas like pain or blood loss. For such 'integrated curricula' both integration of basic sciences ('horizontal integration') and of basic sciences with clinical sciences ('vertical integration') was advocated (Harden et al. 1984). Integrated curricula have been implemented by a growing number of medical schools all over the world, including schools based in industrialized and in developing countries. However, differences exist between countries with respect to the relative number of medical schools that adopted integrated curricula. Focusing on Europe a preponderance of schools with problem-based learning (PBL) curricula was observed in the North of Europe and few successful implementations of such curricula in the European Mediterranean countries. An impact of national culture on the successful implementation of PBL and integrated curricula was supposed (Jippes and Majoor 2008;Stevens 2009). In our 2008 study we demonstrated for 17 European countries a correlation between the relative number of medical schools with integrated curricula and two out of four dimensions of culture as defined by Hofstede (Jippes and Majoor 2008;Hofstede 2001). According to him, Power distance is 'the extent to which the less powerful members of institutions and organizations within a country expect and accept that power is distributed unequally'. A high score on the sliding scale of the Individualism/Collectivism index indicates Individualism and 'pertains to societies in which the ties between individuals are loose: everybody is expected to look after him/herself and his/her immediate family'. A low score on the Individualism/Collectivism index indicates Collectivism and 'pertains to societies in which people from birth onwards are integrated into strong, cohesive in-groups, which throughout people's lifetime continue to protect them in exchange for unquestioning loyalty'. A high score on the sliding scale of the Masculinity/ Femininity index indicates Masculinity and 'pertains to societies in which social gender roles are clearly distinct (i.e. men are supposed to be assertive, tough, and focused on material success, whereas women are supposed to be more modest, tender, and concerned with the quality of life)'. A low score on the Masculinity/Femininity index indicates Femininity and 'pertains to societies in which social gender roles overlap (i.e. both men and women are supposed to be modest, tender and concerned with the quality of life)'. Uncertainty avoidance is 'the extent to which the members of a culture feel threatened by uncertain or unknown situations. This feeling is, among others, expressed through nervous stress and in a need for predictability: a need for written and unwritten rules'. European countries scoring high on Hofstede's indexes for the dimensions 'Power distance' and/or 'Uncertainty avoidance' had relatively less medical schools with integrated curricula. No correlation was found with two other of Hofstede's dimensions of culture, i.e. 
'Individualism/Collectivism' and 'Masculinity/Femininity'. Based on a literature review, Bland et al. (2000) identified 13 factors contributing to successful curriculum change. Some of these factors were also emphasized in a book chapter published by Davis and White (2002). As indicated by the latter authors, studies like these may be biased towards North America and thus almost eliminate the possible impact of different national cultures. On the other hand, medical schools world-wide consider or attempt to introduce integrated and PBL curricula. Being aware of the potential impact of culture on the innovation process and trying to circumvent possible negative aspects may help to prevent frustration and waste of time and money. Therefore, this study aims to investigate at a global scale whether a relation exists between the relative number of medical schools with integrated curricula in a country and that country's scores on Hofstede's indexes for four dimensions of culture. Based on our findings for Europe we hypothesized that countries scoring high on Hofstede's indexes for the culture dimensions 'Power distance' and/or 'Uncertainty avoidance' have relatively fewer medical schools with integrated medical curricula than countries scoring low on the indexes for these dimensions. No relation was presumed with the culture dimensions 'Individualism/Collectivism' and 'Masculinity/Femininity'. Methods To investigate the influence of national culture on the adoption of integrated medical curricula, 'national culture' and 'integrated curricula' had to be defined and operationalized. A representative sample of medical schools from all over the world was needed that would allow for testing the above hypothesis. Definition of national culture In the 1970's Hofstede surveyed, through questionnaires, employees of IBM branches in 80 countries dispersed over the world to record their perception of organizational culture in the office. Criticism of Hofstede's derived construct of dimensions of culture includes the restricted population sample of IBM employees and his presumption that each country harbours one culture. Nevertheless, Hofstede's dimensions of culture are widely adopted and suited to perform our studies. Forty individual countries were included in his initial studies (Hofstede 1980). Later another 10 countries and 3 clusters of countries were added: Arab World, seven countries; East-Africa, four countries; and West-Africa, three countries. Clusters of countries were created because the countries therein did not meet Hofstede's inclusion criteria for individual countries (Hofstede 2001). From his data Hofstede extracted the four dimensions of culture quoted in the Introduction and a fifth one: Short/Long term orientation. In principle, for each country and each cluster of countries, scores on semi-quantitative indexes for each dimension of culture were calculated. However, the dimension 'Short/long term orientation' could not be included in this study because only a limited number of countries were assessed on that dimension. Furthermore, Yugoslavia was deleted from Hofstede's selection of individual countries because the country does not exist anymore. Assessment of integrated and non-integrated curricula The first 2 years of the curricula of medical schools were assessed to differentiate between integrated and non-integrated curricula.
The curriculum was scored as non-integrated if at least two of the common preclinical disciplines Anatomy, Physiology and Biochemistry were presented as individual courses (Aziz and Cullen 1994). If none or only one of these courses was found, the curriculum was assumed to be integrated. Screening of curricula was performed independently by both authors. Discordant classifications of curricula (41 out of 461 cases) were re-examined and discussed to reach consensus. Sample of medical schools The design of this study dictated that only medical schools based in the 64 countries investigated by Hofstede (minus Yugoslavia) could be used to sample medical schools from. According to the World Directory of Medical Schools (WDMS), in 2003 these countries and clusters of countries harbored 1,184 medical schools (WHO 2003). Taiwan was not represented in that directory. For Taiwan 11 medical schools were sampled from the International Medical Education Directory (IMED) of the Foundation for Advancement of International Medical Education and Research (FAIMER 2008), yielding a total number of 1,195 medical schools. If the number of medical schools in an individual country did not exceed 15, all schools were included. From 19 countries with more than 15 medical schools (range 16-148), 15 schools were sampled at random, representing at least 10% of the total number of schools in that country. Four countries assigned by Hofstede to the cluster East-Africa (Ethiopia, Kenya, Tanzania and Zambia) contained eight medical schools, which were all included. Seven countries included in the cluster 'Arab World' (Egypt, Iraq, Kuwait, Lebanon, Libya, Saudi-Arabia and the United Arab Emirates) harbored 40 medical schools from which a stratified random sample of 15 was drawn. The same procedure was applied to the 19 medical schools based in the three countries assigned to the cluster West-Africa (Ghana, Nigeria and Sierra Leone). These sampling procedures yielded a final sample of 484 medical schools based in 63 countries representing all continents. Collection of information on curricula Websites of the sampled medical schools were searched for specification of their curriculum for the first 2 years. Information on most European schools was collected from June-August 2006 and on all other schools from June-October 2008. If a school's website could not be found, or if the website did not yield adequate information, that medical school was contacted by e-mail. In case the request by e-mail elicited no response, a fax was sent, if necessary followed by a surface mail. If all attempts to establish contact failed, colleagues in the same country as the unresponsive school were asked for help to retrieve information. Websites and electronic files provided in languages not mastered by us were translated into English through the website 'www.translate.google.com'. The languages of Thailand and Indonesia were not supported by this site; for that purpose, respectively, 'www.thai2english.com' and 'www.yyy.sederet.com/translate.php' were used. If translation programs did not yield adequate information, bilingual colleagues were contacted to translate the essential information into English. Statistical analysis SPSS version 15 was used to calculate Spearman's Rho: correlation coefficients (CC) between the percent medical schools with an integrated curriculum in a country and that country's scores on the respective indexes of four of Hofstede's dimensions of culture. A correlation was considered significant if P < 0.05 (2-tailed).
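The correlation analysis described above was carried out in SPSS version 15; as an illustration of the same calculation, the short sketch below computes Spearman's Rho and the two-tailed P value with SciPy. The country values are invented placeholders, not the study's data.

```python
# Minimal sketch of the correlation analysis described above, using SciPy
# instead of SPSS. The values below are invented placeholders, not study data.
from scipy.stats import spearmanr

# percent of sampled schools with an integrated curriculum, per country
pct_integrated = [0, 10, 25, 40, 55, 70, 85, 100]
# the same countries' scores on one of Hofstede's indexes (e.g. Uncertainty avoidance)
index_score = [95, 90, 80, 70, 60, 45, 35, 25]

rho, p_value = spearmanr(pct_integrated, index_score)
print(f"Spearman's Rho = {rho:.3f}, two-tailed P = {p_value:.4f}")
# A correlation would be considered significant if P < 0.05 (two-tailed).
```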
Results Satisfactory information on the first 2 years of the curriculum could be collected from 466 of the 484 medical schools included in the sample (91%). The curricula of nine medical schools in Iran could not be directly accessed. However, two colleagues in Iran independently assured us that medical schools in their country all had similar, non-integrated curricula. In Venezuela information on only five out of the nine medical schools in that country was obtained, and therefore this country was excluded from further analyses. Eventually a total of 461 medical curricula were included in the analysis. In 14 countries none of the medical schools examined had an integrated curriculum, and in 6 countries all medical schools had integrated curricula. Overall 134 of the 461 medical curricula examined (29%) were classified as integrated (Table 1). Scatter plots for the variables 'percent integrated curricula in a country' and that country's 'score on the index' for each of the four dimensions of culture are shown in Fig. 1. Significant negative correlations were found between percent integrated curricula and a country's score on the Power distance index (CC = -0.352, P = 0.01) and the Uncertainty avoidance index (CC = -0.658, P = 0.000), and a significant positive correlation with the score on the Individualism index (CC = 0.387, P = 0.005). No significant correlation was found between the percentage of integrated curricula in a country and that country's score on the Masculinity index. Discussion A significant correlation was found between the relative number of integrated medical curricula in a country and that country's scores on indexes for three dimensions of culture as defined by Hofstede (2001). In accord with our hypothesis, a high score on Power distance and a high score on Uncertainty avoidance each correlated with a low percentage of integrated curricula in a country. At variance with our working hypothesis, a high score of a country on the Individualism index was also found to correlate significantly with a high percentage of integrated curricula. In our study focusing on Europe, explanations for the correlation of strong Power distance and strong Uncertainty avoidance with a low percentage of integrated curricula were presented. In brief, we reasoned that implementation of an integrated curriculum requires a shift from departmental control over courses to curriculum control by multidisciplinary committees (Majoor and Kolle 1997). In schools in countries with strong Power distance, professors may independently design the courses in their respective disciplines. By contrast, integration of the curriculum requires discussions with staff from different departments in interdisciplinary settings. Strong Power distance may impede heads of departments from effectively participating in such negotiations. Curriculum change in a school in a country with strong Uncertainty avoidance may be difficult due to adherence of staff to existing national laws and university rules. ''Fear of the unknown'' may hamper curriculum innovation in those countries. Moreover, in some studies a significant correlation has been observed between the cultural dimensions Power distance and Uncertainty avoidance (Jippes and Majoor 2008; Hofstede 1991).
With respect to Individualism, it has been demonstrated that managers in individualistic societies prefer undertaking innovations outside organizational norms, rules and procedures (''renegade championing''; Shane and Venkataraman 1996). This may explain the correlation between a high score on the Individualism index and a high percentage of schools with integrated curricula. Conversely, in a society with a strong emphasis on Collectivism, harmony and mutual respect are very important. To change from a departmentally controlled curriculum towards an integrated curriculum, negotiations among colleagues from different departments are necessary, which could elicit conflict and therefore may rather be avoided. Another possible explanation for the correlation between Collectivism and a low percentage of integrated curricula may derive from the relation between the Individualism/Collectivism dimension and gross domestic product (GDP). Strong Collectivism in a society has been shown to correlate with a low national GDP (Hofstede 2001). Obviously, innovation of medical curricula (including transformation from non-integrated to integrated curricula) may be obstructed by lack of financial resources. For the countries examined in this study a low percentage of integrated curricula correlated with a low GDP (CC = 0.491, P = 0.000, two-tailed). No correlation was found between the percentage of integrated curricula in a country and its score on the Masculinity index. Departing from the theoretical readiness of students to accept PBL, countries scoring low on Masculinity (and low on Uncertainty avoidance) were reasoned to be more likely to adopt integrated curricula than countries scoring high on Masculinity (and high on Uncertainty avoidance; Stevens 2009). However, several limitations of this study must be taken into account when estimating the reliability and validity of the outcomes. First, the definition of culture chosen may have negatively affected the validity of this study. Criticism of Hofstede's dimensions of culture includes doubts about the validity of his concept of 'national culture'; for instance, Baskerville (2003) questioned whether cultures can be equated with nations. Moreover, alternative descriptions of culture do not attribute 'quantitative' scores on 'cultural dimensions' to individual countries or regions (see for instance Schwartz and Bilsky (1987); Smith and Charles (2004); Trompenaars and Woolliams (2005)). Hence these descriptions of culture are not suited for a study as presented here. Second, the inclusion criteria applied to define the world-wide sample of medical schools may have affected the reliability of the results. The World Directory of Medical Schools of 2000 was used to identify medical schools in Europe (WHO 2000). To list all schools beyond Europe, the 2003 update of the WDMS was used because 29 non-European schools were added compared to the 2000 edition (WHO 2003). The 2003 WDMS was used rather than a more recent update to ascertain that curricula had been in place for at least 5 years. Third, overall from 9% of the schools in the sample no information could be obtained. There may be a bias in those non-responsive schools in terms of these being more conservative (e.g., because they do not feature a website) and having non-integrated curricula. Fourth, another restriction with respect to the validity of the outcomes pertains to the discrimination of integrated and non-integrated curricula.
Although the criterion of the presence of two out of three basic sciences courses (i.e., Anatomy, Biochemistry and Physiology) is unambiguous, new names for ''old'' courses incidentally forced us to judge whether, for instance, 'functional morphology' was similar to anatomy and 'molecular chemistry' to biochemistry. In some medical curricula of schools in Latin America we found 'morphophysiology' courses occupying a prominent part of the program for the first 2 years, including content matter from the three basic sciences specified above. Colleagues in Latin America assured us that such 'morphophysiology' courses are taught in the context of organ systems, and therefore should be considered integrated. In Indonesia, Japan, South Korea, as well as in the U.S., we encountered one school each with a curriculum which was not integrated in the first and/or second year but clearly integrated in subsequent years. Such curricula may be referred to as 'hybrid curricula' (Nandi et al. 2000; Jaffarey 2001). In accord with our criterion these curricula were scored as non-integrated. We verified that changing the classification of these four curricula to 'integrated' did not change the conclusions from this study. Fifth, another dilemma faced with respect to discriminating integrated and non-integrated curricula regarded schools in the U.S. and Canada, where medical programs are usually preceded by bachelor programs. For two reasons we decided to base our curriculum assessment on the first 2 years of the medical schools and to ignore the pre-medical bachelor programs. First, we intended to assess curricula of medical schools, and pre-medical bachelor programs may be offered by different schools. Second, even in medical schools which pioneered the implementation of integrated PBL curricula, like McMaster University in Canada and the University of New Mexico in the U.S., pre-medical bachelor programs were found to be non-integrated. To varying extents, reports in the literature supported our classification of integrated and non-integrated curricula. For the U.S., 70% of the medical schools were mentioned to have PBL (and thus integrated) curricula (Kinkade 2005); we scored 53% of the 10.6% of all U.S. schools in our sample as having integrated curricula. Reports on the individual curricula of Ziauddin Medical University in Pakistan (Huda and Brula 1999), University of Transkei in South Africa (Iputo 1999), University of Hong Kong (Nandi et al. 2000), National University of Singapore (Khoo et al. 2001), and Rosario University in Argentina (Carrera et al. 2003) confirmed our independent classification of their curricula as integrated. Three reports described isolated PBL courses offered by different medical schools in India (Vyas et al. 2008; Chandra et al. 1996; Ghosh and Pandya 2008). This finding is not incompatible with our conclusion, drawn from the sample of 10.3% of all medical schools in India, that all had non-integrated curricula. The same holds for the National Yang-Ming University in Taiwan, whose curriculum was assessed as non-integrated whereas some courses were reported to be taught in PBL format (Yu et al. 2000). Furthermore, we classified the curriculum of the University of Malaya in Malaysia as non-integrated although that school published an article on the process of implementing an integrated curriculum (Azila et al. 2001). Perhaps that school's curriculum changed after our assessment.
Although this study has some limitations, it demonstrates that national culture is associated with the propensity of medical schools to adopt integrated curricula. If a medical school is situated in a country with high scores on the indexes for Power distance and/or Uncertainty avoidance and/or a low score on the index for Individualism, and it considers adoption of an integrated or PBL curriculum, that school should take into account the potential hindering effects of these national cultural factors. To mitigate cultural barriers to curriculum innovation, resources are available providing advice with respect to strategies for organizational change, both in general (Kotter 2003; William 2003) and specifically for medical and health professions schools (Neufeld et al. 1995). We intend to expand our studies in two directions. Firstly, we aim to investigate whether scores on the indexes for the three cultural dimensions counteracting curriculum change act independently or synergistically. Secondly, we intend to explore why in some countries with cultural characteristics counteracting curriculum change surprisingly many schools succeeded in implementing integrated curricula. Studying the curriculum change processes performed in these 'outliers' may reveal factors which can possibly help to overcome adverse cultural conditions.
In Situ Observation of Crystalline Silicon Growth from SiO2 at Atomic Scale The growth of crystalline Si (c-Si) via direct electron beam writing shows promise for fabricating Si nanomaterials due to its ultrahigh resolution. However, to increase the writing speed is a major obstacle, due to the lack of systematic experimental explorations of the growth process and mechanisms. This paper reports a systematic experimental investigation of the beam-induced formation of c-Si nanoparticles (NPs) from amorphous SiO2 under a range of doses and temperatures by in situ transmission electron microscopy at the atomic scale. A three-orders-of-magnitude writing speed-up is identified under 80 keV irradiation at 600°C compared with 300 keV irradiation at room temperature. Detailed analysis reveals that the self-organization of c-Si NPs is driven by reduction of c-Si effective free energy under electron irradiation. This study provides new insights into the formation mechanisms of c-Si NPs during direct electron beam writing and suggests methods to improve the writing speed. Introduction With the development of semiconductor technology, fabrication of crystalline Si (c-Si) from amorphous SiO 2 via direct electron beam writing is a promising method to fabricate Si-based nanodevices [1][2][3]. It is a one-step resistless process which avoids the resolution loss during development in a developer [4][5][6][7][8]. However, the writing speed is still the main handicap for practical applications due to only one pixel being exposed at a time [9,10]. To improve the exposure speed is critical for direct electron beam writing. But it has been suggested that the writing current is limited by the Coulomb interaction between electrons, which causes beam blurring and loss of resolution [11]. Fully understanding the growth mechanisms of c-Si from SiO 2 is essential for increasing exposure speed during direct writing. Du et al. obtained c-Si nanodots under 200 keV electron irradiation at ambient temperature with a dose of 1 × 10 8 C m −2 and attributed the formation of amorphous Si to valence electron ionization and the subsequent transformation to c-Si to the elastic displacement of Si with a threshold beam energy of approximately 150.2 keV [12]. However, Takeguchi et al. grew c-Si under 100 keV electron irradiation at 577°C [13], and Chen et al. fabricated a Si nanodot array under 100 keV irradiation at room temperature with a dose of approximately 10 9 C m -2 [14]. They believed that the Si nanodot formation mechanism is process-induced SiO 2 dissociation. Hence, the mechanism of formation of c-Si under irradiation is still unclear, and the quantitative understanding of the influence of temperature and beam energy is very limited. The interaction between high-energy electrons and SiO 2 can be considered elastic and inelastic scattering. Elastic scattering is the interaction of an electron with an atomic nucleus that will induce atom displacement by direct momentum transfer if the transferred energy is larger than the threshold displacement energy (T d ) [15]. The T d of O and Si are 9.3 eV and 18.6 eV, respectively [16]; thus, the O atoms are more easily displaced. Inelastic scattering is the interaction of incident electrons with atomic electrons that can lead to ionization. The atomic electrons are considered valence electrons and core-shell electrons in SiO 2 . 
Ionization of a valence electron creates only one hole in the valence band with a lifetime of approximately 10 -16 s, while ionization of a core-shell electron creates at least two holes in the valence band by the Knotek-Feibelman mechanism. The hole-hole correlations can block the resonant one-hole hopping process, thus increasing the lifetime of the holes to the order of 10 -14 s. The presence of holes in the valence band can induce a repulsive interaction between nearby nuclei and result in subsequent desorption of O. The desorption time for atoms is on the order of 10 -13 s, so core electron ionization has a larger probability of inducing dissociation of SiO 2 [17]. In this work, we apply in situ transmission electron microscopy (TEM) to investigate the detailed growth process of c-Si nanoparticles (NPs) under different electron energies and temperatures to explore methods to increase the exposure speed during direct electron beam writing. In situ transmission electron microscopy is a powerful and versatile tool for real-time investigation of the properties of nanomaterials under electron irradiation and external stimulation [18][19][20][21][22]. We demonstrate that amorphous SiO x (x < 2) NPs form first and then transform to c-Si NPs when amorphous SiO 2 is exposed to electron irradiation. Desorption of O in SiO 2 , induced by the elastic sputtering and the Knotek-Feibelman dissociation mechanisms, results in the formation of amorphous SiO x NPs. The critical dose for SiO x NP formation is independent of temperature and decreases with reducing electron beam energy. The formation of c-Si NPs is driven by the self-organization of Si atoms, which is caused by phase stability inversion between c-Si and amorphous SiO x under electron irradiation. Detailed analysis reveals that a larger effective free energy difference between SiO x and c-Si is critical to improve the speed during direct writing. This energy difference increases with decreasing electron beam energy and increasing temperature. The critical dose for c-Si NP formation can be decreased by three orders of magnitude to approximately 10 5 C m -2 under 80 keV irradiation at 600°C. Results and Discussion Figure S1 (Supporting Information) clearly shows the formation of nanoparticles in SiO 2 . A Si nucleus is then formed in the SiO x NP at a dose of 1.026 × 10 7 C m −2 , as shown in Figure 1(a3). Figure 1(a4)-1(a8) represents the growth process of the Si NP with increasing irradiation dose. The growth of c-Si NPs is induced by attachment of Si atoms to the nucleus driven by the free energy difference, which will be discussed later. Due to this growth mode, once misattachment of atoms occurs, the formation of twins is observed (Figure 1(a8)). The c-Si NP size is about 4 nm at a dose of 1.528 × 10 7 C m −2 . To investigate the influence of temperature, SiO 2 was irradiated at 25°C and 600°C. Figure 1(b1)-1(c2) shows images of amorphous SiO 2 after 300 keV electron irradiation at 25°C and 600°C. The images in Figure 1(b1)-1(b3) are high magnification TEM images revealing the changes in the surface from curved to flat and then to curved again as the irradiation dose increased at 25°C. The former process indicates the deformation of SiO 2 , which is attributed to beam-induced athermal activation of massive plastic flow and surface migration [23,24]. The latter process is the formation of amorphous SiO x NPs due to the desorption of O. The detailed process and low magnification images are shown in Figure S2 (Supporting Information).
In Figure 1(b4), a Si nucleus is observed in a SiO x nanoparticle when the dose of electron irradiation reaches 2.64 × 10 8 C m −2 . This dose is approximately one order of magnitude higher than that at 400°C, indicating that heating can accelerate the formation of crystalline Si. However, the critical dose for SiO x NP formation does not change noticeably, remaining on the order of 10 6 C m -2 . Hence, the critical doses for SiO x and Si NP formation will eventually overlap with increasing temperature. Figures 1(c1) and 1(c2) show that c-Si NPs appear directly at a dose of 1.0 × 10 7 C m −2 , without the observation of SiO x NPs, at 600°C. Figure S3 (Supporting Information) shows the morphological changes at different dose rates. The critical dose for NP formation does not change significantly, which indicates that this process is dose-dependent. The Si (111) lattices are shown in Figure 1(c2). Figure 1(d) shows the changes of the Si L 2,3 edge in the electron energy loss spectrum (EELS) with the increase of dose at 25°C. The edge at 99.8 eV is evidence of elemental Si, and it arises at a dose of 2.42 × 10 8 C m −2 , which is consistent with the observations from the TEM images. To investigate the influence of electron beam energy on the growth process of c-Si NPs, amorphous SiO 2 was irradiated under 80 keV at different temperatures. Figure 2(a)-2(d) shows the formation of c-Si NPs at 25°C. Amorphous SiO x NPs can be observed at the surface highlighted by the cyan dashed oval in Figure 2(a). The critical dose for the formation of SiO x NPs is approximately on the order of 10 5 C m -2 and is independent of temperature, as shown in Figures S4 and S5 (Supporting Information). The c-Si is formed at a dose of 5.21 × 10 7 C m −2 , and the Si (111) lattice fringes are represented in Figure 2(d). These two critical doses are both approximately one order of magnitude lower than those under 300 keV electron irradiation. The EELS in Figure 2(e) confirms the formation of elemental Si when the dose increases to ~10 7 C m -2 . As the temperature increases, the critical dose for growth of c-Si NPs decreases (Figure 2(f)-2(h)), the same as in the case under 300 keV irradiation. The direct formation of Si NPs is also observed under 80 keV irradiation at 600°C (Figures 2(h) and S5, Supporting Information). These results show that the sensitivity of SiO 2 for the fabrication of c-Si is higher under low energy electron irradiation at high temperature. Our findings differ from those proposed by Du et al., who believe that the formation of c-Si NPs results from the elastic displacement of Si atoms and that the threshold energy is 150.2 keV [12]. Since c-Si NPs form here under 80 keV irradiation, well below that threshold, the c-Si NP growth mechanism cannot be the elastic displacement of Si atoms. To elucidate the mechanism behind the growth process, we create phase diagrams for temperature and dose with different electron energies (Figures 3(a) and 3(b)). The phase diagram can be divided into four parts. Part I indicates the deformation of SiO 2 under low-dose irradiation, which has been discussed in other works [23,24]. Parts II and III represent the formation of amorphous SiO x and crystalline Si NPs. In Figures 3(a) and 3(b), the critical dose of SiO x NP formation is temperature insensitive, implying that this process is athermal. The critical dose is averaged over different temperatures and plotted in Figure 3(c), which indicates that the growth rate of SiO x NPs is faster under low electron beam energy irradiation.
In contrast, the critical dose of c-Si NP formation decreases exponentially with increasing temperature. The relationships between the critical dose and temperature are obtained by linear fitting in Figures 3(a) and 3(b) (Equations (1) and (2)). The slopes of Equations (1) and (2) are the same, indicating that the influences of temperature and electron beam energy are uncorrelated. The two critical doses intersect at approximately 600°C, above which direct formation of c-Si NPs is observed. The critical dose of crystalline Si formation at room temperature under different electron beam energies is shown in Figure 3(c); the influence of electron beam energy on the critical dose is estimated by linear fitting in Figure 3(c) (Equation (3)), where E is the electron beam energy in keV. Because temperature and electron beam energy act independently, Equations (1)-(3) can simply be combined into Equation (4).

Equation (4) implies that there are two types of mechanisms behind the growth process. One is electron irradiation-induced reduction of SiO2, which plays a part during the whole growth process (I➔II➔III). The dissolution of SiO2 can be induced by elastic sputtering and/or inelastic ionization. As discussed above, O is more easily sputtered, and in Figure 3(d) the elastic scattering cross section of O is approximately one order of magnitude larger than that of Si. The cross section is calculated with the McKinley-Feshbach equation [15]. Core electron ionization can also result in desorption of O, with a cross section σ_O = σ_Core · f_O · P(t_c) (Equation (5)), where σ_Core is the core electron ionization cross section, f_O is the fraction of interatomic Auger events that result in two holes localized in a bonding orbital, and P(t_c) is the probability that the two holes are still localized in the orbital at time t_c after being created in a surface bond orbital at time t = 0 [25]. The σ_O for SiO2 is 10^-26-10^-25 m^2 when the electron beam energy is 150-3000 eV [25-28]. We assume that f_O and P(t_c) remain constant with increasing electron beam energy because they are properties of SiO2. The core electron ionization cross section is obtained from Bote's analytical formulas and shown in Figure 3(e) and Figure S6 (Supporting Information) [29]. The ionization cross section at 80-300 keV is approximately one-tenth of that at 150-3000 eV. Hence, the O desorption cross section at 80-300 keV is approximately 10^-27-10^-26 m^2, which is comparable with the elastic sputtering cross section of O (Figure 3(d)). However, the creation of O vacancies leaves dangling bonds on Si atoms, and these unpaired electrons provide a channel for fast charge-transfer screening, which reduces the hole-hole correlation energy and shortens the lifetime of localized holes. Intuitively, an increase in the O vacancy density causes a decrease in P(t_c) in Equation (5); therefore, the desorption of O by core electron ionization is greatly suppressed. This indicates that both elastic and inelastic scattering are responsible for O desorption at the beginning of the process, and that elastic scattering then dominates as O vacancies are continuously generated, which means that the formation of c-Si NPs is mainly driven by the elastic displacement of O. The critical dose decreases with decreasing electron beam energy even though the elastic cross section in Figure 3(d) does not change markedly over this range; once the reduction of the O threshold energy by accumulated vacancies is taken into account, the corresponding displacement cross section (Figure 3(f)) decreases with increasing electron beam energy, matching the observed trend. Therefore, the elastic displacement of O is responsible for c-Si formation.
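One way to see why O, rather than Si, is elastically displaced over the 80-300 keV range is to compare the maximum energy an electron can transfer to each nucleus with its displacement threshold. The sketch below uses the standard relativistic knock-on limit T_max = 2E(E + 2m_ec^2)/(Mc^2); the threshold values are not measured in this work but are back-calculated from the threshold beam energies quoted in the text (about 64 keV for O and 150.2 keV for Si), so they should be read as illustrative assumptions.

```python
# Rough sketch: maximum energy an electron of kinetic energy E can transfer
# to a nucleus of mass number A, T_max = 2E(E + 2*m_e*c^2) / (A*u*c^2).
# The displacement thresholds T_d are assumptions back-calculated from the
# threshold beam energies quoted in the text (not values from this paper).

ME_C2 = 0.511e6      # electron rest energy, eV
AMU_C2 = 931.494e6   # atomic mass unit rest energy, eV

def t_max(beam_energy_eV, mass_number):
    """Maximum energy (eV) transferable to a nucleus of given mass number."""
    E = beam_energy_eV
    return 2.0 * E * (E + 2.0 * ME_C2) / (mass_number * AMU_C2)

MASS = {"O": 16.0, "Si": 28.1}
T_D = {"O": 9.3, "Si": 13.4}   # eV, assumed illustrative thresholds

for beam_keV in (80, 300):
    for atom in ("O", "Si"):
        tm = t_max(beam_keV * 1e3, MASS[atom])
        verdict = "displaceable" if tm > T_D[atom] else "below threshold"
        print(f"{beam_keV:3d} keV -> {atom:2s}: T_max = {tm:5.1f} eV ({verdict})")
```

With these numbers, O can be knocked out at both 80 and 300 keV, whereas Si displacement only becomes possible well above 150 keV, consistent with the cross-section ratio discussed in the next paragraph.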
The other mechanism in Equation (4) is temperature-dependent diffusion, which only influences the growth of c-Si NPs (II➔III). Figures 4(a) and 4(b) show that the equivalent average diameter of c-Si NPs is approximately 4 nm under both electron energies at 600°C; the average diameter is slightly smaller at 400°C (Figure S7, Supporting Information). The diffusion and condensation of Si atoms are essential for c-Si NP formation. Without irradiation, however, silicon oxide is energetically more favorable than crystalline silicon [26,30]. SiO2 under high-intensity irradiation is an open and highly dissipative system. Therefore, from the energy perspective, the growth of c-Si NPs is a self-organization process rather than an equilibrium thermodynamic process [31,32]. This ordering phenomenon has also been observed in the transformation of carbon onions into diamond under electron irradiation [33]. The ratio of thermally activated carbon atom jump rates across the interface between graphite (G) and diamond (D) obeys the relationship Γ_G→D/Γ_D→G = exp(−ΔG/k_B T), where ΔG is the Gibbs free energy difference between diamond and graphite, k_B is Boltzmann's constant, and T is temperature. Without irradiation, ΔG is positive and carbon atoms tend to diffuse from diamond to graphite. Apart from thermally activated jumps, ballistic jumps may lead to atom exchanges between graphite and diamond when the system is under electron irradiation. Taking these ballistic jumps into account, the nonequilibrium effective free energy difference is expressed using the Zaiser-Banhart equation [33] (Equation (6)), where r_th is the thermal jump rate of atoms across the interface, φ is the irradiation flux, and σ_G and σ_D are the elastic displacement cross sections in graphite and diamond, respectively. The equation is defined to describe phase stability under irradiation [33]. When the system is under electron irradiation at not too high a temperature (small r_th, φσ_G/r_th >> 1, φσ_D/r_th >> 1), the nonequilibrium free energy of diamond is reduced by approximately k_B T ln(σ_G/σ_D). Because σ_G is much larger than σ_D, the effective free energy difference is reduced below zero, which means an inversion of phase stability [33]. In this work, the displacement cross section of SiOx is larger than that of Si because of the much lower T_d(O). The ratio of the cross sections is approximately 10 when the electron beam energy is above 150.2 keV and effectively infinite between 64.0 and 150.2 keV, where only O can be displaced, as shown in Figure 3(d). The effective free energy of c-Si is thus greatly reduced, and the transformation from amorphous SiOx NPs to c-Si NPs is activated. Intuitively, the high vacancy density in SiOx produces many high-energy elemental Si atoms with strong diffusivity, whereas the low vacancy density in c-Si keeps its Si atoms stable, with weak diffusivity. Control experiments confirm that heating alone, up to 600°C, can neither induce the transformation from amorphous SiOx NPs to c-Si NPs nor reduce defects in the as-formed c-Si NPs (Figures S8 and S9, Supporting Information). This temperature is below the crystallization temperature of Si NPs, approximately 800°C [34]. However, owing to the high vapor pressure of Si, the sublimation temperature under ultrahigh vacuum is lower than the crystallization temperature, and sublimation of Si is observed when the temperature is increased above ~750°C (Figures 3(a) and 3(b) and Figures S10 and S11, Supporting Information) [19].
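To give a feel for the size of this phase-stability inversion, the limiting case quoted above reduces the effective free energy of the irradiation-favored phase by roughly k_B T ln(σ_SiOx/σ_Si). The sketch below evaluates that shift for the cross-section ratio of about 10 quoted for beam energies above 150.2 keV; it is a rough, illustrative estimate, not a calculation reported in the paper.

```python
# Minimal sketch of the limiting case phi*sigma/r_th >> 1: the effective free
# energy of the product phase drops by about k_B * T * ln(sigma_ratio).
# sigma_ratio ~ 10 is the value quoted in the text for energies above 150.2 keV.

import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def free_energy_shift(temperature_K, sigma_ratio):
    """Approximate reduction of the effective free energy, in eV per atom."""
    return K_B * temperature_K * math.log(sigma_ratio)

for t_celsius in (25, 400, 600):
    shift = free_energy_shift(t_celsius + 273.15, 10.0)
    print(f"T = {t_celsius:3d} C -> k_B T ln(10) ~ {shift:.3f} eV per atom")
```

The shift roughly triples between room temperature and 600°C, in line with the faster c-Si growth observed at higher temperature.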
At even higher temperatures, only the dissociation of SiO2 is observed, without the formation of SiOx or Si NPs, when SiO2 is irradiated at 900°C (Figure S12, Supporting Information). Directly heating SiO2 up to 1000°C, or even repeatedly cycling it between 25°C and 1000°C, does not cause the growth of SiOx or Si NPs (Figure S13, Supporting Information). All these results reveal that electron irradiation is the key factor in c-Si NP formation; heating effects induced by the irradiation itself are also excluded.

The growth process can be divided into two stages, as shown in Figures 4(c)-4(f): the formation of amorphous SiOx NPs and the formation of c-Si NPs. As discussed above, the desorption of O is initially induced by elastic sputtering (O⁰) and core electron ionization (O⁺), resulting in the growth of silicon suboxide (Figure 4(d)). This stage does not involve the thermal diffusion of atoms, so its critical dose depends on the electron beam energy but not on temperature. The continuous generation of O vacancies not only suppresses core electron ionization-induced desorption but also reduces the threshold energy of O, which increases the displacement cross section in SiOx and reduces the effective free energy difference between Si and SiOx. Once the energy difference becomes negative, Si atoms start to condense and c-Si NPs nucleate from the SiOx NPs, as shown in Figures 4(e) and 4(g). With increasing dose, the amorphous SiOx NPs transform completely into c-Si NPs. Equation (4) indicates that the critical dose for c-Si NP formation decreases when the temperature is increased and the electron beam energy is reduced. From the energy perspective, increasing the temperature and reducing the electron beam energy both lower the effective free energy of c-Si, leading to higher rates of Si atom diffusion from SiOx to Si; higher rates mean a shorter growth time and a lower integrated dose. When the temperature is above 600°C, the c-Si NPs grow so fast that amorphous SiOx NPs cannot be observed in our experiments. It is worth noting that the constant associated with E in Equation (4) is essentially the threshold electron beam energy (64 keV) for O displacement in SiO2 [16]. This may imply that, to trigger the growth process, the electron beam energy should be larger than ~65.62 keV. Similarly, the constant associated with T may imply that the self-organization process only occurs above -105.03°C; amorphization of crystalline Si under electron irradiation has been observed at -248°C [35]. Confirming the physical meaning of these two constants, however, will require further detailed experiments.

Conclusion

In summary, the formation of c-Si NPs from amorphous SiOx NPs is a self-organization process induced by the elastic displacement of O. The high displacement cross section of O in SiOx significantly reduces the effective free energy of c-Si, causing a phase stability inversion between c-Si and the amorphous oxide and thus promoting the growth of c-Si NPs. Quantitative experiments reveal that the critical dose for c-Si NP formation decreases exponentially with increasing temperature and with decreasing electron beam energy. The exposure speed during direct electron writing can be enhanced by three orders of magnitude under 80 keV irradiation at 600°C. The formation of amorphous SiOx NPs is attributed to O desorption induced by elastic sputtering and core electron ionization, the latter of which is suppressed by a high density of O vacancies.
The critical dose for SiOx NP formation is temperature independent and decreases under low electron beam energy owing to the higher scattering cross sections. Our work reveals the detailed mechanism and the quantitative conditions of this fabrication process and provides valuable information for the direct electron beam writing of Si in SiO2.

4.1. Preparation of Samples. The amorphous SiO2 was purchased from Xianfeng Nano Materials Co., Ltd. First, the powder was dispersed in deionized water. After sonication for 30 min, a drop (~10 μL) of the suspension was placed at the center of the heating chip (Aduro 100, Protochips Inc., and Wildfire, DENSsolutions Inc.) and dried under ambient conditions.

4.2. In Situ TEM Observation. The growth process was conducted in a Cs-corrected transmission electron microscope (FEI Titan 80-300) with a beam current density of 0.31-5.4 × 10^4 A m^-2 at 300 keV and 1.28 × 10^4 A m^-2 at 80 keV.

Figure S1: morphological changes corresponding to Figure 1 at 400°C under different doses. Figure S2: morphological deformation of SiO2 and formation of SiOx NPs at 25°C under different doses. Figure S3: morphological changes at 600°C under different dose rates when the electron energy is 300 keV. Figure S4: 80 keV electron beam irradiation effects at 25°C. Figure S5: morphological deformation and NP formation at different temperatures under 80 keV. Figure S6: inelastic cross section of the Si L shell for low-energy electrons. Figure S7: crystalline Si NP equivalent diameter distribution. Figure S8: heating effects on as-formed SiOx NPs without electron irradiation. Figure S9: heating effects on as-formed c-Si NPs without electron irradiation. Figure S10: sublimation of as-formed crystalline Si NPs at high temperature. Figure S11: heating effects when the temperature was increased above 800°C. Figure S12: fast dissolution of SiO2 at 900°C under 300 keV irradiation without c-Si NP formation. Figure S13: heating effects on SiO2 without electron irradiation. (Supplementary Materials)
Imagine, Drawing, Representation. Representation of the Project †

Today the teaching of drawing is required to follow a framework in fast and continuous development, one that demands both speed and flexibility in adapting contents and organization. The new aesthetic values in the representation of the project come with the need to develop new techniques consistent with digital tools. This paper presents a recent experience conducted at the Scuola del Design of Politecnico di Milano with first-year students of the course "Strumenti e Metodi del progetto" of the degree program in Design degli Interni, developed from a short workshop with a group of Chinese students and focused on representing the image of the city. The workshop provided the opportunity to become familiar with photo-editing software. The students experimented with digital collage as a graphic technique consistent with the fragmentation, in some cases contradictory, of urban imagery, in order to develop a three-dimensional form of expression, a booth, to be inserted into the real city (the work of the students was designed, coordinated and monitored by Sara Conte, teaching assistant and tutor of the course "Strumenti e metodi del progetto").

Introduction: The Teaching of Drawing, Image and Project

In the last twenty years the teaching of drawing in design has been affected by the emergence of digital technology, which has put consolidated convictions and approaches under strain. This has revolutionized the visual culture of the project, which was based on drawing to represent the project in both technical and descriptive images; traditionally, the finished work was anticipated by perspective views, geometrically correct and plausible but not liable to be confused with reality. In the twentieth century the drawing of architecture strengthened expressive forms consistent with design research, conditioned by the aversion of the artistic avant-gardes to academically oriented imitative drawing, perceived as false and misleading. In recent decades, it has found a tool capable of producing virtual models so faithful to reality that realistic simulation has been adopted as the privileged solution for displaying the project, moving away from drawing and borrowing visual references from other arts, cinema in particular. The shift linked to the potential of virtual modeling involved the methods used to conceive and design the image, but also the figurative references of the visual culture of the project. When the simulation of computer graphics shifted attention from drawing to representation, the vibrant tradition of architectural drawing lost ground to the enthusiasm for three-dimensional modeling and photo-realistic rendering, losing the essence of the former in relation to the latter. The potential of this type of representation, capable of offering virtual and dynamic images derived from a single digital model, has nonetheless supported the languages and traditional codes of static images, which characterized the graphics of technical drawing and the general views of the project [1]. The graphics of the project have sought a new repertoire to reassess the static image of drawing, proposing visual references more consistent with contemporary tools and redefining the concept of rendering, in balance between image and design.
The deep dissatisfaction of the "digital migrant" generation with photo-realistic rendering was born from an aversion to misleading simulation, instinctively traced back to the relationship between the "good drawing" and the "bad project". Actually, realistic representation has a semantic basis in the first meaning of the word image, which is particularly suited to describing the representation of a virtual model. This term expresses first of all the recognisability of an object through its appearance: "the connotations connected to the exterior appearance underline the risk of false evidence and superficiality of judgment, already emphasized by Plato in the myth of the cave. On the contrary, those related to representation emphasize the need for us to describe reality through depictions and models that help us to better understand the essence through visible manifestations that express the substance of form" [2]. In the contemporary sense, the concept has widened to refer to an abstract idea and therefore also to the project, which in drawing has found a tool capable of creating images that leave just enough to the imagination. Realistic rendering fascinates through the possibility of controlling colour rendering and interior lighting, and through its communicative effectiveness with non-professionals. In other words, "traditional" rendering, however useful, does not meet all the requirements of design. The issue of architectural drawing and the image of the project, caught between drawing and representation, is very complex. Unlike what had been hypothesized when digital technologies entered the world of representation, the possibility of developing images in a three-dimensional environment has not undermined the importance of drawing, which has returned to the training of architects and designers thanks to its synthetic and discrete nature, unsurpassed in expressing the formal essence of existing or projected reality. The two-dimensional, static image, which seemed unsuited to digital communication, is an important actor in contemporary visual communication, which is conceived as a multidimensional space made of entangled links and movement, but which also needs bedrock. Digital technology and the consequent development of the network have contributed to asserting the importance of images in communication, which today relies on their immediate effectiveness to attract and focus the attention of a casual and hasty audience, generalizing and amplifying the role already attributed to graphics in publishing and advertising. In this way the network reassesses the importance of the static image, an implicit character of drawing, in summarizing contents through their illustration. The specific discipline of visual communication is not born today: the Futurists had already applied its criteria, based on objective knowledge of perceptual effects, which linked the image to psychological and mental factors. Digital technology amplifies its scope, moving towards the development of increasingly engaging approaches and techniques. The image, easily memorable and recognizable without the "fatigue" of reading, is integrating and progressively replacing writing, to the point of overriding the text. Today visual communication is a new area of design alongside the consolidated ones of engineering, architecture and urbanism.
The communication codes have adapted, through drawing, to the times and ways of digital communication, conditioning contemporary visual culture and inevitably also the references of drawing in the presentation of the project. Drawing thus involves new specializations in which the image is placed at the center of the project, as the object of the project and not just as its representation [3]. This dual aspect is not limited to its original two-dimensional nature, but involves other arts with different dimensional references. In the digital multi-dimension, the expectations of the total work of art, which shook the avant-gardes of the last century and led to a turning point in the formal research of the visual arts, including architecture and the applied arts from which design was born, seem to be met, with a century of delay. Other aims, tools and methods will contaminate architectural drawing and the representation of the project, renewing the rules of what has emerged as an autonomous genre. As a consequence, the teaching of drawing in academia is changing: it now also covers new areas where drawing is not only representation but becomes a project, first of all visual communication, which has turned from a discipline of representation into a design specialization. The digital consciousness that comes from this variety of fields and references requires a rethinking of the contents of the teaching of drawing in the design disciplines. Today teaching must discern, but not divide, the design of the image and the image of the project. The diffusion of control over the digital image requires the integration of both methods into a single visual culture. The dichotomy between two-dimensional drawing and three-dimensional modeling has finally been resolved: representation uses three-dimensional modeling as one possible graphical technique to create two-dimensional images, which are later reworked; similarly, the development and management of the project use both drawing and model to design it. Software is the flexible and effective tool through which the project and the graphical techniques of its representation are managed. Design drawing is indeed based on the interaction of different techniques and software that coexist in digital drawing, increasingly integrated with analogue drawing. Today teaching has to support a constantly changing context. It therefore requires speed and flexibility; programs should be less rigid in their contents and more experimental in the use of tools for finding a suitable graphic language. This does not deny the importance of knowledge of tools and software control, but alongside the technical aspect of designing the image, it is necessary to redefine a visual culture of the project: that is, how to deliver results according to the tradition of drawing while also using digital representation tools. The search for a new visual culture of the project is contaminated by new representation techniques. They recall images from new disciplinary specializations in different application fields, enlarging the figurative repertoire with references far from the models of twentieth-century architectural drawing. Several examples demonstrate the potential of digital drawing in redefining the canons of the presentation of the project, such as Enric Miralles's prematurely interrupted research, which reworked the concept of collage to experiment with new expressive modes [4,5] (Figure 1).
Teaching is in transition, because the reliable references of the last century are missing. First of all, one has to consider what kind of drawing a designer needs and what the know-how of drawing is in the 21st century, in order to define which drawing we want to achieve and then which competences to develop. But it is not enough to decide what to teach students; it is necessary to do so effectively, because in many degree courses the hours of lessons have been reduced. In this way, teaching can only teach how to learn, leaving time for growth. What does it mean to know how to draw, or to do it digitally? The problem of a "good drawing" is not the skill of using the software, but the setting of the work and the ability to choose the most appropriate software, integrating them through drawing, according to the communication of the project. The main aim is not skill in using software, but control of the design image. Teaching must support a constantly changing context, updating in an organic way:

• The contents (the graphic language and the communication of the project, that is, the design image)
• The tools (the various software for digital drawing, that is, the technical skills in raster and vector graphics)
• The techniques (the search for a personal, meaningful language through the use of different graphical methods, that is, two- and three-dimensional drawings, raster and vector images and hand drawings)

The contraction of time caused by the subdivision into semesters requires that these three aspects be developed in parallel.

Experimentation: The Two Sides of the Drawing

The course "Strumenti e metodi del progetto", delivered in the second semester of the first year of the degree program in "Design degli interni", aims to lead the students to achieve a personal graphic sensibility in order to communicate a project coherently, using the encoded language of drawing and digital representation tools. Building on the skills acquired in the "Laboratorio del disegno" of the first semester, where the students learned the technical drawing code and developed manual expressive skills, the second semester introduces the tools and notions for software control. The software lessons, covering raster and vector graphics as well as 2D and 3D computer-aided design, complete the theoretical lessons on representation methods. The student can thus draw on a wide technical baggage, though not yet oriented to communication. This year the course started with a two-day workshop, during which the students had the chance to work with a group of students coming from two Chinese universities. The workshop involved about 60 students and was organized together with the teachers who accompanied the Chinese students: Prof. Ma Hui and Prof. Leng Hong, School of Architecture, Harbin Institute of Technology (HIT); Prof. Xiaohui Li, School of Architecture & Fine Arts, DaLian University of Technology (DLUT). The aim of the workshop was to design the concept for a temporary installation that described the city through its image, not necessarily representing a physical reality, to tourists and citizens. The group of Politecnico di Milano students, with few exceptions, was approaching digital drawing and graphics software for the first time, unlike the Chinese group, whose members were all in the third or fourth year of their respective degree programs.
In the first part of the day the Italian students were taught the basics of raster graphics: defining the unit of measure and managing image quality correctly, managing colour profiles, and an explanation of the interface and workflow of the most popular digital image-editing software, Adobe Photoshop. The short lesson was preparatory to the work of the following days, when reduced times would require a hybrid use of various techniques and tools to speed up and optimize the design work. Then the students from the three universities, assembled in small mixed groups, confronted the perceived image of the city and the idea derived from it. The specific case was Milan, seen on the one hand through the eyes of those who live in the city every day, and on the other through those of foreign tourists, with a culture different and distant from the Western one. The working groups were made uniform by matching two foreign students, one from each university, with every five Italian students. Students were asked to complete a table, selecting some keywords instinctively, in order to guide them in describing Milan through pictures and to encourage a first debate. The table, presented like a word game, was divided into three thematic areas:

• The physical elements, which make up Milan's architectural shape and structure
• The perception criteria, which portray Milan's environment through colours, smells, sounds and weather conditions
• The anthropogenic factors, which are detected among the activities, products and people that symbolically represent Milan

Each student reflected on the sensations and features that the city expresses and later compared their choices with the rest of the group, discovering how distant cultural backgrounds can lead to different interpretations of the same city, sometimes highlighting unexpected features and objects. The keywords chosen by the Italian students are fairly uniform and partly expected. The symbolic spaces identified are mainly the consolidated ones of the city, such as Piazza del Duomo, Parco Sempione, the Navigli, San Lorenzo and Galleria Vittorio Emanuele: typical and popular places that are in line with those chosen by the foreign students, as are the selected Milanese landmarks, which include new modern buildings such as Torre Unicredit and the Bosco Verticale and those of sporting appeal such as the Meazza Stadium in the San Siro district, in addition to the traditional ones. While the buildings and places that identify Milan are the same for the two groups of different culture, it is the atmosphere, described through colours and weather conditions, that is discordant. The adjectives used by the Chinese students to describe the city evoke calm and tranquility; Milan is defined as a slow city, a historical but modern, gentle, romantic and colourful place, but also fashionable and well organized, where the sun and the wind rule (thanks to the weather on the days of their stay). The adjectives used by the Italian students to describe Milan seem to tell of another city: the metropolis of everyday life is chaotic and noisy, hectic, polluted and international, where fog and rain rule, making it gray, a colour that prevails among those chosen to describe the city, along with the green of the parks. The red of the traditional buildings, the green of the outdoors, the blue reflections of the sky on the glazed curtain walls and the yellow of the sun are the colours used by the foreign students to describe the city.
The smell of fresh air, of grass and flowers, of good food, accompanied by the shouting of kids, the chirping of birds, the whistle of the wind and the squealing of the hearth, evoke a calm, free and clean city, frequented mainly by models, street performers, soccer players and tourists, where, taking time for themselves, people go shopping, take the dog out for a walk and lie down in the park to sunbathe. Once again the Italian view is the opposite: the music of the street performers, the horns, the traffic, the hum of people and the railway announcements are the main noises in the streets of Milan, which are also characterized by the smells of pollution, deodorants and cologne, the water of the canals and food. These images tell of a metropolis that works and is in a hurry, populated by businessmen, students, tourists, waiters, salespeople and ice-cream sellers, who nevertheless have time to visit an exhibition, have an aperitif or drink a coffee with friends. The analysis of the chosen keywords led especially the Italian students to reflect on the city. The descriptions of the city made by the Chinese visitors surprised the students and led them to consider their own vision superficial and stereotyped; this vision is typical of those who live in a city only for work or study, with a careless look at details and specificities, based on everyday life and the repetition of the actions that are carried out. Hence, the debate encouraged a deeper discussion on the city, perhaps not much appreciated by those who live in it, and on how many little-known buildings and places the city can offer a tourist. In the second part of the day the students interacted with each other to design a booth using pictures and drawings; the main aim of the Chinese students was to enhance the mix between the historical and the modern parts of the city, which the city often exhibits in streets facing each other; the focus of the Italian students was to tell the city through its symbolic objects (Figure 2). A project that represents this duality well is a parallelepiped with two entrances, internally fitted out with benches resembling those of a tram, where one can sit to watch pictures of the city. The external surface of the booth is divided into two parts: the first is mirrored and modular and represents the modern city with its glazed buildings, while the second is drawn with the outlines of some symbolic buildings, such as the Arco della Pace, the Castello Sforzesco and the windows of the Duomo (Figure 3). At the end of the two-day experience the groups presented their work. The goals of the workshop were both the design of the booth and the realization of a unified image of the city, but because of the short time and problems of linguistic communication, only partially solved by the use of drawing, the projects addressed the former rather than the latter. The short time favoured an immediate approach and a hybrid use of different tools of representation, and also stimulated synthesis in reporting the perception of the city through its salient features. The Italian students, during the academic course and after acquiring more skills in the use of digital processing software, experimented with translating the chosen words into images for the construction of a unified drawing of Milan, which tells of the various souls of the city.
The resulting collages describe, using organized overlapping or perspectival capriccio and in some cases lettering, the main features of the city in a digital reconstruction of an imaginary space made up of real buildings and objects. These digital collages were then reworked with different tools until they lost realism and became drawings. The Chinese students carried on the work too and described "their Milan" with collages. In the examples presented, the correspondence between the drawings and the antithetical views of the city is evident, as shown by the colours used and the texture of the sky. In the collages of the Italian students, gray scales and the historical sites of the city dominate, particularly Piazza Duomo and the Navigli, onto which modern buildings look out, while the Chinese view is that of a modern city, surrounded by greenery and dominated by a blue sky (Figure 4). The students reviewed their drawings for the final exam of the course "Strumenti e metodi del progetto", in order to use the booth as a support for the digital collage. In some cases the image itself and its construction process determined the design of the booth: for example, a sequence of aligned, transparent panels, each containing a part of the image (Figure 5). In other projects it is the city that exhibits itself. Five corrugated-board panels, shaped with different outlines, identify the souls of the city: monuments, museums, skyscrapers, churches and parks; each thematic panel shows other important buildings to visit through the drawing of a skyline. In another case it is the symbol of Milan universally acknowledged by both cultural groups, the Duomo, that exhibits the various views of the city. Panels of different heights, placed on a hexagonal platform, outline the Duomo; these walls make up a narrative path that collects all the images produced by the group members. Tourists and citizens can interact with the collages, positioned on rotating panels, to compose their personal idea of the city (Figure 6).

Conclusions: Towards the New Visual Codes

Drawing in architecture has always gone beyond the correct use of standards, in order to be an expressive form of drawing in itself, capable of combining conventions with the expressive research of graphic language. The emergence of digital technologies has extended the representational scenario of the project to the virtual representation of geometrical and perceptual reality. This opens the way to new tools and methods for designing the images that represent the project. The increasing relevance of visual communication, due to the potential of digital tools, has enlarged the boundaries of representation, confirming the central role of images and drawing but making the relationship between image and message, that is, between signifier and signified, more complex. The discipline of drawing has to lead students to learn digital drawing tools in order to develop an awareness of images, to enhance the contents of technical drawing and to develop their own form of expression. Architecture and its design are the result, as in art, of the ability to imagine new forms of expression, reworking the imagery of experience.
Mental images show themselves in a translated model that represents reality with a synthetic figuration through drawing: a two-dimensional image drawn up by the author, or other types of representation that add three-dimensionality to an artificial image with symbolic references, as in the case of the proposed collages and booths. The culture of the image, which characterizes digital society, enlarges the figurative repertoire of the representation of the project; however, architectural drawing keeps its original role. The accuracy of the immersion in virtual reality offered by digital representation does not satisfy the need for imagination, which is an intrinsic quality of drawing, identified by Michelucci as the gap between the project and the finished building: "There is always a gap between what one would have wanted to do and what one has been able to do, a coefficient of irrationality that the drawing documents, proposing a parallel and ideal trace of the architect's work" [6].
PriPA: A Tool for Privacy-Preserving Analytics of Linguistic Data

The days of large amorphous corpora collected with armies of Web crawlers and stored indefinitely are, or should be, coming to an end. There is a wealth of hidden linguistic information that is increasingly difficult to access, hidden in personal data that would be unethical and technically challenging to collect using traditional methods such as Web crawling and mass surveillance of online discussion spaces. Advances in privacy regulations such as GDPR and changes in the public perception of privacy bring into question the problematic ethical dimension of extracting information from unaware if not unwilling participants. Modern corpora need to adapt, be focused on testing specific hypotheses, and be respectful of the privacy of the people who generated their data. Our work focuses on using a distributed participatory approach and continuous informed consent to solve these issues, by allowing participants to voluntarily contribute their own censored personal data at a granular level. We evaluate our approach in a three-pronged manner: testing the accuracy of measurement of statistical measures of language with respect to standard corpus linguistics tools, evaluating the usability of our application with a participant involvement panel, and using the tool for a case study on health communication.

Introduction

There is a wealth of hidden linguistic information which is increasingly difficult to access, hidden in personal and private data that would be unethical and technically challenging to collect using traditional methods such as Web crawling and mass surveillance of online discussion spaces. Additionally, advances in privacy regulations and changes in the zeitgeist bring into question the problematic ethical dimension of extracting such information from unaware if not unwilling participants. Since the generation of knowledge from large amounts of empirical data is at the heart of corpus linguistics, its practitioners have long sought ways to protect the privacy of those who have generated it. However, so far the use of privacy-preserving methods has focused on post hoc processing such as automated anonymisation and de-identification. Those automated methods are severely lacking when faced with modern methods of re-identification and de-anonymisation; non-automated methods, on the other hand, are not as scalable. As a first step towards addressing this issue, we developed PRIPA (https://c19comms.wp.horizon.ac.uk/pripa), a software tool using a distributed participatory approach and continuous informed consent, allowing participants to stay in control of their data and only voluntarily contribute their own censored personal data on their own terms (McClaughlin et al., 2022). We evaluate our prototype by producing a comparison of word frequencies and collocate association scores between two standard state-of-the-art systems and PRIPA, showing that PRIPA is on par with those tools for some of their common features. We also produce a small-scale quantitative and qualitative evaluation of the tool by users of different levels of expertise, highlighting some key challenges in the production of privacy-preserving linguistic analysis tools. This paper is structured as follows: in section 2, we discuss the overall methodology of PRIPA, covering the general design for continuous consent and the software architecture; in section 3 we describe our evaluation methodology.
We finally conclude with key challenges and recommendations for further development in section 4.

Privacy-Preserving Corpus Linguistics

Privacy-preserving technologies allow for the processing of personal data in a way that minimises risks to the privacy of the people who generated it (Noble et al., 2019). There are several approaches to privacy-preserving analytics, which rely on different tools to protect this privacy: trusted execution environments, homomorphic encryption, secure multi-party computation, differential privacy, and personal data stores. We opt for the personal data store approach because it is the most compatible with the notion of continuous consent and granular sharing of data that is key to PRIPA; however, these approaches are not mutually exclusive, and further development of the tool will investigate additional privacy layers, such as differential privacy, for statistics which cannot be computed locally. Other privacy-preserving analytics projects also use the personal data store approach. Mozilla's Rally project (Mozilla, 2022), for example, focuses on passive monitoring of data volunteers for Web-based data. One key difference with PRIPA is that Rally does not differentiate websites of interest, while PRIPA predetermines a set of websites of interest from which statistics are collected. Additionally, Rally monitors a wider set of interactions, such as videos watched, time spent on each page, and all domain names of websites visited during the experiment, while PRIPA focuses on specific linguistic items. In the remaining parts of this section, we describe the overall design principles of PRIPA and contrast them with the requirements of the General Data Protection Regulation (GDPR). We then describe two key aspects of PRIPA for privacy-preserving corpus linguistics: the software architecture allowing data to be collected according to our key principles, and the user interface design allowing the informed consent of users to be monitored at each key step of the data collection process.

Design principles of PRIPA

Being privacy-preserving by design involves adherence to a set of principles, described in Table 1.

Table 1: Design principles of PRIPA.
P1 Participants are aware of the purpose of the experiment.
P2 Participants are aware of the parameters (web sites, words, time scale) of the data collection.
P3 The features of interest (words, statistical measurements, excerpts) are described in an intelligible way for the participants.
P4 Participants are aware of their right to anonymity.
P5 Participants can consult their data before it is shared with the researchers.
P6 Participants can decide to exclude selected results from the data that is shared with the researchers.
P7 Participants can decide to withdraw completely from a study at any time.
P8 If participants omit to remove personally identifiable information, the researchers should remove it before long-term storage of the data.

Instead of collecting the data on the online discussion platform, we recruit participants who install a plugin into their Web browser. The PRIPA plugin then allows participants to enrol themselves into different experiments. These experiments specify multiple things: the websites that will be watched, the words that will be observed, and the statistics that will be collected. The principles used to develop PRIPA aim to be compatible with modern Internet privacy regulations such as the European Union's General Data Protection Regulation and its United Kingdom counterpart. While it is possible to use PRIPA in a malicious way, the transparency in data collection helps make this more difficult.

Principle 1: Lawfulness, Fairness and Transparency. According to the first principle of GDPR, a service provider must specify a legal basis in order to collect data. PRIPA only collects data which are specific to an analysis that a participant has agreed to. Additionally, PRIPA enforces the asking of consent from the user at multiple stages of the analysis, and allows finer-grained control over which data points reach the central server. Items 1, 2, 3, 4, 6 and 7 from Table 1 correspond to this principle.
Principle 2: Purpose limitation. The linguistic analysis is defined before the collection of the data; purpose limitation is built into PRIPA's core.

Principle 3: Data minimisation. According to the third principle of GDPR, a service provider must only collect data that is adequate and limited to the claimed purpose of the system. The data to be collected is limited to the measurements defined in the experiment specification.

Principle 4: Accuracy. By allowing participants to consult their data and choose which data points to communicate to the researchers, and by allowing participants to remove their data after collection, PRIPA allows the information to remain accurate. Items 4, 5, 6, 7 and 8 from Table 1 correspond to this principle.

Principle 5: Storage limitation. The fifth principle of GDPR states that the service provider must not store data for longer than needed for the claimed purpose. This is not enforced in software, but the fact that PRIPA is integrated with the Microsoft Office 365 back-end for storage of results makes it easy to set data storage policies.

Principle 6: Integrity and confidentiality. By being integrated with the Microsoft Office 365 back-end for data storage, it is easy to enforce a higher level of security and protect whatever personal data was collected.

Principle 7: Accountability (UK GDPR). The United Kingdom's version of GDPR contains a seventh principle: accountability. This principle requires the service provider to take responsibility for the way personal data is used, and to keep appropriate measures and records to be able to demonstrate compliance. Much like principle 6, being tied to the Office 365 ecosystem means that existing systems for limiting the use of data and logging access to those datasets can be used out of the box.

Data collection process

PRIPA collects three types of linguistic information, illustrated in the sketch after their descriptions below.

Word frequencies. Word frequencies are the raw number of occurrences of the words in a specific word list, defined as part of the experiment. The word list is specific to the experiment, and as such a participant who does not want to share a specific word frequency needs to withdraw from the experiment in order to preserve the integrity of the data without violating their privacy.

Collocates. Collocates are pairs of words of interest (defined in a word list as part of the experiment) along with their strength of association, given a pre-specified window of words. The list of word pairs is specific to the experiment and, like word frequencies, a participant who does not want to share a specific word pair needs to withdraw from the experiment.

Concordance lines. Concordance lines are lines of text showing the context for a particular word, along with the source of that line. The size of the context is specified in the experiment, and the participant can review the list of concordance lines and exclude the ones they do not want to share.
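As an illustration of what these three measurements involve, the following Python sketch computes them locally on the text of a single page. This is not PRIPA's actual client, which is a JavaScript browser extension; the word list, collocate pairs, window size and context size are hypothetical stand-ins for values that would come from the experiment specification.

```python
import re
from collections import Counter

def tokenize(text):
    # Naive tokenisation; real tools differ (see the accuracy evaluation below).
    return re.findall(r"[a-z']+", text.lower())

def word_frequencies(tokens, word_list):
    # Raw occurrence counts for the words named in the experiment's word list.
    counts = Counter(tokens)
    return {word: counts[word] for word in word_list}

def collocate_counts(tokens, pairs, window=5):
    # Joint counts for word pairs of interest within a fixed window of tokens.
    joint = Counter()
    for i, left in enumerate(tokens):
        for right in tokens[i + 1:i + 1 + window]:
            if (left, right) in pairs or (right, left) in pairs:
                joint[tuple(sorted((left, right)))] += 1
    return joint

def concordance_lines(tokens, word, context=5):
    # Context snippets around each occurrence of a word of interest.
    return [
        " ".join(tokens[max(0, i - context):i + context + 1])
        for i, token in enumerate(tokens)
        if token == word
    ]

page_text = "We must wear a mask. You should wear a mask indoors."
tokens = tokenize(page_text)
print(word_frequencies(tokens, ["must", "should", "mask"]))
print(collocate_counts(tokens, {("must", "mask"), ("should", "mask")}))
print(concordance_lines(tokens, "mask", context=3))
```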
Architecture and design

PRIPA is built in a client-server architecture, where the server hosts experiments defined in a specific format using JSON syntax. Figure 1 describes the format. The specification allows for six main types of parameters, including: (1) the title of the study; (2) meta-instructions which apply to the entire experiment and contain details about the way text is meant to be processed (e.g., punctuation, casing, etc.); and (3) an allow list which specifies which websites are to be observed. The remaining parameters cover the word lists to be observed and the statistics to be collected.

Client-side data collection. The client of the application sits in a plug-in for Chromium-based Web browsers (e.g., Google Chrome, Microsoft Edge). We make use of the JavaScript regular expression engine to process word lists downloaded from the experiment server. Once the user selects an experiment they would like to take part in and accepts the disclaimers regarding the way their data will be processed and how they can access, modify or remove it, the PRIPA extension downloads an experiment specification file and watches for the opening or closing of specific websites (depending on the specification of the experiment). When such an action (open/close) is triggered, PRIPA attempts to extract the core of the webpage by ignoring banner ads and other informational noise, and runs the analysis based on the word lists provided in the experiment file. The data is stored in the Web browser itself, never leaving the participant's device until they have decided to share their data with the researcher.

Monitoring on tab open/close. Being able to collect data on either the opening or the closing of a tab/window is an important distinction for linguistic analysis. Since some websites dynamically load data based on user input (e.g., a Twitter feed or Facebook messages), collecting data at opening would not be effective. Collecting data at close allows for more flexibility in the data collection process by asking participants, for example, to scroll through a month of Twitter feed before closing the tab to start the analysis.

Server-side aggregation. The statistical measures collected by PRIPA can be aggregated after the fact. Word frequency can be aggregated with a simple sum, and collocate strength is measured using pointwise mutual information (Bouma, 2009), which can be aggregated using simple frequency measures and information about document length. The pointwise mutual information of two words w1 and w2 in a document d can be computed as PMI(w1, w2) = log[ f(w1, w2) · |d| / (f(w1) · f(w2)) ], where f(·) denotes a frequency count in d and |d| is the length of document d. We therefore only need to communicate individual and joint word frequencies, as well as the lengths of the web pages, in order to aggregate that measure over all participants.
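A minimal sketch of the server-side aggregation just described, assuming a simple per-participant message format: each participant submits only totals (individual frequencies, joint frequencies and page lengths), and the server recomputes PMI on the pooled counts. The field names and numbers are hypothetical, and Bouma (2009) also discusses normalised PMI variants that are not shown here.

```python
import math

# Hypothetical submissions from two participants for one word pair.
submissions = [
    {"f_w1": 12, "f_w2": 30, "f_joint": 5, "length": 4000},
    {"f_w1": 7, "f_w2": 18, "f_joint": 2, "length": 2500},
]

# Pool the counts; no raw text ever reaches the server.
f_w1 = sum(s["f_w1"] for s in submissions)
f_w2 = sum(s["f_w2"] for s in submissions)
f_joint = sum(s["f_joint"] for s in submissions)
total_length = sum(s["length"] for s in submissions)

# Plain PMI on the aggregated counts:
#   PMI = log2( (f_joint / N) / ((f_w1 / N) * (f_w2 / N)) )
pmi = math.log2((f_joint * total_length) / (f_w1 * f_w2))
print(f"aggregated PMI = {pmi:.3f}")
```

Because only these aggregate counts leave the browser, no raw text or concordance context is needed to recompute collocation strength across participants.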
Consent monitoring

In order for PRIPA to adhere to the principles laid out at the beginning of the project, the consent of the participants needs to be monitored at regular intervals whenever user data is manipulated. This is done at the following stages.

During the enrolment stage. The first stage of consent is whether the participant wants to enrol in the experiment.

During the activation stage. The second stage of consent is whether the participant accepts the collection of data from their device. Participants are asked to explicitly enable the data collection, which starts the monitoring of a specific and explicit set of websites. By explicitly enabling this monitoring, participants are informed that they can disable it at any moment.

During the review stage. When reviewing concordance lines, participants can choose to exclude specific data points they do not want to share by simply disabling a checkbox, as shown in Figure 2. A number representing the percentage of data censored by the participant is communicated in the results, so that the researcher can make an informed decision about whether to consider this data point.

During the submission stage. As shown in Figure 3, when submitting results to the researchers, participants are asked to consent to the process of sending their data, and can instead opt to stop the experiment and delete their data.

Evaluation and results

We evaluated our system in a three-pronged approach.

Accuracy of word counting. As pointed out by Anthony (2013), corpus linguistics applications often differ in their measurements because they apply different standards to the way they process text. For example, some software would break "We'll" into two word tokens, while some would keep it as a single word token. Small variations, repeated over large corpora, can lead to vastly different linguistic measurements and affect interpretation. As such, we calibrated our measurement so that it is close to standard tools such as AntConc (Anthony, 2005) and LancsBox (Brezina et al., 2018). We designed a set of test web pages with minimal noise, hosted them on a university website, and analysed them both offline with AntConc and LancsBox and online through PRIPA. In Table 2 we show a comparison of single-word frequencies when running a study on modal verbs on a pre-selected corpus. The counts mostly match; a visual inspection determined that the readings which did not match were due to tokenisation differences in the handling of punctuation and apostrophes. In Figure 4 we show a comparative histogram of the differences in collocation strength measurements between PRIPA, AntConc, and LancsBox on an experiment measuring collocation strength between modal verbs and pronouns. Out of our samples, most measurements fell within [0, 0.2) of LancsBox and [0, 0.3) of AntConc. A visual inspection showed that the readings that did not match were due to tokenisation differences, as with the standard term frequencies.

Figure 4: Histogram of differences between PRIPA, AntConc and LancsBox in calculating strength of association between collocates on a sample corpus. The difference between LancsBox and AntConc is also provided as a baseline.

Usability of the software. Since participants are rarely researchers themselves, it is important that the software produced is adapted for laypeople and general non-experts. To test this, we ran a usability questionnaire with a small participant involvement panel of 6 people. The quantitative results of the study are summarised in Table 3.

Table 3: Usability questionnaire given to 6 participants — median value of the Likert data (1 = strongly disagree, 5 = strongly agree).
Q4 I think that I would need the support of a technical person to be able to use this extension — 1.5
Q5 I found the analyses and results were clearly explained in the extension — 2.5
Q6 I felt very confident using this extension — 4
Q7 I would imagine that most people would learn to use this extension very quickly — 3.5
Q8 I am concerned about the privacy and security of my personal data (i.e., who may be able to access my personal information and how it is protected) when using the extension — 2.5

We can see from the data that most participants felt confident in using PRIPA, but had a difficult time understanding the goal of the application. This raises the importance of a clear user interface and shows that PRIPA can be improved with respect to its first key design principle: participants are aware of the purpose of the experiment. Additionally, we note from the quantitative data reported in Table 3, as well as from qualitative data collected during the same survey, that participants were concerned about the privacy of their data.
This concern is partly explained by the permission model of Chrome-based extensions, which requires asking the participants for access to their entire browsing experience and asking them to trust that we will filter only the websites and the data stated in the experiment details. Recent updates to the Chrome permission model allow for fine-grained website permissions at runtime, and this problem will therefore soon be patched out of PRIPA.

In-depth study of health communication. In order to evaluate our tool in the field, we ran a study of health communication from the British government during the COVID-19 pandemic. We defined a list of websites of interest, based on an empirical study of the most visited news websites in the UK, on which to carry out a pilot study examining modality markers surrounding key terms from health messages (e.g., "mask", "vaccine", "lockdown", and more). Results from our study show that PRIPA allows us to access language data from the perspective of the people consuming it. However, the study also highlighted a weakness of PRIPA: when dealing with communication-oriented web applications such as Twitter direct messages or Facebook Messenger, it cannot differentiate between language being produced by the participant and language being consumed. Such information would be useful from a linguistic perspective and will therefore be added in future versions of PRIPA.

Conclusion

In this paper we present PRIPA, an early prototype of a new family of corpus linguistics tools that allow for collecting personal data in a privacy-preserving way. PRIPA is an early prototype and therefore a work in progress, but its development raised a number of questions and helped us uncover a set of research directions and good practices for a more trustworthy, privacy-preserving type of linguistic analysis.
Setback Distances as a Conservation Tool in Wildlife-Human Interactions: Testing Their Efficacy for Birds Affected by Vehicles on Open-Coast Sandy Beaches In some wilderness areas, wildlife encounter vehicles disrupt their behaviour and habitat use. Changing driver behaviour has been proposed where bans on vehicle use are politically unpalatable, but the efficacy of vehicle setbacks and reduced speeds remains largely untested. We characterised bird-vehicle encounters in terms of driver behaviour and the disturbance caused to birds, and tested whether spatial buffers or lower speeds reduced bird escape responses on open beaches. Focal observations showed that: i) most drivers did not create sizeable buffers between their vehicles and birds; ii) bird disturbance was frequent; and iii) predictors of probability of flushing (escape) were setback distance and vehicle type (buses flushed birds at higher rates than cars). Experiments demonstrated that substantial reductions in bird escape responses required buffers to be wide (> 25 m) and vehicle speeds to be slow (< 30 km h-1). Setback distances can reduce impacts on wildlife, provided that they are carefully designed and derived from empirical evidence. No speed or distance combination we tested, however, eliminated bird responses. Thus, while buffers reduce response rates, they are likely to be much less effective than vehicle-free zones (i.e. beach closures), and rely on changes to current driver behaviour. Introduction Protected areas generally have a dual mandate of protecting important and irreplaceable features of the natural environment ('conservation'), whilst providing visitors with opportunities to experience nature or engage in recreational activities [1,2]. Meeting these dual objectives is often difficult, and the ratio between environmental costs and economic benefits is a dynamic one, often driven by political and cultural forces [3]. For example, although the values of conservation in parks can, arguably, outweigh those of tourism and recreation, visitors also create political capital that influences a government's budget allocations to conservation on public lands [1]. The interactions between wildlife and vehicles are one of the central issues and conservation challenges associated with these twin objectives. Vehicles are a well-documented threat to wildlife through mortality, disturbance (behavioural or physiological disruption), and other processes such as air pollution and habitat alterations [4][5][6]. The potential for conflict between conservation and human use is starkly evident on sandy beaches and coastal dunes, where motorised transport is a widely practiced recreational activity, especially in Australia and the US. The activity is, however, seldom compatible with the conservation function of beaches and dunes, including the protection of wildlife and habitats [7]. Beaches and dunes are particularly attractive as sites for recreation, but also form unique habitats that contain species and assemblages vulnerable to vehicles: this juxtaposition of spatially concentrated, and often intense, motorised recreation with bio-diverse and malleable habitats results in magnified vehicle effects along many sandy shorelines [8,9]. 
Impacts of vehicles driven on beaches and dunes are reported from all levels of ecological organisation, including substantial habitat modifications [10][11][12], significant reductions in biodiversity and lower abundance of plants and invertebrates [13][14][15], and mortalities or significant disturbance of vertebrates [16,17]. Other processes demonstrated in terrestrial settings, such as the transportation of weeds, also presumably apply to beaches and dunes [18]. In terrestrial environments, roads can act as barriers to wildlife and tend to fragment wildlife habitat. Some beaches are functionally -and sometimes legally -'roads', where the 'road' occurs within a large portion of, or encompasses, the habitat of beach fauna [19,20]. Additionally, unlike terrestrial roads, beaches as roads may not host traffic during high water periods (when sand is soft and beaches impassable), and so vehicles may be episodic; unpredictable stimuli that may be more disturbing than predicable benign stimuli [21]. Like other vertebrates, beach-dwelling birds are prone to disturbance when they encounter potentially threatening stimuli, such as humans engaged in recreational activities [21]. On beaches, vehicles also potentially crush eggs and young and cause fatal collisions with birds [22,23]. Lowered reproductive success and adult survival are key demographic parameters which potentially influence population viability, thus the impacts of vehicles potentially represent conservation threats to populations of beach-dwelling fauna. Bird escape responses are considered anti-predator in nature, regardless of the agent which evokes a response [24]. Escape behaviour balances the risk of staying (risk of collision and death) with the costs of leaving (increased energy expenditure and displacement from habitat) [25]. Although bird responses to human stimuli vary considerably between and within species, two general principles have emerged which can explain a sizeable part of this variation: 1) the probability of flight, or the intensity of the response, decreases with increasing distance between the birds and the stimulus, and 2) attributes of the stimulus (e.g. size and speed) can be important in mediating the responses [24]. In conservation practices, these principles translate into management interventions where establishing separation distances ('buffers' or 'setbacks') and lowering vehicle speed are thought to reduce the occurrence of collisions and costly escape responses [26,27]. This approach may be particularly important on open-coast beaches, where birds settle in generally unpredictable locations, making engineering solutions to reduce vehicle impacts (e.g. fences, localised signage, small-scale beach closures) impractical [28,29]. Birds get killed in collisions with vehicles when escape behaviours are late or inadequate. Because escape responses most probably evolved in relation to the speed of natural predators rather than modern motor cars, lowering vehicle speed limits may permit birds to better respond to vehicles. Because setback distances differ greatly between species and depend on the environmental and biological context [30], their applicability as a conservation tool to specific situations needs to be supported by empirical evidence. Because all setback distances require adoption and compliance by humans interacting with wildlife [27], evidence of their efficacy should improve uptake and, ultimately, conservation effectiveness. 
In this context, there were three key information gaps which we addressed in this study: 1) essential attributes defining vehiclewildlife interactions on beaches are ill-defined, 2) the metrics for setback distances during vehicle encounters have not been determined, and 3) the efficacy of setback distances is undocumented for open-coast beaches. Significant conservation concerns for coastal birds exist [31], some of which could, hypothetically, be ameliorated by the adoption of -largely untested -setback tools to reduce the impacts from vehicle encounters on open beaches [26]. We used both observational and experimental approaches to address four inter-related objectives: i) to describe the nature of encounters between motorised recreationists and birds on open-coast beaches ('driver behaviour'), ii) to quantify the frequency and intensity of disturbance responses by birds resulting from these encounters ('disturbance effects'), iii) to identify factors that influence the probability of birds escaping by flight ('determinants of probability of flushing'), and iv) to test the efficacy of alternative setback distances and encounter speeds to reduce the probability of flushing ('tool evaluation'). Our model system were the open-coast beaches of two barrier islands in Eastern Australia where interactions between birds and vehicles are frequent, leading to conservation concerns about the practice of beach driving and its management. Ethics statement All beach sites are open to members of the public and open to vehicles with a permit. Vehicle access permits were purchased through standard channels used by the general public / recreational users (e.g. http:// www.straddiecamping.com.au/4wds.php#4wd-permits). Observations on disturbance of birds by other beach users did not involve direct contact with vertebrates or the beach users nor did the experiments. The latter were no different than the thousands of encounters that birds experience from vehicles used by the general public on these beaches. Neither of the two bird species in the experiments is currently listed as endangered or protected http://www.ehp.qld.gov.au/wildlife/ threatened-species/endangered/endangered-animals/ index.html#birds_16_species). No vertebrates were collected, sampled, sacrificed, or physically harmed in any way. The work was carried out under animal ethics permit AN/A/10/56, issued by the Animal Ethics Committee of the University of the Sunshine Coast. Study area The study was conducted on the ocean-exposed beaches of two barrier islands, North Stradbroke Island and Fraser Island, located off southeast Queensland, Australia ( Figure 1). The sites were selected based on four attributes particularly relevant for the study of interactions between vehicles and birds: 1) the eastern shores of both islands feature long stretches of exposed sandy beaches [32], 2) dunes and beaches of both islands are important feeding, roosting, and breeding sites for coastal birds [17,33,34], 3) motorised traffic is very intense on the beaches [35], and 4) this heavy traffic, most of it recreational, causes substantial disturbance to birds [20], resulting in conservation concerns about the impacts of traffic on birds and other wildlife [36,37]. Vehicles on beaches in the region comprise mostly conventional off-road vehicles ('ORVs'). Other motorised traffic includes tour buses (operated by tourism companies), trucks (service and delivery vehicles), motor bikes and small aircraft that use beaches as a landing strip for scenic flights [20]. 
We use the term 'vehicle' to encompass all motorised traffic, 'car' for 4x4 capable passenger cars (ORVs), and 'bus' for off-road tourist buses. Species selection Crested terns, Thalasseus bergii (henceforth 'terns') and Australian pied oystercatchers, Haematopus longirostris (henceforth 'oystercatchers') were selected as the model species to examine disturbance to birds because they are abundant on North Stradbroke and Fraser Islands, they occur extensively around sandy shores of Australia, and they have ecological analogues on many sandy shores around the world [38,39] These species encounter vehicles frequently in the study area [20]. While neither species is nationally threatened, factors such as sea-level rise and disturbance are recognised as potential threats [38,39]. Terns are primarily a marine species which roost, and sometimes breed, on sandy shores, while oystercatchers are a sandy-shore obligate species. These species were chosen because they represent two common groups of beach-dwelling bird and, being non-threatened, meant that experiments did not compromise population viability. Field methods Our main objectives were to document how drivers react to the presence of birds on the beach, including driver behaviours that resulted in flushes of individual birds or flocks, and to test the efficacy of setback distances and reduced speeds in lowering disturbance to birds. Consequently, we used an observational phase to document human behaviour and responses of birds, followed by an experimental phase to test whether buffer distances and reduced speed mediate the intensity of disturbance caused by motorised traffic to birds. During the observational phase we estimated distances between vehicles and birds by eye, following an extensive series of 'observer calibration' trials: these are described in detail in section 2.7 and in the supporting figures and tables. Interactions between wildlife and humans can be conceptualised as a sequence of events centred around the birds. First, a 'stimulus' must be present which is capable of eliciting a response in the species. In the case of recreational traffic on beaches, the stimuli are vehicles of various types and we define the stimulus based on attributes of size and form (e.g. cars, buses, motor bikes) and their behaviour (e.g. speed, whether or not evasive action is evident when birds are present). Second, the stimulus comes into contact with, or proximity to birds, defined here as an 'encounter', where a vehicle moves along the beach past the birds. Encounters between the stimulus and the wildlife can evoke a 'response' on the part of the wildlife, defined here as a measurable change to their behaviour. 'Disturbance' is the behavioural or physiological change to normal behaviour which occurs as a result of any responses ). Observations of vehicle-bird encounters Two key metrics that define interactions between people and birds on sandy shores are the type of behaviour that humans display towards birds (e.g. the reaction, if any, of drivers when encountering birds whilst driving on the beach), and the type and intensity of the behavioural responses (e.g. the likelihood of birds being flushed or other significant changes to their behaviour). We recorded both these aspects during field observations without interfering with the normal wildlife-human encounters. Interactions between birds and vehicles were recorded by two observers using binoculars. Observations were made from a vehicle parked on the upper shore near the dunes. 
The vehicle was at least 100 m from focal birds to minimize observer effects on bird (or driver) behaviour; we checked this by noting whether birds displayed enhanced vigilance, and abandoned further recordings in a few cases where this occurred. Recordings included variables in four classes: i) attributes that define the stimulus (i.e. type of vehicle, speed), ii) the behaviour of drivers once they had encountered birds on the beach (distance between birds and vehicle (cf. Section 2.6), change of speed or direction (i.e. evasive action), iii) the time birds spent engaged in different behaviours in the absence of vehicles (i.e. behavioural time budgets between or before encounters with vehicles), and iv) the intensity of the response resulting from encounters. Behaviours in the absence of vehicles were categorised according to [20], and measured as the time spent in each category (using the application "IObserver" on an Apple TM Ipad). Changes to behaviour following encounters with vehicles were ranked on an ordinal scale of increasing disturbance intensity: 0 -no change in behaviour, 1 -vigilance: at least one individual in the flock alters its behaviour to become vigilant, but no bird flees, either by running or by flying away, 2 -shuffle: at least one individual in a flock shows a mild escape response by shifting position in the form of a short (< 1 m) run or swift walk; no bird takes flight, 3run: at least one individual in a flock shows a distinct escape response in the form of a run (> 1 m) away from the stimulus, but no bird takes flight, and 4 -flight: at least one individual in a flock takes flight. Our procedure of ranking disturbance response on an ordinal scale resembles methods used in similar bird-human interaction studies [40,41]. We also recorded the following contextual variables: number of individuals in the flock, beach width (m), wind speed (km h -1 ), temperature (°C), and state of the tide (hours since low water); these variables could potentially influence bird behaviour [17,20,32]. Focal observations lasted for 30 minutes or until an encounter with a vehicle, whichever came first. We chose this time cut-off to maximise the probability of obtaining replicate observations between vehicles and birds (the focus of the study) rather than locking up field resources at single locations that did not produce data within a reasonable time period. Birds were haphazardly 'sampled' by making focal observations on birds as we encountered them whilst driving along the beach. To increase the spatial dispersion of focal observations, the along-shore starting position of drives was randomised for each field day. On a given day, birds at a site were sampled only once and the minimum spatial separation to other observation sites was 200 m. For flocks of up to ten individuals we used scan sampling, where each individual in the flock was instantaneously sampled during regular scans [42]. For flocks of more than ten individuals, we haphazardly selected ten birds and applied scan sampling to this subset. A total of 144 observations of encounters between birds and vehicles were made. To achieve temporal dispersion, surveys were spread out between 01 Oct. 2012 and 18 Jan. 2013. We partitioned total sampling effort to include weekdays (n = 98 observations, 68%) and weekends (n = 46, 32%) at their respective calendar ratio (No. weekdays : No. weekenddays = 1 : 2.5). Experimental study Our second objective was to test the efficacy of setback distances (i.e. 
spatial buffers between vehicles and birds) and the effects of vehicle speeds on the probability of flushing; this required an experimental approach. Specifically, we wished to measure flushing because this response involves the greatest energy expenditure. Critical design aspects were the separation distances and the speeds to be tested in the experiments. Distances to be tested were derived from the relationships between separation distance and the intensity of disturbance obtained during the observational phase of the study, tempered by considerations of practicality of implementation of management practices. We used three criteria to identify the distances to be used during the experimental phase of the study: 1) Distances need to be biologically effective. Separation distances must significantly and substantially reduce flushing rates. In encounters with cars, the probability of taking flight declined with greater separation distances for both species. Terns were significantly less likely to be flushed by cars at a separation distance of 25 m or more, and oystercatchers generally did not take to the air if cars passed them at 25 m or wider. Thus, the maximum separation distance adopted in the experiments was set at 25 m. It is unrealistic to recommend (or legislate) separation distances in increments that are difficult to judge by drivers, or to recommend more than one separation distance for different species of birds; the most practicable approach was therefore to use a single 'minimum distance' that drivers must maintain between their vehicle and birds. 3) Distances must be socially acceptable. Setback distances must have reasonable public uptake and be realistic. Since 79% of vehicles drove closer than 25 m to birds during our focal observations, a separation distance of 25 m, notwithstanding its biological efficacy, may not be achievable, or would result in very low compliance. Our experiment therefore also included a much less stringent separation distance of 5 m. Minimum approach distances to birds of 5 and 25 m have also been suggested previously [43,44]. Vehicle speeds were selected based on the existing upper speed limit on the open beaches, 80 km h -1 , contrasted with 30 km h -1 which is the limit in pedestrian access zones on the beach (http://www.nprsr.qld.gov.au/parks/fraser/ about.html#driving_safely). The procedure for the field experiment consisted of a fixed sequence of events: i) two observers drove a car (white Land Rover Defender) along the beach until they spotted a flock in the distance; ii) counts of individuals were made with binoculars and the safety of the site assessed (e.g. no approaching vehicles, suitable driving conditions near birds), iii) the vehicle passed the flock at a set distance and speed (i.e. permutations of 25 and 5 m, 80 and 30 km h -1 allocated randomly), and iv) the behaviour of the birds was recorded using the intensity of response scale (see above). Sixty experiments were conducted over two days in Feb. 2013 on Fraser Island. Distance calibration and diagnostics Because distance between a stimulus and an organism is often the most critical factor determining the likelihood and intensity of a response by the organism [24], accurate estimation of distances between birds and vehicles is important. 
Since we estimated this distance by eye in the field (as would be the case if management prescriptions were made), we incorporated extensive protocols to test the accuracy of our estimates and to improve their reliability during both the observational and experimental phase of the study. During the observational phase of the study, each observer undertook a series of 'eye calibration' trials at the start of each day in the field when encounters between birds and vehicles were recorded. Each of these trials consisted of five steps: 1. a model of a bird was placed on the lower beach near the swash (the place where most birds roost and feed); 2. small markers were placed up-shore from this mock bird in the sand at series of distances towards the dunes (1, 2, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50 m); 3. a driver made at least three replicate passes with a vehicle at each of these marked-out distances, 4. observers (independent of drivers) estimated the distance that the vehicle was separated from the mock bird; and 5. the actual distance at which the vehicle passed the mock bird (known to the driver based on the small markers placed on the sand) was compared with the distances estimated by the observer. The observer was placed on the upper shore near the dunes in the same position from which the actual recordings of birdvehicles encounters were made. After a number of vehicle passes the observer received feedback on his/her 'estimates' to improve the accuracy and repeatability of the distances judged by eye. The mean distances estimated by the observer did not differ by more than 7% from the actual value of the distance between the mock bird and the vehicle. The maximum residual (i.e. estimated -observed) was 8 m at a test distance of 50 m, representing a deviation of 16% from the actual value. Observers also judged distances with good precision: the standard error of mean estimates made by observers was < 5% of the mean across the full range of distances tested. There was no drift (i.e. directional bias) of distance estimates at increasing values of test distances; the slope of the regression line of estimated vs. actual distances did not differ significantly from a 1:1 line (F (1,273) = 0.63, p = 0.43; Table S1, Figure S1). For the experimental phase, we trained drivers to maintain fixed distances of 5 and 25 m between their vehicle and a model bird. Drivers drove past a model bird and after each pass received feedback via radio on the actual distance they had achieved. This distance was measured by two investigators after each vehicles pass with a tape measure using the mock bird and tyre imprints in the sand as markers. We repeated this process until all drivers could competently implement a set separation distance between their vehicle and a bird. For both distances tested (i.e. 5 and 25 m), encompassing four drivers in 70 replicate trials, the mean distances that drivers maintained were within 10% of the actual test distance required for all but one driver and distance (max. residual of 18% at 5 m). Half of the mean distances achieved by drivers had a residual better than 5% of the true value ( Figure S2). Drivers also achieved the required distances with good precision: standard errors of repeated approaches ranged between 2 and 4% of the mean. Thus, experimental distances were deemed both accurate and reproducible by us. A central goal of our focal observations was to identify factors (e.g. type of vehicle, speed) that can predict the probability of birds flushing (i.e. 
encounters that culminate in birds escaping by taking flight). Because variations in the probability of flushing of birds caused by different vehicle types may, theoretically, be influenced by beach width and the state of the tide during which focal observations were conducted (e.g. vehicles approach birds closer when the beach is narrower), we partitioned total variance in beach width and tide state with a Generalized Linear Model (GLM) containing the fixed terms 'Species' (terns vs. oystercatchers) and 'Vehicle Type' (cars vs. buses). There were no significant differences in beach width for focal observations on different species (p = 0.48), for different vehicle types (p = 0.76), or for combinations of vehicle type and species (p = 0.48). There were also no systematic differences in the state of the tide during which focal observations were conducted between species, vehicles, or combinations thereof (min. p = 0.43). Thus, systematic bias due to beach driving conditions between treatments is unlikely to have confounded the results. Data analyses The probability of flushing during encounters was the response variable of primary interest, chiefly because of the high energetic costs involved in taking flight as an escape strategy [45]. We modelled the probability of flight (a binary outcome of flying vs. remaining on the ground) using Generalized Linear Models (GLZ) with logit link functions [46]. Saturated GLZ models contained eight predictor variables, including 'Species', 'Separation Distance', 'Vehicle Type', 'Beach Width', 'Tide', 'Flock Size', and 'Air Temperature'. Model performance was evaluated using the corrected Akaike Information Criterion (AICc) based on all possible combinations of environmental variables used in model building [47,48]. A multi-model inference approach was used to assess the contributions of individual variables, based primarily on their summed Akaike weights [49]; summed AICc weights (w+) provide relative probabilities of variable importance, and variables with w+ < 0.3 are likely to be of little or no importance [50]. Variation in response intensity scores was partitioned with Permutational Analysis of Variance, PERMANOVA [51], to account for possible non-normal error structure in the response variable; the crossed design encompassed 'Vehicle Type' and 'Species' as fixed factors. Spatial separation between vehicles and birds Distances that separated birds from vehicles during encounters on beaches ranged from 0 m, where vehicles drove directly at birds, to 55 m. More than one third (39%) of all vehicles passed birds closer than 10 m (n = 56), three-quarters kept a distance of 20 m or less (n = 107), and 90% of vehicles were closer than 30 m to birds (n = 140). Conversely, only 26% of drivers kept a separation distance of at least 25 m (n = 23). A quarter of drivers used less than 40% of the available beach to create a spatial buffer between their vehicles and birds. This indicates that close encounters with birds were not unavoidable due to narrow beaches, but that despite the available space, few drivers utilised a sizeable fraction of this space to lower their effects on birds. Mean separation distances did not differ significantly between species (p = 0.22). Responses to encounters Behavioural responses of birds to vehicles were common in both species, despite different habitat use patterns of terns and oystercatchers (Table 1).
Terns use sandy beaches exclusively as roosting sites and for social interactions between foraging flights over the surf-zone and marine waters beyond. Oystercatchers forage on beaches, allocating half of their time to foraging (Table 1). Birds encountered vehicles, on average, within 3 min after focal observations were started (mean time to first encounter; terns: 139.5 ± 16.02 s; oystercatchers: 147.06 ± 21.08). Motorised traffic strongly altered the behaviour and habitat use of birds on the beaches. Of 144 focal observations on birdvehicle encounters, birds responded in 130 (Table 2). Birds displayed heightened vigilance in 21% of encounters, and escaped from vehicles by walking (14%) and running (17%). Fifty five encounters (38%) ended with flying. Predicting the probability of flushing Fleeing on the wing ('flushing') is energetically the most costly type of escape response, and preventing, or reducing, the incidence of flushing is therefore a conservation priority. To this end, models that can identify factors important in predicting the probability of flushing are of conservation value. The best overall model predicting the flush rate of birds contained four predictors: species, vehicle type, distance, and beach width (Table 4). Larger separation distances significantly decreased the probability of birds flushing (odds ratio = 0.91; Figure 3, Table S2), whereas birds were very marginally more likely to take flight when the beach was wider (OR = 1.01). Response patterns differed strongly between species, with terns much more likely to take flight (OR = 5.76). Buses were twice as likely to flush birds (OR = 2.00) than were cars (Table S2). Summed variable weights provide relative probabilities of variable importance, and this analysis shows that three variables were clearly most influential in our models predicting the probability of flushing of birds: species, the type of vehicle (bus or car), and the separation distance between vehicles and birds ( Table 5). Beach width had a moderate, but nonsignificant (p= 0.09), influence on flushing. All other variables included in our models, were inconsequential in explaining the probability of flushing; unimportant variables included attributes of the habitat (state of the tide), weather (temperature, wind speed), a biological trait (flock size), and a property of the stimulus itself (vehicle speed). Experimental tests on the efficacy of setback distances and speeds The purpose of the experiments was to test combinations of speed and setback distances which may underpin management recommendations. We recorded sizeable (-44%) and significant (p = 0.004) reductions in mean response intensity scores when drivers slowed to 30 km h -1 and when they created a buffer between their car and birds of 25 m (Table 6). Increasing the separation distance from 5 to 25 m halved the response intensity score from 4 (i.e. flush) to 2 (i.e. shuffle; Table 6). Increasing the separation distance was generally more important in reducing responses than were changes to vehicle speed. Substantial declines in response intensity scores at 25 m were recorded at both speeds, whereas lowering the speed only produced a sizeable effect when cars were more distant from the birds ( Table 6). Terns flushed at rates which permitted further analysis of the probability of flushing. Separation distance had a large and significant effect on probability of flushing terns in the experiments (GLZ distance term; G 2 = 16.90, P < 0.001). 
At a separation distance of 5 m, four out of five passes resulted in terns being flushed (Figure 4). When distances were increased to 25 m, the probability of terns flushing declined significantly to half or less (Figure 4). Overall, terns were 72% more likely to get flushed when cars passed them at 5 m (Figure 4). Vehicle speed influenced flushing rates less than did distance (GLZ speed term; G 2 = 2.98, P = 0.092) in the experiments, being to some degree also contingent upon separation distance (GLZ dist. * speed term; G 2 = 2.46, P = 0.116). When vehicles passed birds at 30 km h -1 and at a separation distance of 25 m, only a single experimental encounter culminated in a flush (Figure 4). Conversely, vehicles travelling at the legal speed limit of 80 km h -1 flushed 54% of birds even when a separation distance of 25 m was maintained. Speed had no significant effect (P = 0.945) on the proportion of encounters evoking flushes when vehicles passed birds closely at 5 m ( Figure 4). Overall, our model predicting probabilities of flushing using vehicle speed and distance as predictors correctly classified 78% of responses. Discussion Vehicles using the beaches caused frequent and substantial disturbance effects to wildlife, with many drivers approaching birds close and at high speeds to force escapes on the wing. Classic models of human disturbance to birds posit that vehicles evoke responses from wildlife that are less frequent or intense than those evoked by pedestrians [24,52]. These models do, however, rarely consider the speed or distance covered by vehicles, the large number of vehicles in some parks, and the rate at which they encounter wildlife, but see 20,23 -all factors that contribute to the considerable disturbance effects recorded here. Effective setback distances are considered species-and situation specific [27]. The overall response distances we report here are within the broader range, 14-126 m, of flight initiation distances (FIDs) reported for shorebirds [24]. Very few studies on effective buffers for birds against anthropogenic stimuli test vehicles or are conducted on beaches [24]. Of the studies that measured the response of birds to vehicles [53,54], none report responses of terns or oystercatchers. In our study, about half of all encounters between terns and vehicles resulted in flights at separation distances of ca. 20 m ( Figure 3). This is very broadly comparable to distances reported for terns when approached by walkers, i.e. 17 m [24]. This Figure 2. Comparison between buses and cars in terms of the intensity of disturbance-related behaviours shown by birds and the distances separating vehicles from birds for terns and oystercatchers. doi: 10.1371/journal.pone.0071200.g002 similarity suggests that terns may not always discriminate between vehicles and walkers in terms of escape responses. Alternatively, terns could have altered their response distances via learning or perhaps local selection -controlled studies are required to test this. On the other hand, the response distances of pied oystercatchers recorded by us (i.e. < 10% of encounters resulting in flight at 20 m; Figure 3) appear shorter than those reported elsewhere, being 39-83 m when approached by walkers [24]. It is commonly suggested that vehicles evoke bird responses at shorter distances than walkers although this has been virtually unstudied, but see 55. 
This possible difference warrants further investigation using properly controlled comparisons, especially because people will often walk from vehicles and have the potential to disturb birds further when doing so. Attributes of disturbance responses and separation distances One of the key findings of our study is that terns are more sensitive to vehicles than oystercatchers. Oystercatchers are waders and can move rapidly on foot. By contrast, terns have webbed feet and cannot move rapidly on the ground, being forced to take flight for rapid escapes. Terns use the lower beach for roosting and social interactions only, feeding in the surf zone and beyond. By contrast, habitat use in oystercatchers is more diverse, the species using beaches as feeding, roosting and breeding sites [32]. Oystercatchers use parts of the beach which is not traversed by the same number of vehicles (i.e. the wet swash zone near surf) and feeding patches in this zone confer considerable value to birds, making oystercatchers less likely to leave. Regardless of the cause of species differences, the fact that they exist emphasises the need tailor management solutions to accommodate species and situations where most bird responses occur. One of the salient findings of our study, and one which has management and conservation implications, is that buses are significantly more disruptive to birds than passenger cars. The substantially larger effect of buses results most likely from a combination of driver behaviour and the characteristics of the stimulus itself. Bus drivers tended to approach birds closer than drivers of passenger cars, and birds reacted more strongly to buses, escaping on the wing in the majority of encounters. Birds perceive humans, and, presumably, vehicles, as potential predators, eliciting behavioural responses designed to lower the risk of predation [56]. It is therefore possible that because of their larger size, often greater noise, and consistent nonevasive behaviour (i.e. bus drivers never altered course or reduced speed to avoid birds), buses represent a more threatening stimulus and hence elicit stronger responses. Traffic noise negatively affects wildlife [57], and other anthropogenic noise stimuli are known to elicit behavioural disturbance in birds, such as barking dogs [58]. Although we did not measure noise directly, buses are generally noisier than most cars, and so may provide more threatening auditory cues to birds. Few studies have disentangled the effect of stimulus speed from stimulus type [27]. We did not find a significant effect of speed on the probability of flushing, and our experiments suggest that speed is less important than distance in eliciting escape on the wing. Speed is, however, a significant predictor of roadkills in vertebrates [5], and fatal collisions occur between birds and vehicles on beaches [20]. Thus, it would be expected that speed would mediate risk to birds, and so would be used by birds to mediate their responses to vehicles [56]. Birds may be unable to judge speeds, especially for extremely rapid approaches (e.g., 80 kmh -1 ), when the time lag between detection of the stimulus and decision to fly may be short. Thus, vehicles may be so dangerous, vehicle speed so variable (i.e. unpredictable), and the risk associated with delayed response so high, that birds have generalised their response to vehicles regardless of the speed at which they occur. 
Thus, while reductions in speed per se will, based on our experimental data, make only a small contribution to reduce disturbance levels to birds, speed limits are presumably very important in lowering avian fatalities on beaches and hence should form part of a broader conservation toolbox. Birds experiencing repeated encounters with predictable, benign human stimuli may respond less over time, suggesting habituation [59]. Conversely, animals that are frequently The best model (based on AICc) for each number of predictor variables is shown, with * denoting the best overall model, and 'x' denoting inclusion of a variable in a model for a given number of predictors. disturbed by dangerous stimuli, especially when they are unpredictable, can become sensitized to react more strongly [40]. Vehicles are a frequent stimulus at the study site (i.e. > 250 000 vehicles per year [15], but are not a benign stimulus, because a lack of response or a response not made adequately or quickly enough, causes mortality [20]. Vehicles are also unpredictable, in terms of speed and whether drivers adopt evasive behaviour or not, and this unpredictability may explain the intense responses of birds. From a management perspective, plasticity on the part of the birds is not evident, so encounters between vehicles and birds require management interventions. Management and Conservation Implications A large and growing body of research demonstrates that motorised recreation has numerous and widespread detrimental effects at multiple levels of ecological organisation [10,13,60,61]. The intensity of environmental concerns arising from beach traffic is such that it calls for the design and implementation of management interventions that reduce environmental harm and minimize conflict between motorised and non-motorised recreationists [62]. At the core of interventions that best achieve these goals are measures that separate the threat (vehicles) from sensitive ecological or human targets (e.g. habitats, wildlife), by creating spatial exclusion zones in the form of permanent or temporary beach closures [63,64]. Despite evidence for rapid and significant conservation returns for coastal birds generated from removing vehicles [31,[64][65][66][67], almost all of the beaches at our study sites remain open to vehicles, both inside and outside of public lands designated explicitly for conservation. Because the core management tool (i.e. spatial refuges) is not available to reduce human-wildlife conflicts in these conservation areas, alternatives need to be scoped. One alternative suggested by the management authorities is that drivers adopt behaviours less harmful to wildlife when encountering birds on beaches (i.e. 'codes of conduct'). Such codes often feature low compliance and their efficacy in reliably changing behaviour of tourists or recreationists is questionable [68]. Vehicle owners are not discouraged from driving on beaches. Current behaviours underlying recreational activities based on motor vehicles require fundamental changes to reduce their currently large ecological impacts, on birds and other features of the beach-dune environment [10,35,69]. While the most, and arguably only, effective solution is to drastically reduce traffic volumes or establish traffic-free areas on beaches, changing driver behaviour during encounters with birds can, hypothetically, be an interim measure to lower impacts on wildlife [70]. 
We showed that separation distances of at least 25 m between birds and vehicles, combined with cars travelling 30 km h -1 or slower, measurably reduce disturbance to one common species of coastal bird, crested terns. By contrast, the great majority of drivers currently approach birds much closer and at much greater speeds. This juxtaposition of a desired target behaviour (i.e. 25 m buffer and slow speed) with the prevailing behaviour of beach users (i.e. close encounters at higher speeds) poses a formidable management challenge. Changing behaviours towards better environmental behaviour is complex. Monroe [71] suggest that, in the context of the theory of planned behaviour, interventions should provide information about the consequences of the behaviour, the ease at which change can be adopted, its effectiveness, and its social acceptability. To encourage appropriate behaviour during encounters with birds, three aspects of the target behaviour can be communicated readily to beach users: 1) the consequences of the current behaviour (i.e. significant disruption of wildlife), 2) the efficacy of the action (i.e. lower probability of flushing), and 3) the ease of implementation (i.e. beaches are generally wide enough to create 25 m buffers between birds and vehicles). Education is frequently suggested as a tool to promote better environmental stewardship and ecologically more benign behaviour, but it is often unclear to which extent this translates into tangible conservation benefits [72][73][74]. The link between awareness and positive outcomes for wildlife is tenuous in the case of motorised traffic and birds. Beach users in the region consider driving of vehicles on the beach to cause 'extreme' levels of disturbance to shorebirds [75], yet the practice continues unbridled [20]. Arguably, insufficient education about the numerous negative ecological effects caused by vehicles may, paradoxically, lead to negative conservation outcomes. This situation may arise for motorised beach recreationists who undertake commercial off-road training courses that may engender some sense of environmental literacy, but do not strongly discourage vehicle owners from driving on the beach. Despite the popularity of beach driving in some sectors of society, the nature of the activity itself, and its numerous and widespread environmental consequences, can engender conflicts between motorised and non-motorised users [76]. These conflicts can be particularly insidious when nonmotorised users are subjected to disproportionally greater impacts caused by motorised recreationists, including direct disturbance of non-motorised beach users, safety hazards, diminished opportunities for wildlife viewing, and loss of the wilderness character and aesthetic values of the area [62]. Only negligible opportunities currently exist for non-motorised recreational users on the beaches studied by us, particularly on Fraser Island; this lack of social equity runs counter to the generally accepted mandate of public lands to adequately provide for multiple uses [3]. Accreditation schemes regarding ecologically sensitive tourism were entirely ineffective in this study: drivers of buses operating as accredited 'eco tours' disturbed birds more than did the 'unaccredited' drivers of non-commercial cars which behaved more eco-friendly. All bus traffic impacting shorebirds is generated by commercial tour operators transporting visitors around the island -a tourism activity that frequently employs 'green' marketing credentials (e.g. 
'eco', 'nature-based', etc.) or uses accreditation which purports to safeguard natural assets including wildlife. The broader question of whether tourism can be a sustainable activity in national parks is a multi-faceted and complex one, including environmental impacts caused by 'ecotourism' [1,77,78]. In the current situation there is a clear and urgent need to correct environmentally harmful practices within the industry (e.g., lack of evasive actions by bus drivers, frequent impacts on birds) to decrease tourism's ecological impacts on beaches and dunes. Conclusions 1. Vehicles driven on sandy shores frequently and intensely disturb birds on open-coast beaches. 2. The intensity and probability of a disturbance response is a function of the distance that separates birds from vehicles. Thus, sufficiently wide setback distances are expected to reduce impacts by vehicles. 3. Our experimental test of practical setbacks (and speeds) showed that they will reduce, but not eliminate, disturbance for birds. Setback distances can therefore make a contribution to reducing impacts on wildlife and can be an effective conservation tool, provided that they are carefully planned and tested. 4. In the case of beach driving, setback distances will, however, never be as effective as the creation of spatial refuges from motorised recreationists (i.e. permanent or temporary beach closures), and are thus only complementary and temporary measures that cannot replace beach closures. 5. The question as to how much disturbance is tolerable, or how much vehicle-based mortality is required to compromise population persistence, remains unknown. 6. Paradoxically, the 'eco-friendly' tourist industry operating buses caused greater disturbance to birds, suggesting that organised tourism, notwithstanding its 'eco' accreditation, does not necessarily engender better outcomes for wildlife. This unexpected result calls for improved industry commitment to conservation. 7. Any solution that has tangible conservation outcomes with regard to the issue of bird-vehicle interactions on beaches will require changes to the status quo and hence may engender controversy. Supporting Information Table S1. Summary statistics of estimates made by observers who judged the distance separating birds from vehicles. Each trial involved a model bird and a number of set distances (1 to 50 metres) repeatedly tested at the start of each field observation day. Observers were near the dunes, ca. 100 m distant from a model bird that other members of the study team (not the observers) passed in a vehicle at distances marked out in the sand; these markers were small enough not to be seen by the observer. (DOCX) Table S2. Summary statistics of logistic regressions, modelling the probability of birds being flushed (i.e. escaping vehicles by taking flight) in relation to separation distance between birds and vehicles. (DOCX) Figure S1. Relationship between estimated (by observers in the field) and actual (measured with markers from a model bird) distances separating birds from vehicles on beaches (n = 270 trials). (TIF) Figure S2. Evaluation of driver performance in experiments requiring the maintenance of fixed separation distances between birds and a vehicle. Distances tested were 5 m and 25 m. Shaded bands encompass two standard deviations around the mean. Deviance is expressed as the percentage difference between the mean distance maintained by a driver and the required test distance. (TIF)
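To make the modelling approach described in the Data analyses section more concrete, the sketch below fits a logistic model of flushing probability against separation distance and vehicle type and reports odds ratios; the data are simulated and the effect sizes are assumptions chosen only to mirror the direction of the reported results, so this is an illustration rather than the study's actual analysis.

```python
# Minimal sketch (not the authors' code) of a logistic model of flushing
# probability, with odds ratios obtained by exponentiating the coefficients.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 144                                  # number of observed encounters
distance = rng.uniform(0, 55, n)         # separation distance (m)
is_bus = rng.integers(0, 2, n)           # 1 = bus, 0 = car

# Assumed "true" effects for the simulation only: flushing less likely at
# larger distances, more likely for buses.
logit_p = 0.8 - 0.09 * distance + 0.7 * is_bus
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([distance, is_bus]))
fit = sm.Logit(y, X).fit(disp=False)

names = ["const", "distance", "bus"]
print(fit.summary(xname=names))
print("odds ratios:", dict(zip(names, np.exp(fit.params))))
```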
Accurate measurement of the electron beam polarization in JLab Hall A using Compton polarimetry A major advance in accurate electron beam polarization measurement has been achieved at JLab Hall A with a Compton polarimeter based on a Fabry-Perot cavity photon beam amplifier. At an electron energy of 4.6 GeV and a beam current of 40 µA, a total relative uncertainty of 1.5% is typically achieved within 40 min of data taking. Under the same conditions, monitoring of the polarization is accurate at a level of 1%. These unprecedented results make Compton polarimetry an essential tool for modern parity-violation experiments, which require very accurate electron beam polarization measurements. Introduction The Continuous Electron Beam Accelerator Facility (CEBAF) at the Jefferson Laboratory (JLab) is a new particle accelerator which makes extensive use of challenging technologies [1,2]. The photon density is amplified with a Fabry-Perot cavity of very high finesse, which provides a power of 1700 W of IR light at the Compton interaction point. This performance, unequalled in a particle accelerator environment, results in a statistical accuracy for a polarization measurement below 1% within an hour at 4.6 GeV [3]. This number scales with the inverse of the beam energy. In section 2 of this paper, we briefly summarize the experimental set-up of the Compton polarimeter. Section 3 describes its operational properties achieved during two polarized experiments, N−∆ [4,5] and GEp [6,7]. Next, we describe a new analysis method developed to constrain systematic uncertainties in the polarization measurement with a high confidence level. We explain in detail the sources of these systematic errors and present longitudinal electron polarization measurement results. Finally, we give for the first time at JLab a measurement of the polarization difference between the two helicity states of the electron beam. Compton polarimeter at JLab Compton scattering of polarized electrons off a circularly polarized photon beam shows an asymmetry of the counting rates n^± for the different orientations of the electron polarization [8], A_exp = (n^+ − n^−)/(n^+ + n^−) = P_e P_γ A_c, where the asymmetry A_c is calculated from QED. Measurements of the experimental asymmetry A_exp and of the circular photon polarization P_γ give access to the mean longitudinal electron polarization P_e. The electron beam polarization is flipped at a 30 Hz rate to minimize systematic effects. Data Taking We describe here how the Compton polarimeter data-acquisition system works, and the strategy used to minimize false asymmetries. Acquisition The data acquisition is driven by the 30 Hz electron beam polarization flip. Two milliseconds after each reversal, the trigger system is activated and events are taken from the photon and/or electron detectors, according to the trigger configuration determined by the user. The trigger system is inhibited a few ms before the next reversal. Each detector has its own trigger logic. The photon calorimeter trigger system generates an event when the signal of one of the photo-multiplier tubes exceeds a given threshold. This signal is then integrated over a period of 150 ns. The electron detector triggers when signals are detected in coincidence on a given number of the silicon strip planes, at the same dispersive position. A specific logic is used to take care of cases where both detectors fire in coincidence.
The data-acquisition system can read out photon and electron events at a rate greater than 100 kHz with a dead time of only a few percent. These data are read by either a custom-built buffer card for the electron detector signals, or 10-bit buffered ADCs for the photon calorimeter. Calibration signals from a LED can be used to monitor the gain variation of the photon detector. All these raw data are read through VME block transfer by two Power PC processors. Photon polarization reversal Helicity-correlated differences in the electron beam parameters (charge, position and angle) lead to false asymmetries b_i which add to the experimental asymmetry, where i runs over the different sources of false asymmetries. The charge asymmetry is corrected to first order by normalizing the counting rates to the beam current. The remaining systematic effects from position and angle are independent of the photon beam polarization state. Hence, in changing the sign of the photon polarization, the major part of this type of false asymmetry is canceled. This defines the procedure for data taking as a sequence of alternating right and left laser circular polarization, as illustrated in figure 1. Moreover, between two photon polarization states, the cavity is unlocked in order to measure the background. Thanks to a high quality vacuum in the beam pipe and the control of the beam envelope using quadrupoles upstream of the magnetic chicane, a signal-over-background ratio of 20 is routinely achieved. Experimental asymmetry For a given circular photon polarization, right (R) or left (L), we can calculate the asymmetry of integrated event numbers for two consecutive windows of opposite electron helicity states as A_p^{R/L} = (n^+ − n^−)/(n^+ + n^−), where n^± refers to the normalized numbers of photons with a deposited energy greater than a given threshold. These are defined as n^± = (1/(I^± Γ^±)) Σ_{i≥i_s} N_i^±, where I^± is the electron beam intensity, Γ^± is the acquisition live time, N_i^± is the number of detected events in the i-th ADC bin and i_s is the threshold bin corresponding to the lower edge of the sum. The normalized counting rates N^±/I^±Γ^± are shown in figure 2 versus the energy in ADC bin units. The threshold i_s is a software threshold applied to the total charge deposited, and not to the maximum amplitude reached by the signal. It can be varied offline in order to obtain the optimal value that maximizes the statistical accuracy and minimizes the effect of false asymmetries. This operating point is found to be between the 6th and the 9th bin (see section 6). For a typical 40 minute run, a raw asymmetry A_raw^{R/L} is defined as the average of all pulse-to-pulse asymmetries A_p^{R/L}. The distribution of these asymmetries is shown in figure 3, for both right and left photon polarizations. We can see that the pulse-to-pulse asymmetry distributions follow a Gaussian law. The raw asymmetry has to be corrected for background, using the background-to-signal ratio (B/S)^{R/L} measured for each photon polarization and the background asymmetry A_B. B/S is of the order of 0.06 with a threshold set to the 8th energy bin (≈ 230 MeV), and A_B is found to be compatible with zero at the 10^−4 level. Finally, the mean experimental asymmetry is computed as the weighted average of the right and left asymmetries, where ω^{R/L} corresponds to the statistical weight of each experimental asymmetry. The mean experimental asymmetries measured above the software threshold for E = 4.6 GeV are around 6% and can be measured with a relative statistical accuracy of 0.65% in one hour at I = 40 µA.
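To make the chain of definitions in this section concrete, the following sketch combines hypothetical numbers in the way described above; the rates, weights, analyzing power, and the sign convention used when averaging the two photon polarization states are illustrative assumptions, not values from the measurement.

```python
# Minimal numerical sketch: from normalized rates to an electron polarization
# estimate via A_exp = P_e * P_gamma * <A_c>. All numbers are illustrative.

def asymmetry(n_plus, n_minus):
    """Pulse-pair asymmetry of normalized rates for opposite electron helicities."""
    return (n_plus - n_minus) / (n_plus + n_minus)

# Hypothetical normalized rates (counts per unit charge and live time) above
# the software threshold, for right- and left-handed photon polarization.
A_R = asymmetry(n_plus=1060.0, n_minus=940.0)   # about +6%
A_L = asymmetry(n_plus=941.0, n_minus=1059.0)   # about -6%: sign flips with photon helicity

# Weighted average over the photon polarization states; the left-handed
# asymmetry is sign-flipped before averaging (assumed convention).
w_R, w_L = 0.52, 0.48
A_exp = (w_R * A_R - w_L * A_L) / (w_R + w_L)

P_gamma = 0.99      # illustrative circular photon polarization
A_c = 0.085         # illustrative analyzing power above threshold
P_e = A_exp / (P_gamma * A_c)
print(f"A_exp = {A_exp:.4f}, P_e = {P_e:.3f}")
```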
Analysing power The second part of this analysis concerns the determination of the analyzing power. In order to account for detection effects, we define the response function of the calorimeter R(ADC, k) as the ADC spectrum for a set of photons with a given energy k. From this response function, the probability to detect photons of energy k above a given ADC threshold ADC_s can be computed. Using this probability, one can then calculate the analyzing power of the polarimeter, defined as the average of the Compton asymmetry weighted by the Compton cross section. Calibration and analyzing power The response function measured during a specific reference run has to be corrected for mean gain variations when used to analyze a later run. To this end a calibration coefficient λ is introduced which accounts for gain corrections. λ is fitted to the experimental spectrum of each run (Fig. 5) using the convolution of the unpolarized Compton cross section dσ_0(k)/dk with the response function. The probability of photon detection is deduced from Eq. (7), where the lower integration boundary ADC_s is replaced by ADC_s/λ. The analyzing power is then calculated from Eq. (8) for each data run (with i_s = 8). An overview is given in figure 6. 6 Systematic uncertainties Experimental asymmetry The largest source of systematic error in the experimental asymmetry is the false asymmetry related to the electron beam position, since the Compton luminosity is determined by the overlap of the electron and laser beams. If one assumes a Gaussian intensity profile for these two beams, the luminosity is also a Gaussian function of the distance between the two beam centroids. Since the optical axis of the cavity is fixed by the monolithic mechanics of the mirror holder, the position variation of the electron beam directly affects the Compton luminosity with a sensitivity equal to the derivative of this Gaussian function. In order to minimize this effect, two position-feedback systems were used, one at high frequency to reduce the jitter (down to 20 µm) and one at low frequency to lock the mean position at the point corresponding to the maximum of the Gaussian overlap curve, where the sensitivity to beam position goes to zero. Finally, averaging over several photon polarization reversals cancels out most of these false asymmetries, provided that the statistical weights of the right and left circular photon polarization states are similar. In practice, these statistical weights ω^{R/L} are not exactly equal, and some residual effects res(b_i) must be taken into account, in agreement with equations (2) and (6). Studies of the four beam parameters (x, y, θ_x, θ_y) show that their correlations tend to reduce the total false asymmetry. As a safe and simpler estimate of the error we assume them to be uncorrelated. The final error quoted in Table 1 should be read as a typical run-to-run error. It corresponds to the width of the distribution of all res(b_i), which turns out to be centered at zero. For each individual run one can also choose to correct for res(b_i) and its error. When averaging the polarization over a sufficient number of runs, this error is further reduced (Table 2). Photon polarization The circular photon polarization is measured at the exit of the Fabry-Perot cavity (Table 3). The mean value of the DOCP was determined for both laser polarization states. The photon polarization used for the electron polarization measurement is the average value of the two polarization states, where we took to first order ω_L = ω_R.
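Referring back to the analyzing-power definition above (Eqs. (7) and (8)), the sketch below shows the numerical weighting involved; the cross-section, asymmetry, and detection-probability functions are placeholders standing in for the QED expressions and the measured response function, and the numbers are illustrative only.

```python
import numpy as np

# Sketch of the analyzing-power average: the Compton asymmetry A_c(k),
# weighted by the differential cross section and by the probability of
# detecting a photon of energy k above the ADC threshold.

k_max = 350.0                    # MeV, illustrative Compton edge

def dsigma_dk(k):                # placeholder unpolarized cross-section shape
    return 1.0 + 0.5 * (k / k_max) ** 2

def A_c(k):                      # placeholder Compton asymmetry, growing with k
    return 0.20 * (k / k_max)

def detect_prob(k):              # placeholder detection probability above threshold
    return 1.0 / (1.0 + np.exp(-(k - 230.0) / 15.0))   # threshold near 230 MeV

k = np.linspace(0.0, k_max, 2001)
weight = dsigma_dk(k) * detect_prob(k)
analyzing_power = np.trapz(weight * A_c(k), k) / np.trapz(weight, k)
print(f"<A_c> above threshold = {analyzing_power:.4f}")
```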
7 Results and discussion

General results

A review of the uncertainties is given in Table 4. The last column shows the accuracy of the monitoring of the electron beam polarization, for which all normalization errors cancel. A summary graph of all polarization measurements performed during the N-∆ experiment is shown in figure 9 (300 measurements in 60 days). The jumps in the beam polarization are directly correlated with operations at the polarized electron source, when the laser spot is displaced to illuminate a different spot on the photocathode in order to increase the beam current. These significant variations in the beam polarization demonstrate that the Compton polarimeter is an ideal and mandatory tool to provide a meaningful polarization measurement over long data-taking periods.

Determination of ∆P_e

Most of the polarized-physics experiments in Hall A are only sensitive to the mean longitudinal electron polarization, defined from P_e^+ and P_e^-, the electron polarization in each electron spin configuration (parallel or anti-parallel). However, some experiments, such as the N-∆ experiment, are also sensitive to the difference ∆P_e between the two spin configurations. One way to measure this quantity is to use the photon polarization reversal, sacrificing the cancellation of helicity-correlated effects. Experimental asymmetries are thus computed from counting rates between two opposite signs of the photon polarization, for each electron helicity [11]. However, the photon polarization is reversed every three minutes only, resulting in a false asymmetry of the same size as the Compton asymmetry itself. If one makes the assumption that these false asymmetries are independent of the backscattered photon's energy, the variation of the Compton asymmetry with respect to energy allows one to isolate ∆P_e. An example is shown in figure 10, where the sum of both experimental asymmetries A_exp^+ and A_exp^- is fitted with a function of the form f(E) = ∆P_e · P_γ · A_C(E) + cst. For a set of left/right photon reversals over several days, we assess ∆P_e for the first time at JLab and find it statistically compatible with zero at a level of 0.3%.

Conclusion

We have continuously measured the CEBAF electron beam polarization over two periods of 30 days at an electron energy of 4.6 GeV and an average current of 40 µA. The use of a highly segmented electron detector in coincidence with the photon detector was a key element in reducing the systematic errors. By using 40-minute runs, a total relative systematic error of 1.2% was achieved. Thanks to our high-gain optical cavity and a double beam-position feedback, a statistical accuracy of 1% could be reached within 25 minutes. In the relative variations of the beam polarization from one run to another the correlated errors cancel out and the systematic error is reduced to 0.7%. Because most of the recent experiments in Hall A take advantage of the highly polarized and intense electron beam available at JLab, the Compton polarimeter has been routinely operated over the last three years to monitor the beam polarization. Its performance is crucial for the upcoming parity experiments [13,14,15], which aim for very accurate measurements (≤ 2%) in an energy range of 0.85 to 3.00 GeV. Such a precision remains challenging and requires detector and laser upgrades, which are under study. At higher energy (6 GeV ...).

Table 2: Relative systematic uncertainties applied to the Compton analyzing power during the ... and GEp experiments [6,7].
Table 4: Review of uncertainties for an absolute (2nd column) and relative (3rd column) electron beam polarization measurement.
Graphene-Based Composites for Phosphate Removal A variety of methods, including chemical precipitation, biological phosphorus elimination, and adsorption, have been described to effectively eliminate phosphorus (P) in the form of phosphate (PO43–) from wastewater sources. Adsorption is a simple and easy method. It shows excellent removal performance, cost effectiveness, and the substantial option of adsorbent materials. Therefore, it has been recognized as a practical, environmentally friendly, and reliable treatment method for eliminating P. Nanocomposites have been deployed to remove P from wastewater via adsorption. Nanocomposites offer low-temperature alteration, high specific surface area, adjustable surface chemistry, pore size, many adsorption sites, and rapid intraparticle diffusion distances. In this Mini-Review, we have aimed to summarize the last eight years of progress in P removal using graphene-based composites via adsorption. Ultimately, future perspectives have been presented to boost the progress of this encouraging field. INTRODUCTION Globally, freshwater for recreation, amusement, farming, and domestic usage is progressively being jeopardized due to the stressors of climate change, growing human demand, and water contamination. 1 Many countries regulate contamination by requiring industries to treat wastewater before dumping it into natural watercourses. However, due to the limited accessibility of high-quality water resources, recycling and repossession of treated wastewater have become imperative to renewable water management. Phosphorus (P) is the primary nutrient pollutant in water bodies such as rivers, streams, lakes, reservoirs, and estuaries. 1c−e P often enters water reservoirs via sewage releases, treated wastewater, agricultural and industrial activities, and mining. Excessive concentrations of phosphorus in water bodies can cause eutrophication. 2 This phenomenon deteriorates water quality due to the overabundant development of plants, e.g., algae. In advanced eutrophication, dissolved oxygen (O 2 ) can become diminished to threateningly low levels, leading to fish kills. Efficient treatment processes for removing phosphorus are essential to address water quality deterioration. 3a−c Conventional procedures such as adsorption, chemical precipitation, biological treatment, and membrane separation have been investigated to eliminate phosphorus. 3d,e Among these processes, the adsorption method is currently the most practical phosphorus removal process in water. Phosphorus is found in numerous forms, such as HPO 4 2− , H 2 PO 4 − , and H 3 PO 4 , in ecological environments. 4 Adsorbed phosphorus can potentially desorb from the adsorbent, and the retrieved phosphate can be further utilized in various applications such as in fertilizers or the production of steel. 5a For any adsorption technique, an adsorbent should have a high surface area, pore volume, and suitable functionalities to sorb contaminants from the soil, water, and air. 5b A variety of porous materials such as granular activated carbon, clays, fly ash, zeolites, furnace slag, metal oxides, graphene, graphene oxide, functionalized graphene, metal−organic frameworks, and carbon nanotubes have been studied as absorbents for phosphorus removal. 6 Over the past few years, graphene-based composites such as graphene, graphene oxide (GO), reduced graphene oxide (rGO), and modified graphene and graphene oxide have drawn interest for wastewater treatment applications. 
7 Graphene is a 2-D carbon nanomaterial with a single layer of sp 2 -hybridized carbon atoms organized in one plane of six-membered rings. Graphene demonstrates 2630 m 2 /g of theoretical specific surface area with robust thermal, mechanical, and electrical characteristics. 8 Functionalized graphene with variable oxygen functionalities is known as graphene oxide (GO). Numerous reviews have been reported on applying graphene-based materials as adsorbents to remove pollutants in water and wastewater treatment. 8c However, to the best of our knowledge, no review on phosphate removal in wastewater treatment using graphene-based composites is presented. This review reports the research in graphene-based composites as adsorbents for phosphate removal in water systems. PHOSPHORUS REMOVAL BY GRAPHENE COMPOSITES Vasudevan and his research group utilized graphene as an adsorbent to remove phosphate from water (C 0 = 100 ppm). Graphene showed an excellent adsorbent capacity of up to 89.37 mg/g at 30°C. 9a They also investigated the effect of pH on phosphate adsorption by varying the solution's pH from 2 to 12. The optimal phosphate adsorption was obtained at a pH of 6−8. Lower adsorption of phosphate was observed in basic conditions (pH > 8) due to the electrostatic repulsion between phosphate ions (PO 4 2− ) and the negatively charged graphene surface. However, phosphate adsorption is enhanced in acidic media and reaches the peak elimination efficacy of 99.1% at pH 7. Phosphate's adsorption on graphene also intensifies at higher temperatures, indicating the endothermic process. A rise in temperature decreases the interaction between the solvent and solute in the solution. This phenomenon then enhances the interaction between the solute (phosphate) and adsorbent (graphene), ensuring the viability of more active sites for phosphate binding. 9b This adsorption method undergoes second-order kinetics, signifying that phosphate's adsorption on graphene is a chemical directing process. GO has also been utilized for the removal of phosphate in wastewater treatment. 10 Similarly, graphene oxide is highly dispersible in water. 10b GO has been shown to eliminate 70% of phosphate (C 0 = 100 ppm) from water at 30°C. By incorporating iron nanoparticles on graphene oxide, GO's intake efficiency toward phosphate was increased from 70% to 80%. 10b Recently, Huang and his research group studied the static adsorption of trace concentrations of phosphorus on reduced graphene oxide (rGO). 10c Their technology removed 98.9% of P from water. Hydrogen bonds between the reduced graphene oxide and phosphate ions enhanced the adsorption process. To prove the concept, acetaminophen, which also forms hydrogen bonds with rGO, was spiked into the real water as a contender for phosphate adsorption. Consequently, the coexistence of acetaminophen reduced the adsorption of phosphate on rGO. Overall, graphene, graphene oxide, and reduced graphene have shown an adequate ability to adsorb phosphate. Furthermore, these studies demonstrate the ability of carbonaceous nanomaterials to treat water. However, these materials are ineffective in removing phosphate in the presence of foreign multianions/copollutants due to their nonspecific selectivity toward phosphate. Additionally, these investigations have been performed at the bench scale. Column studies have not yet been conducted in detail. 
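For readers less familiar with how the capacities and kinetics quoted above are obtained, the sketch below shows the two standard batch-adsorption calculations: the equilibrium uptake q_e = (C_0 − C_e)·V/m with the percent removal, and a fit of the pseudo-second-order rate law often meant by "second-order kinetics" in such studies. The numbers are invented for illustration, and the pseudo-second-order form is an assumption about the model the cited works use, not data taken from any of them.

```python
import numpy as np
from scipy.optimize import curve_fit

def uptake_and_removal(c0, ce, volume_l, mass_g):
    """Equilibrium adsorption capacity (mg/g) and percent removal."""
    q_e = (c0 - ce) * volume_l / mass_g
    removal = 100.0 * (c0 - ce) / c0
    return q_e, removal

def pseudo_second_order(t, q_e, k2):
    """q(t) for the pseudo-second-order rate law dq/dt = k2 * (q_e - q)**2."""
    return (k2 * q_e**2 * t) / (1.0 + k2 * q_e * t)

# Hypothetical batch test: 100 mg/L phosphate, 0.05 L solution, 0.05 g adsorbent.
q_e, removal = uptake_and_removal(c0=100.0, ce=11.0, volume_l=0.05, mass_g=0.05)
print(f"q_e = {q_e:.1f} mg/g, removal = {removal:.1f} %")

# Hypothetical kinetic data (time in min, uptake in mg/g) and a PSO fit.
t = np.array([5, 10, 20, 40, 60, 120], dtype=float)
q = np.array([35.0, 52.0, 68.0, 79.0, 84.0, 88.0])
(q_e_fit, k2), _ = curve_fit(pseudo_second_order, t, q, p0=[90.0, 0.01])
print(f"fitted q_e = {q_e_fit:.1f} mg/g, k2 = {k2:.4f} g/(mg*min)")
```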
PHOSPHORUS REMOVAL BY FUNCTIONALIZED/MODIFIED GRAPHENE Graphene and functionalized graphene can be easily dispersed in water homogeneously due to their low density. 11 Due to their homogeneous nature, these composites have increased the interaction area with phosphorus. 11d Therefore, modified graphene-based composites have been applied widely as adsorbents for phosphate removal. This section will discuss using different types of functionalized and modified graphenebased composites as adsorbents to remove phosphate. a. Lanthanum/Graphene Composites. Metal cations (M n+ ) are often recommended as effective constituents to alter the graphene's negatively charged surface to enhance the loading of anions such as PO 4 2− and NO 3 − . From the previous studies, it has been established that La 3+ ions have a high sorption affinity toward phosphate. 12 Therefore, several investigations have been performed using lanthanum-supported graphene to increase the nanocomposite's adsorption efficiency. 12b In one study, lanthanum hydroxide (LaOH) was immobilized onto graphene nanosheets (GNS) via a microwave-mediated hydrothermal process and utilized for phosphate adsorption from an aqueous solution. GNS-LaOH showed two times higher phosphate adsorption capacity (41.96 mg/g) than lanthanum hydroxide supported on activated carbon fiber (15.3 mg/g). 12c In other studies, 3-D lanthanum oxide immobilized graphene composites exhibited a promising phosphate adsorption capacity of 82.6 mg/g. 12d The addition of coexisting anionic species such as SO 4 2− , NO 3 − , and Cl − (8000 ppm) did not affect these adsorbents' efficiency and showed 100% phosphate (C 0 = 25 ppm) removal. Similarly, Nouri and his research group developed an innovative technology of lanthanum (La 3+ ) hydrate immobilized magnetic reduced graphene oxide (MG@La) nanocomposites for phosphate removal from river and sewage media. The synthesized MG@La nanocomposite demonstrated a high adsorption capacity of 116.28 mg/g for phosphate. 13a The introduction of La 3+ hydrate on graphene sheets also enhanced their affinity toward oxygen−donor compounds. Also, graphene nanosheets with a high surface as support evade the accumulation of La 3+ hydroxide nanoparticles. The presence of a high concentration of coexisting ions, including SO 3 2− , CO 3 2− , Br − , Cl − , Fe 3+ , Cu 2+ , Ca 2+ , K + , Na + , and Zn 2+ , shows only a minor effect on the adsorption efficiency of MG@La toward phosphate. This may be due to a large number of active sites or the high adsorption capacity of MG@ La. Moreover, MG@La showed excellent chemical stability during the leaching test. Even though the developed adsorbent was shaken for 24 h in water with pH range 4−10, a significantly lower amount of La was released. Recently, innovative phosphate ion-imprinted polymer (GO-IIP) was synthesized by Hu et al. and used for phosphate recovery. 13b GO-IIP was fabricated by evolving La(III)-coordinated 3methyacryloxyethyl-propyl bifunctionalized graphene oxide. The developed GO-IIP showed exceptional selectivity and higher adsorption capacity (104.3 mg/g) for phosphate at 25°C . Also, GO-IIP can be utilized up to seven times, with only about 8.95% loss of initial adsorption capacity. Recently, Li and his research group fabricated a membrane by blending a lanthanum supported metal−organic framework with graphene oxide under pressure and tested for the removal of phosphorus in water. The membrane showed a maximum adsorption capacity at pH = 4. 
Also, the phosphorus adsorption removal rate can reach 100% when the contaminated water (<100 ppm) is passed through the membrane during the treatment process. 13c Therefore, La-modified graphene-based composites propose a new method for optimizing the highly effective adsorbent for eliminating pollutants from water samples via adsorption. b. Zirconium/Graphene Composites. Due to their nontoxicity, chemical stability, resistance to oxidation, heterogeneity, and amphoteric nature, zirconium-based oxides have been extensively utilized to eliminate phosphate from water. However, some of these materials have ultrafine characteristics and are very difficult to isolate from water. The fine powders of zirconium-based materials help them to immobilize on appropriate supports to address the leaching issue. 14 To take advantage of the benefits of both zirconiumbased oxides and graphene oxide (GO)/reduced graphene oxide (rGO), the zirconium-loaded reduced graphene oxide (RGO-Zr) adsorbent was synthesized via a one-step green hydrothermal process. These materials were utilized for phosphate adsorption in an aqueous environment under various conditions. 14b RGO-Zr showed an adsorption capacity of 27.71 mg P/g at pH 5 and 25°C. The surface hydroxyl groups may play a key role in phosphate sorption on the adsorbent surface. Phosphate adsorption on the RGO-Zr surface followed the ion exchange and ligand exchange mechanisms in a weakly acidic solution at pH 5 (Scheme 1). Similarly, zirconium-cross-linked graphene oxide/alginate (Zr-GO/Alg) aerogel beads were tested for phosphate uptake performance. 14c The integration of graphene oxide provides the composite beads more strength and uniform pores. The Zr-GO/Alg beads showed the highest adsorption capacity of 189.06 mg/g as established by batch and fixed-bed column studies at an optimal pH range of 2.1−4.0. Also, increases in temperature and amount of adsorbent supported enhanced phosphate uptake. The existence of HCO 3 − and F − repressed phosphate adsorption, whereas the presence of SO 4 2− , NO 3 − , and Cl − did not affect phosphate uptake. By comparing fresh and used Zr-GO/Alg aerogel beads, it was confirmed that the strong binding affinity between phosphate and adsorbents primarily occurred by ligand exchange effect and electrostatic interaction (Scheme 1). Also, spent Zr-GO/Alg aerogel beads were easily regenerated using 0.1 M NaOH solution, and recycled beads showed high adsorption capacity after five reuses. Recently, Hosseinifard et al. developed a zirconium application, an immobilized nanochitosan−graphene oxide (NCS@GO/H−Zr) adsorbent for the removal of phosphate from water. NCS@GO/H−Zr demonstrated an excellent phosphate uptake of 172.41 mg P/g and retained a 76% phosphate adsorption ability after ten recycles. 14d In summary, zirconium-immobilized graphene-based composites have promising applications in the remediation of water eutrophication. However, further research should be conducted through the column for their scalability and industrial applications. c. Layered Double Hydroxide/Graphene Composites. Layered double hydroxides (LDHs) have lamellar hydroxides of divalent (M I 2+ ) and moderately substituted trivalent (M II 3+ ) cations, which are parted by water molecules and anions in the interlayer spaces to stabilize the overall charge. 15 Due to their capacities to exchange ions, several forms of LDHs have been documented as encouraging and heterogeneous materials for phosphate treatment. 
15 However, these processes require prolonged treatment time and insufficient renewal capability. Limited investigations have demonstrated the recycling of used LDHs. 15c Extremely concentrated NaCl or NaOH solutions have been utilized to extract the adsorbed phosphate from LDHs. However, this regeneration process is complicated, is Scheme 1. Plausible Reaction Mechanism of Phosphate Adsorption on the Surface of RGO-Zr in Weakly Acidic Solution unprofitable for commercial applications, and produces a vast amount of harsh wastewater. 15 Similarly, MgMn-LDH has been employed as a prospective alternative for removing phosphate due to its high stability in solutions, its selectivity for phosphate ions, and the low cost of manganese compounds. 16 By utilizing MgMn-LDH, Tai and his research group established an ultraefficient method of a continuous electrosorption−desorption system for the selective adsorption and discharge of phosphate. They synthesized GO/magnesium manganese-layered double hydroxide (LDH) composites, GO/MgMn-LDH-300, by calcinating at 300°C. 16 In this process, adsorbed phosphate can be quickly discharged by monitoring the applied voltage. The graphene oxide incorporated within the layered structure enhances the surface area and produces additional mesopores to capture phosphate. Also, oxidation of Mn increases when oxygen-carrying functionalities of GO interact with metal ions. This phenomenon generates different active sites for phosphate adsorption. The synthesized GO/MgMn-LDH-300 demonstrated ultrahigh productivity, selective phosphate elimination, and outstanding recyclability, with phosphate uptake and release rates of 0.97 and 3.56 mg P/g/min, respectively. 16 Recently, the same research group again synthesized a scalable and sustainable hierarchical porous adsorbent using inexpensive Garcinia subelliptica leaves as a bioderived natural template for enhanced phosphate adsorption. 16b First, MgMnlayered double hydroxide (MgMn-LDH) and GO were grown in situ on Garcinia subelliptica leaves to get L-GO/MgMn-LDH. Then, L-GO/MgMn-LDH was calcinated at 300°C to obtain the final hierarchical porous L-GO/MgMn-LDH-300 adsorbent. The leaves are composed of vessels and fibers and possess a natural hierarchical porous structure. Therefore, they can act as a potential biotemplate. The L-GO/MgMn-LDH-300 adsorbent selectively uptakes phosphate and shows high, reusable phosphate adsorption capacity and a desorption rate of 244.08 mg P/g and 85.8%, respectively. 16b Overall, layered double hydroxide/graphene composites are capable, scalable, suitable, and recyclable selective phosphate adsorbents. These techniques propose an appropriate process for efficient and cost-effective phosphate recycling from water. d. Iron-Based Nanomaterials/Graphene Composites. In combination with graphene or its derivatives, iron oxide nanomaterials show great potential in catalysis, sensing, water, and wastewater treatment. 17 Previously, an innovative triethylene tetramine-functionalized magnetic graphene oxide chitosan composite (TETA-MGO/CS) with a high uptake efficiency toward phosphate has been synthesized. 18a The maximum adsorption capacity of TETA-MGO/CS was found to be 353.36 mg/g at pH 3. The adsorption methods achieved equilibrium in 50 min. Also, adsorbed PO 4 3− ions could be released from TETA-MGO/CS and recycled three times. Therefore, TETA-MGO/CS has been investigated as an efficient and renewable adsorbent in phosphate removal. 
Losic and his research group developed a technology of 3-D graphene aerogels fabricated with goethite (α-FeOOH) and magnetite (Fe 3 O 4 ) nanoparticles for capturing phosphates in water. 18b These synthesized aerogels demonstrated a high capacity to eliminate phosphate (C 0 = 200 ppm) up to 350 mg/g. Similarly, α-Fe 2 O 3 -immobilized graphene oxide (GO-Fe 2 O 3 ) was utilized for the adsorption of phosphate. 18c GO-Fe 2 O 3 adsorbed 93.28 mg/g phosphate (C 0 = 50 ppm) at pH 6.0 and 25°C. The synthesized GO-Fe 2 O 3 showed very stable phosphate adsorption capacity between the pH range of 2.0− 10.5 and the temperature range of 20−60°C. GO-Fe 2 O 3 achieved adsorption equilibrium within 5 min. Mainly, GO-Fe 2 O 3 follows the electrostatic attraction (physical adsorption) and ion exchange (chemical adsorption) mechanisms to remove the phosphate in treatment application. In another study, akaganeite nanorods (β-FeOOH) integrated on GO sheets were utilized to remove phosphate from water at pH 7 and 30°C. 18d The incorporation of GO during the preparation of β-FeOOH nanorods raises the characteristic ratio of rods from 5 to 7. The kinetics data demonstrated second-order kinetics, and the equilibrium condition was attained within 2 h. The removal of phosphate was enhanced at a lower pH and decreased at a higher pH solution. β-FeOOH/GO displayed good recyclability at different pH solutions and showed a maximum of 78% at pH 7 and 30°C. Overall, iron-based nanomaterials/graphene composites are stable, recyclable, and scalable adsorbents to remove phosphate in wastewater treatment applications. Therefore, these materials can be an excellent choice to deal with phosphatecontaminated water for commercial purposes. e. Other Miscellaneous Graphene-Based Composites. Graphene-based composites possess high chemical stability and good mechanical strength. 19 Titania-functionalized graphene oxide (TiO 2 /GO) has been widely utilized in water treatment applications compared to other oxidative derivatives. 19c The large surface area of graphene oxide and its high uptake efficiency also boost titanium/graphene-based composites' adsorption capacity. Sakulpaisan and his research group synthesized titania-functionalized graphene oxide by the sol− gel method. TiO 2 /GO composites yielded better adsorption results than titania and graphene oxide. 20a The synthesized TiO 2 /GO showed 30.4 mg/g of phosphate adsorption capacity at pH 6. Phosphate adsorption decreases at high pH levels due to a rise in the repulsion between phosphate anions and the oxygen-carrying functional group of adsorbent surfaces. Similarly, Martínez and his research group performed a comparative study between GO and GO-functionalized silver nanoparticles (GO@AgNPs) as adsorbents to eliminate phosphate from water samples. An amount of 20 mg of GO removed 75% phosphate (C 0 = 30 ppm) at pH 10. Only 500 μL of GO@AgNPs eliminated 100% phosphate (C 0 = 30 ppm) at pH 7. 20b Recently, Keggin-type aluminum polyoxocation species, Al30, modified graphene oxide nanosheets, and triaminotriazine-functionalized GO composites were investigated for the efficient removal of phosphate. 20c These adsorbents are cost-effective and can be reused up to several cycles without significant loss of their uptake efficiency. Further, these heterogeneous composites could be synthesized at a large scale for commercial use in the industrial application of wastewater treatment. f. Conclusions and Future Perspectives. 
Graphene is increasingly appealing to more researchers and scientists due to its exceptional thermal, electronic, and mechanical characteristics. Modified graphene-based materials have been synthesized by cross-linking organic scaffolds via noncovalent and covalent interaction and impregnating inorganic metals. These modified/functionalized graphene-based composites demonstrate exceptional and enhanced abilities in numerous fields. In this mini-review, we summarize the applications of graphene and functionalized graphene-based composites in removing phosphorus in the form of phosphate. The elimination of phosphorus from contaminated water is a worldwide concern as an excess of phosphorus instigates negative ecological effects. Excess phosphorus can cause eutrophication, which further leads to inferior water quality and marine life damage. Many treatment processes have been investigated to eliminate phosphorus from water to stop the excess toxic ecological effects from phosphorus. Among many techniques, the adsorption method has its exclusive benefits for practical and large-scale applications, such as high efficiency and easy operation. The present mini-review focuses on phosphate removal in wastewater treatment using graphene-based composites. Several metals (e.g., titania, zirconium, iron, layered double hydroxide, lanthanum, aluminum, and silver) and modified graphene composites have been studied for the effective adsorption of phosphate. Among these, iron-based nanomaterials/graphene composites and layered double hydroxide/ graphene composites have shown promising, stable, recyclable, and scalable adsorbents for the selective removal of phosphate. Some biomasses (e.g., cellulose and chitosan) and functionalized graphene-based composites have also been investigated for the cost-effective removal of phosphate from water. These graphene-based adsorbents can be an excellent alternative to treat phosphate. However, most of the studies have been performed at the bench scale. Further research needs to be conducted at the pilot scale, including column study for their industrialization. This will most likely be done by further examining phosphorus elimination mechanisms and favorable removal conditions on a large scale during column studies. He is also an adjunct professor at Wright State University, in Dayton, OH. He has received several Scientific and Technological Achievement Awards (STAA) from the EPA, including the National Risk Management Research Laboratory Goal 1 Award. He is the recipient of "Chemist of the Year" from the American Chemical Society. He is a member of the editorial advisory board of several international journals, has published over 250 papers in reviewed journals with a citation index ∼14 900 (h-index 58), and holds several patents. He has worked in the areas of water research, nanomaterials, nanotechnology, green chemistry, polymer blends, solid coatings, solid-state chemistry, and drug delivery. ■ AUTHOR INFORMATION Dr. Sanny Verma received his M.Sc. degree in organic chemistry in 2007 from Delhi University, Delhi, and his Ph.D. degree in 2014 from the Indian Institute of Petroleum (IIP), Dehradun, Uttarakhand, under the supervision of Dr. Suman Lata Jain and Dr. Bir Sain. He has worked on Microfluidics with Prof. Dong Pyo Kim at POSTECH, South Korea, and as an ORISE postdoctoral fellow with Dr. R. S. Varma and Dr. M. N. Nadagouda at the U.S. 
EPA, where he conducted research on nanocatalysis, on greener methods for synthesizing bioactive compounds, on the synthesis of value-added products from biological feedstock, on the conversion of biomass into industrial feedstock, and on the treatment of per-and polyfluoroalkyl substances (PFASs). Currently, Sanny is working as a chemist at Pegasus Technical Services Inc. (on-site contractor to the U.S. EPA), where he is carrying out research on the treatment of emerging contaminates and sustainable chemistry. ■ ACKNOWLEDGMENTS The United States Environmental Protection Agency through its Office of Research and Development funded and managed the research described here. It has been subjected to the Agency's administrative review and approved for publication. The views expressed in this article are those of the author(s) and do not necessarily represent the U.S. Environmental Protection Agency's views or policies. Mention of trade names or commercial products does not constitute endorsement or recommendation for use.
Small Fermi pockets intertwined with charge stripes and pair density wave order in a kagome superconductor The kagome superconductor family AV3Sb5 (A=Cs, K, Rb) emerged as an exciting platform to study exotic Fermi surface instabilities. Here we use spectroscopic-imaging scanning tunneling microscopy (SI-STM) and angle-resolved photoemission spectroscopy (ARPES) to reveal how the surprising cascade of higher and lower-dimensional density waves in CsV3Sb5 is intimately tied to a set of small reconstructed Fermi pockets. ARPES measurements visualize the formation of these pockets generated by a 3D charge density wave transition. The pockets are connected by dispersive q* wave vectors observed in Fourier transforms of STM differential conductance maps. As the additional 1D charge order emerges at a lower temperature, q* wave vectors become substantially renormalized, signaling further reconstruction of the Fermi pockets. Remarkably, in the superconducting state, the superconducting gap modulations give rise to an in-plane Cooper pair-density-wave at the same q* wave vectors. Our work demonstrates the intrinsic origin of the charge-stripes and the pair-density-wave in CsV3Sb5 and their relationship to the Fermi pockets. These experiments uncover a unique scenario of how Fermi pockets generated by a parent charge density wave state can provide a favorable platform for the emergence of additional density waves. Here we use a combination of angle-resolved photoemission spectroscopy (ARPES) and spectroscopic-imaging scanning tunneling microscopy (SI-STM) to unveil an intimate connection between density waves and reconstructed Fermi pockets in AV 3 Sb 5 . Our ARPES measurements reveal six small ellipsoidal hole-like pockets generated by the 2 x 2 CDW state forming at T*. Scattering between these pockets leads to new, dispersive wave vectors q* in SI-STM measurements, oriented along each reciprocal lattice direction. While the three q* peaks all lie along the Γ-K directions of the original Brillouin zone when the 4a 0 charge-stripe order is absent, we discover that the morphology of these scattering peaks changes dramatically when the charge-stripe order sets in. Namely, the dispersive nature of one q* peak becomes markedly suppressed, while the other two q* peaks remain dispersive and exhibit a slight deviation from the high-symmetry axes. This strongly suggests further reconstruction of the pockets as the charge-stripe order emerges, which could explain some of the smaller frequencies in quantum oscillations experiments of CsV 3 Sb 5 33,34,38-41 . Remarkably, the Cooper pair-density wave 19 that condenses in the superconducting state emerges at the same q* wave vectors that connects the hole pockets in reciprocal space. Our experiments reveal a direct link between vanadium kagome orbital derived Fermi pockets, an inherent feature of the bulk electronic band structure, and the surprising cascade of lower-dimensional density waves in AV 3 Sb 5 . Bulk single crystals of AV 3 Sb 5 exhibit a layered structure consisting of V 3 Sb 5 layers stacked between A-site alkali metal layers (Fig. 1a). The crystals tend to cleave between the A-site layer and the Sb layer (Methods), resulting in two different types of surfaces: A termination and the Sb termination [17][18][19][20]28,42 . In STM experiments, we focus on the Sb surface positioned directly above the kagome plane due to its structural stability and the direct access to bulk vanadium-derived kagome bands 18 . 
Similar to previous experiments [18][19][20]42 , STM topographs of the Sb surface of CsV 3 Sb 5 show a unidirectional electronic modulation related to the 4a 0 charge-stripe order (Fig. 1b,e), which forms below about 50-60 K 18 . As more clearly seen from the Fourier transform of the STM topograph at low temperature (Fig. 1b,e), the 4a 0 charge order spatially co-exists with the 2a 0 by 2a 0 charge-density wave 6,32,43 . By replacing Cs with K in an identical AV 3 Sb 5 crystal structure, the long-range 4a 0 charge-stripe order vanishes and cannot be detected on the equivalent Sb surface termination at 4.5 K (Fig. 1c,f) 17,28 , while most of the other known properties remain qualitatively the same. For example, superconductivity in both materials emerges from the same metallic normal state with a 2a 0 by 2a 0 CDW in the kagome plane, which also breaks rotational symmetry of the lattice 17,28,[44][45][46] . In STM measurements, this high-temperature rotation symmetry breaking can be visualized by anisotropic CDW amplitudes 28,44,45 , with one preferred direction being markedly different from the other two that are nearly indistinguishable (insets in Supplementary Figure 1b,e). This symmetry breaking gives rise to three types of domains rotated by 120 degrees from one another observed by optical birefringence measurements 46 . Muon spin spectroscopy 47,48 , magneto-optical Kerr measurements 46 and circular dichroism 46 have also revealed signatures of time-reversal symmetry breaking in both CsV 3 Sb 5 and KV 3 Sb 5 . The two materials provide an exciting playground for the exploration of Fermi surface reconstruction driven by emergent density waves, and a fortuitous opportunity to use KV 3 Sb 5 as a foil for comparison with CsV 3 Sb 5 to understand the emergence of the charge-stripe order. We first measure the temperature-dependent Fermi surface and energy-momentum dispersions of AV 3 Sb 5 using ARPES. While previous experiments investigated the overall renormalization of the Fermi surface driven by the 2 x 2 CDW transition 22,49-52 , here we specifically focus on the formation of small Fermi pockets to uncover their renormalization in connection to the emergent density waves using a combination of ARPES and SI-STM. Our ARPES measurements clearly reveal the formation of Fermi pockets, which can be seen in the second Brillouin zone in all AV 3 Sb 5 systems studied here: KV 3 Sb 5 (Fig. 2a), CsV 3 Sb 5 and RbV 3 Sb 5 (Supplementary Figure 2). Notably, these pockets appear at the M 2 points of the reduced Brillouin zone (blue dashed hexagons in Fig. 2a) induced by CDW order, and they disappear in the normal state above the CDW transition (Fig. 2b). Further insights into the formation of small pockets can be obtained by examining the detailed energy-momentum dispersions. Consistent with the previous reports 22,49-52 , the p-type van Hove singularities (vHSs) from the K1 and K2 bands are clearly visible in the ARPES spectra above the CDW transition (Fig. 2h,i). In the CDW state, the main CDW gap of ~80 meV opens at the vHS of the K1 band, forming the 'M'-shaped dispersion along the K -M -K direction ( Fig. 2j) 22,53 . Importantly, this opening of the CDW gap at the M point drives the reconstruction of the K1 Fermi surface around the M 2 point, resulting in the observed small Fermi pockets (Fig. 2c,e) 52 . The pockets exhibit hole-like dispersion along both axes of the ellipse (Fig. 2e,m). 
We note that along the minor axis, the hole-like dispersion is a consequence of the back-folding of the K1 band across the M 2 point (Fig. 2g,m) with the reduced spectral weight of the folded side (see Fig. 2a,c). In addition, the K2 and K2 bands show clear back-banding in the CDW state, suggesting the existence of a CDW gap (red and blue arrows in Fig. 2m). The overall spectral weight of the observed Fermi surface and dispersions are closely reproduced by DFT calculations (Fig. 2f,g, Methods) 54 . We note that while earlier work reported evidence suggesting the existence of Fermi pockets in KV 3 Sb 5 52 , our combination of ARPES and DFT unambiguously demonstrates the formation of small Fermi hole pockets in the CDW state of all AV 3 Sb 5 systems, arising from the interplay of vHS and the CDW gap. Moreover, the estimated area of the elliptical pockets (obtained as πꞏk Fa ꞏk Fb , with k Fa and k Fb being the major and the minor radius of the pocket, respectively) found in the present study translates to a quantum oscillation frequency of ~86.8 ± 26.2 T via the Onsager relation, in agreement with recent quantum oscillation studies within the experimental resolution (Supplementary Table 1). The quantitative correspondence between the ARPES and quantum oscillation experiments further confirms the bulk nature of the observed Fermi pockets. Complementary to ARPES measurements of the electronic band structure in the normal state, we image the scattering and interference of electrons using SI-STM. Fourier transforms of dI/dV(r, V) maps on the Sb surface of the two systems display similar, dispersive scattering wave vectors q 1 and q 2 (Supplementary Figure 1 and 3). In addition to these previously reported wave vectors, our high-resolution SI-STM measurements reveal a set of new, dispersive scattering wave vectors q* i , where i = a, b or c lattice direction (Fig. 3a). For simplicity, we first examine q* i in KV 3 Sb 5 , in the absence of 4a 0 charge-stripe ordering. In contrast to q 2 scattering wave vectors that are markedly unidirectional (Supplementary Figure 1), q* i wave vectors appear along all three atomic Bragg peak Q i Bragg directions (Fig. 3a,b). They are detectable around Fermi level and disperse with energy in a similar manner along the three lattice directions (Fig. 3d-f, Supplementary Figure 4). The dispersive nature of q* i suggests that these wave vectors originate from scattering between different points on the constant energy contour, which changes concomitant with the band structure evolution. The magnitude and the direction of the dispersion of q* i from SI-STM measurements are beautifully consistent with the pockets extracted from ARPES (Supplementary Figure 5). The scattering primarily occurs between the outer sides of the pockets with the significantly larger spectral weight as observed in ARPES measurements (Supplementary Figure 5, 6). It is important to note that these electronic states correspond to residual electronic states near Fermi level after band folding and gapping induced by the 2 x 2 CDW state in the kagome plane. Taken together, our data demonstrates an intimate relationship between the emergence of q* i and the existence of Fermi pockets near the reduced Brillouin zone boundary (Fig. 3c). Interestingly, the morphology of q* i changes profoundly as the charge-stripe order forms in CsV 3 Sb 5 . While q* b and q* c are still present and disperse with energy, we no longer observe a dispersive wave vector along the third direction (Fig. 4a,b). 
Instead in the vicinity we only detect a non-dispersive peak (Fig. 4b, top panel). As q* i represents the fingerprint of scattering between the small pockets, the change in q* a is directly tied to the additional renormalization of the hole pockets, which in turn accompanies the emergence of the charge-stripe order. A possible schematic of the charge-stripe order driven modification of the constant energy contour is shown in Fig. 4c, where the size and the shape of the pockets connected by the Q 4a0 wave vector changes. Another intriguing aspect of this band structure change is that q * b and q * c now bend away from the high-symmetry Γ-K directions (Fig. 4,f). We note that this is not the case for the equivalent vectors when the 4a 0 charge order is absent (Supplementary Figure 7). This deviation of q * b and q * c from high-symmetry directions demonstrates additional renormalization of the Fermi surface. The measurements above explored the effects of density waves in the normal state. In the superconducting state of CsV 3 Sb 5 , superconducting gap and the coherence peak height also vary spatially in a periodic manner, as reported in a previous experiment 19 . Such modulations are a hallmark of a Cooper pair-density wave phase. The period of the emergent pair-density wave in CsV 3 Sb 5 is about 4a 0 /3 by 4a 0 /3 in-plane, and it coexists with the 4a 0 charge-stripes and the 2a 0 by 2a 0 CDW 19 . Interestingly, the 4a 0 /3 by 4a 0 /3 modulation in real-space translates to about 0.75 Q Bragg in reciprocal space, which is exactly the same magnitude and the direction of q* reciprocalspace wave vectors uncovered here. As such, our work also sheds light on the spectroscopic origin of the pair density wave related to the same Fermi pockets (see schematic in Fig. 1d) originally formed by band folding in the 2a 0 by 2a 0 CDW state. The difficulty of pinpointing spectroscopic origins of the 4a 0 charge-stripe order thus far and its apparent absence in X-ray diffraction experiments prompted a hypothesis that the charge-stripe order may be a surface reconstruction 36 . Our data reveals how the formation of the 4a 0 charge order accompanies the renormalization of the electronic band structure tied to small hole pockets that are intrinsic parts of the bulk electronic band structure. This in turn highlights charge-stripe ordering as an inherent feature that can be realized in this family of kagome superconductors. It was theoretically proposed that favorably positioned Fermi pockets could drive the emergence of the 4a 0 charge order 15 (Figure 1d). The combination of our ARPES and STM data suggests that such condition may indeed be satisfied. Further supporting the notion that unidirectional charge ordering can be realized in kagome superconductors, we note that recent scattering measurements unveiled signatures of unidirectional bulk charge correlations in doped CsV 3 Sb 5 55 . Interestingly, we also mention that the sole presence of these Fermi pockets is not sufficient to drive the formation of the charge stripe-order in all AV 3 Sb 5 members, as q* is still present in KV 3 Sb 5 (Fig. 2) although no long-range charge-stripe order is detected in STM measurements 17,28 . It will be of interest to explore if the pair-density wave also emerges in the superconducting state of KV 3 Sb 5 . Future experimental and theoretical work will be necessary to fully understand the physical mechanism necessary to drive these phenomena. 
The intriguing renormalization of the Fermi surface in CsV 3 Sb 5 in the presence of charge-stripe order suggests that a subset of Fermi pockets reconstruct in reciprocal-space. It is conceivable that this in turn may explain some low frequencies in quantum oscillation experiments that are difficult to be captured by a theoretical model that only takes into account the 2a 0 x 2a 0 CDW in the kagome plane. We note that the inevitable presence of charge-stripe domains of sub-micron scales 18 hinders the observation of the additional renormalization of pockets by ARPES, as it averages over larger regions of the sample likely spanning stripe domains oriented along all 3 lattice directions. Shubnikov-de Hass 33,38 and de Haas-van Alphen 40 quantum oscillation experiments have detected many low-frequency orbits, several of which carrying a nontrivial Berry phase 38,40 . The pockets observed here are due to Fermi surface reconstruction and could acquire concentrated Berry curvature and orbital magnetic moments if time-reversal symmetry is broken in the CDW state 15 . As a result, these pockets may be tunable by magnetic field, which could be explored in future field-sensitive experiments.
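The correspondence between pocket size and quantum-oscillation frequency invoked above follows from the Onsager relation, F = (ħ/2πe)·A_k, with A_k the extremal Fermi-surface cross section; for the elliptical pockets this area is taken as π·k_Fa·k_Fb, as in the text. The snippet below simply evaluates that relation; the semi-axes used are round illustrative numbers, not the measured values.

```python
import math

HBAR = 1.054_571_817e-34       # reduced Planck constant, J*s
E_CHARGE = 1.602_176_634e-19   # elementary charge, C

def onsager_frequency(k_fa, k_fb):
    """Quantum-oscillation frequency (tesla) of an elliptical pocket with
    semi-axes k_fa and k_fb given in 1/angstrom."""
    area = math.pi * (k_fa * 1e10) * (k_fb * 1e10)   # extremal area in 1/m^2
    return HBAR / (2.0 * math.pi * E_CHARGE) * area

# Illustrative semi-axes of a small elliptical pocket (1/angstrom).
print(f"F = {onsager_frequency(0.09, 0.03):.0f} T")
```

Pockets of this order of size give frequencies of a few tens of tesla, which is the regime of the low-frequency orbits discussed above.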
Differential effects of morphine on operant escape behavior and averse symptom induced by dorsal central gray stimulation in rats. The involvement of a central opiate mechanism in the operant escape behavior induced by dorsal central gray (DCG) stimulation was investigated in rats. Morphine (2-10 mg/kg, i.p.) produced a rise in the DCG-stimulation threshold, but did not suppress rapid running as an averse symptom. Naloxone alone affected neither the threshold nor the averse symptom. Nevertheless, naloxone counteracted morphine-induced increments in the threshold. These results suggest that the opiate system may be indirectly involved in certain aspects of the operant escape behavior induced by the DCG-stimulation. It is known that the mesencephalon or the periventricular system respectively plays an important role in the integration of aversion in animals (1)(2)(3)(4). Stimulation of the mesencephalic dorsal part of the central gray (DCG) caused strong behavior indicating averse sensation, such as jumping, running and escape behavior (2)(3)(4). The animals learn to stop the DCG stimulation in an operant situation (operant escape response). As to the involvement of endogenous amines of the brain in this behavior, Kiser et al. (5) showed that the averse behavior induced by DCG-stimulation was affected by the manipulation of brain serotonin function. Cazala and Garrigues (4) reported that 5-methoxy-N,N-dimethyl-tryptamine decreased the latency time of the escape response induced by the DCG-stimulation in mice. In addition, our previous study showed that the DCG-stimulation threshold for induction of the escape response was increased by 5-hydroxytryptophan (5-HTP) and chlorimipramine, and it was decreased by p-chlorophenylalanine (PCPA) and cyproheptadine (6). Furthermore, we have observed that cholinergic drugs such as physostigmine and arecoline caused an increase of the stimulation threshold, and anticholinergic drugs such as scopolamine and atropine caused a decrease of the threshold (7). These findings indicate that the operant escape behavior induced by the DCG-stimulation is related not only to the central serotonergic mechanism but also to a cholinergic mechanism. On the other hand, as to the involvement of the opiate mechanism on mesencephalic stimulation, Kiser et al. (8) and Schmitt et al. (9) observed that latencies of escape behavior induced by DCG-stimulation were increased by electrical stimulation of the dorsal raphe nuclei which contain some serotonergic cells. Kiser and German (10) suggested that serotonin suppresses the susceptibility to foot-shock and potentiates the analgesic action of morphine. Furthermore, Jenck et al. (11) and Moreau et al. (12) observed that the microinjection of morphine into the DCG or the ventral tegmentum suppressed the escape responding induced by the DCG-stimulation. However, it is not yet clear whether the opiate mechanism may be directly involved in the development of operant escape behavior induced by the DCG-stimulation or not. So, the purpose of this study was to determine whether an opiate mechanism may be directly involved in the DCG-stimulation, by examining the effect of morphine and the combination of the opiate antagonist naloxone with morphine on the operant escape responding as well as averse symptom induced by the stimulation. Seventeen male Wistar rats weighing 250 -300 g at the time of electrode implantation were used as subjects. 
They were housed by one or two in plastic cages (26 X 36 X 25 cm) and were given food and water ad libitum throughout the experiment. All the animals were maintained on a 12-hr light-dark cycle (light on 900 to 2100) at a room temperature of 22-24T with a relative humidity of 50-60%. All the animals were anesthetized with Na pentobarbital at 45 mg/kg, i.p., and placed on a stereotaxic instrument (Takahashi). Each animal was chronically implanted with a bipolar stainless steel electrode (250 ,um in diameter, insulated except for the tip) aimed at the DCG (coordinates A, 0.6; L, 0.6; H, 0.4 mm, according to the rat brain atlas (13). The electrodes were bilaterally inserted into the target sites at an angle of 15° in order to avoid injuring the sagittal venous sinus as far as possible. All animals were given 150,000 units of penicillin subcutaneously after the surgery. At the end of the experiment, the animals were given an overdose of Na pentobarbital, and the head was perfused with 0.9% saline and subsequently with 10% formalin via the heart. Then the brain was immersed into a formalin-saline solution at least for one week. After removing the skull, each brain was cut in 40-,um sections using a cryostat (Chiyoda). The sections were stained with cresylviolet. All electrode tips were located in or on the border of the DCG by the inspection of the stained sections. The experiment was carried out in a Skinner box (30 X 27 X 25 cm) with a metal lever, which was already described in the previous reports (7). A lever press turned off or decreased the brain stimulation current. A swivel was mounted in the ceiling of the Skinner box holding the electrode lead, allowing the animal to move freely. The stimulation current was derived from a square-wave stimulator (Watanabe) which allowed a decremental stimulation paradigm. After a recovery period of at least one week from implantation surgery, each animal was placed into the Skinner box, and the stimulation cable was connected to the electrode plug mounted on the animal's head. The DCG was stimulated with negative squarewave pulses (5 trains/sec, train duration = 100 msec, pulse duration = 0.5 msec, stimulation current = 100 -600 ,u A) . The stimulation current was gradually increased until the animal began to show averse behaviors such as rapid running around the box and jumping, defecation and urination. These rats were trained to press a lever to stop the DCG-stimulation (escape response). Only animals that showed the stable escape response were trained with the DLP paradigm in which the rat itself could decrease the DCG-stimulation by pressing a lever. In the DLP paradigm, one trial consisted of a 120-sec stimulation period alternating with a 60-sec rest period. During the stimulation period, each lever press decreased the DCGstimulation current by 5% of its initial level. The initial stimulation current was chosen for each animal such that its average lever pressing rate was 4-6 per trial. When the threshold was stable for three successive days, the animals performed 10 trials at 0.5, 1, 2, 4 and 24 hr after the drug injection in the morphine administration experiment or at 0.5 hr after morphine administration in the morphine + naloxone combination experiment. After that, the animals were allowed to rest for at least 10 days between the drug administrations. All drugs were dissolved in saline solution (vehicle) and administered intraperitoneally. Control animals were given vehicle alone (0.1 ml per 100 g body weight). 
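As a concrete illustration of the DLP paradigm described above, the short simulation below steps through one stimulation period in which every lever press lowers the current by 5% of its initial level. The press count is generated at random purely for illustration; the paradigm itself specifies only the decrement rule, the 120-s/60-s trial timing, and the 4–6 presses per trial targeted when choosing the initial current.

```python
import random

def simulate_dlp_trial(initial_current_ua, mean_presses=5, seed=1):
    """Current (uA) at the end of one DLP stimulation period: each lever
    press removes 5% of the initial stimulation current."""
    random.seed(seed)
    n_presses = max(0, round(random.gauss(mean_presses, 1)))
    final_current = initial_current_ua * (1.0 - 0.05 * n_presses)
    return n_presses, max(final_current, 0.0)

presses, current = simulate_dlp_trial(initial_current_ua=300.0)
print(f"{presses} presses -> current at end of trial: {current:.0f} uA")
```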
The experimental data are represented as the mean percent change of the stimulationthreshold. The Mann-Whitney U-test was used for statistical analysis. Figure 1 shows the effects of morphine at various doses on the DCG-stimulation threshold. Morphine at doses of 2-10 mg/kg caused a dose-dependent increase of the DCG-stimulation threshold at 1-4 hr after the administration. The peak time of this drug effect at the doses used was 1-2 hr after the administration. Significant differences as compared with the vehicle administered control values were found at 1 and 2 hr (U = 2, P < 0.05, respectively) after 2 mg/kg administration; at 1, 2 (U = 0, P < 0.01, respectively) and 4 hr (U = 2, P < 0.05) after 5 mg/kg administration; and at 1, 2 (U = 0, P < 0.01, respectively) and 4 hr (U = 2, P < 0.05) after 10 mg/kg administration. By 24 hr after the administration, the threshold increasing effect of morphine was no longer observed. Rapid running behavior was observed 1 and 2 hr after the morphine administration. On the other hand, naloxone, an opioid antagonist, at doses of 5 and 10 mg/kg did not affect the DCG-stimulation threshold and general behavior such as running (without figure). The combination effect of morphine and naloxone on the stimulation threshold is shown in Fig. 2. These measurements were carried out 60 min after 10 mg/kg morphine administration, and 10 mg/kg of naloxone was administered 30 min before testing the stimulation threshold. The DCG-stimulation threshold was markedly increased by morphine at 1, 2 and 4 hr after the administration, and the effects of morphine were completely antagonized by naloxone. There were significant differences (U = 0, P < 0.01, respectively) between the morphine group and the morphine + naloxone group at 1 and 2 hr after morphine administration. It is said that the operant escape behavior induced by DCG-stimulation may be related not only to a central serotonergic mechanism but also to a cholinergic mechanism (6,7). Especially as to the involvement of serotonin in this behavior, a serotonin precursor and agonist, 5-HTP and chlorimipramine, increase the DCG-stimulation threshold while the antagonists, cyproheptadine and PCPA, decrease the threshold (5,14). On the other hand, serotonin decreases susceptibility to foot-shock, i.e., the analgesic action of morphine is potentiated by an increase of the brain serotonin level, and a microinjection of morphine near to the dorsal raphe nuclei causes a strong analgesic action that is antagonized by anti-serotonergics (5). These data indicate that the opiate mechanism may be involved in operant escape behavior induced by DCG-stimulation. In the present experiment, morphine increased dose-dependently the DCG-stimulation threshold, but the symptoms such as rapid running behavior induced by DCG-stimulation were not affected. The opiate antagonist naloxone did not have any effect on the stimulation threshold and behaviors such as rapid running. However, naloxone suppressed completely the morphine-induced increase of the stimulation threshold. The authors already observed that morphine did not suppress the lever pressing for stopping the DCG-stimulation under the intensity of initial stimulation in the DLP paradigm (M. Moriyama et al., unpublished data). These observations indicate that morphine may affect the threshold of DCGstimulation. 
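The group comparisons above rely on the Mann-Whitney U-test applied to small samples of percent threshold change. The snippet below shows how such a comparison is typically run; the two groups of values are fabricated for illustration and are not the study's data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical percent change in DCG-stimulation threshold, 1 hr after injection.
vehicle = [2.0, -1.5, 4.0, 0.5, 3.0]
morphine = [35.0, 48.0, 27.0, 52.0, 41.0]

u_stat, p_value = mannwhitneyu(morphine, vehicle, alternative="two-sided")
print(f"U = {u_stat:.0f}, p = {p_value:.3f}")
```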
On the other hand, Kiser and German (10) observed that escape behavior induced by stimulation of various brain sites was suppressed by 15 mg/kg morphine, and catalepsy was simultaneously induced by that dose. Moreau et al. (12) reported that the depressant effect of morphine when administered into the ventral tegmentum in the operant escape behavior induced by the DCG-stimulation was unlikely to be due to the impairment of gross motor activity (morphine conversely provoked a behavioral activation). Graeff and Filho (15) observed that voca- Combined effect of morphine with naloxone on operant escape behavior induced by DCG-stimulation. Each column represents a mean % change (± S.E.) of the DCG-stimulation threshold compared to the pre-administration test. Measurements were performed at 1, 2, 4 and 24 hr after intraperitoneal administration of 10 mg/kg morphine. Naloxone, 10 mg/kg, i.p., was administered 1 hr before the DCG-stimulation xx threshold test . v, vehicle group; m, morphine group; m + n, morphine + naloxone group; x P < 0.05; lization was induced by peripheral averse stimulation such as foot-shock, but not by intracranial averse stimulation. Morphine does not have such a strong action on escape behavior for brain averse stimulation, suggesting that DCG-stimulation may not be considered the same phenomenon as sensory pain. In the present study, the morphine-induced increment in the DCG-stimulation threshold was observed at 2 mg/kg and higher doses, but rapid running as an averse symptom was not suppressed. If DCG stimulation itself would be the same as sensory pain, morphine should suppress simultaneously the lever pressing for escaping the DCG stimulation and the rapid running induced by the stimulation. This indicates that the opiate system may be involved in certain aspects of operant escape behavior induced by the DCG-stimulation, but this involvement is not a direct one. Recently, Ableitner and Herz (16) demonstrated that small doses of a selective x-opioid agonist, U-50,488H, dose-dependently attenuated the response to noxious stimulation such as heat and pressure, suggesting the involvement of x-opioid receptors in the aversive response. So, the application of a selective xopioid receptor agonist or its antagonist in the experimental series should clarify whether opioid receptors participate in the DCG-stimulation induced behavior.
Fear of COVID-19 Higher among Food-Insecure Households: A Model-Based Study, Mediated by Perceived Stress among Iranian Populations

Objective: The COVID-19 pandemic is a crisis accompanied by multiple psychological consequences (including fear of COVID-19) and threatens the food security status of millions of people. This study aimed to examine the association between fear of COVID-19 and food insecurity, mediated by perceived stress.

Method: This cross-sectional study was conducted among 2871 Iranian participants (18-80 years), recruited through social media during the COVID-19 epidemic. The demographic and socio-economic information questionnaire, the Household Food Insecurity Access Scale (HFIAS), the COVID-19 fear scale (FCV-19S), Cohen's Perceived Stress Scale (PSS-14) and the Multidimensional Scale of Perceived Social Support (MSPSS) were used in data gathering. Descriptive and analytical analyses were done using SPSS 22.0, and Amos 22.0 was used for structural equation modeling (SEM).

Results: Food insecurity had significant positive direct and indirect (mediated by perceived stress) correlations with fear of COVID-19 (P < 0.05). It was also shown that perceived social support could relate negatively to fear of COVID-19 through the pathways of food security status or perceived stress (P < 0.05). Among women, the presence of a child under 5 had a significant direct association with fear of COVID-19 (P < 0.05).

Conclusion: Food insecurity was associated with more perceived fear of COVID-19 among the studied population. The crisis caused by COVID-19 highlights the need to increase social resilience through developing and implementing appropriate strategies to prevent and mitigate social costs (whether physical, psychological, or nutritional).

More than a year and a half ago, the first cases of COVID-19 were reported in Wuhan, China. The disease can lead to death by causing chronic dysfunction of the lungs, and it became a global pandemic due to high contagion rates (1). According to global statistics on the Worldometers website, more than 368 million and 6 million people (as of January 28, 2022) had been infected by COVID-19 in the world and in Iran, respectively (2). People's mental health was also severely affected by the pandemic for reasons such as fear of getting infected, socio-financial disruptions, lockdowns, and so on (3). Fear of COVID-19 is a mental health disorder (4), accompanied by different psychological consequences such as anxiety, depression (5), stress (6), and suicide in some severe cases (3,7,8). Several studies have examined the fear of COVID-19 among samples of the Iranian population. In the study conducted by Varasteh et al., fear of COVID-19 was recognized as one of the main reasons why nurses quit their jobs during the epidemic (9). In another study, conducted on patients with multiple sclerosis, a greater fear of COVID-19 score was associated with more symptoms of anxiety and depression (10). Pregnant women and their husbands were another group vulnerable to fear of COVID-19 in Ahorsu et al.'s study (11). The COVID-19 pandemic also threatens the food security status of millions of people all over the world due to its negative impacts on the global food system. The number of people with severe food insecurity was projected to nearly double by the end of 2020 due to the impact of COVID-19. Also, the number of malnourished children is expected to increase due to increased wasting and stunting (12).
Damage to the food value chain has also been reported during the epidemic in Iran (13). Food insecurity has negative effects on health and may lead to conditions such as obesity and non-communicable diseases (NCDs) (14), which are in turn associated with higher mortality and morbidity among COVID-19-infected people (15,16). It is also associated with other mental health conditions, including perceived stress (17). Perceived stress and fear of COVID-19 may lead to other serious mental health disorders, such as anxiety and depression (18). The purpose of the current study was to examine how food insecurity may be associated with fear of COVID-19 (a mental disorder which developed during the pandemic). Although studies have been conducted on the fear of COVID-19 in the areas of food packaging (19), fast food consumption (20), or food supply (21), the fear of COVID-19 scale has not usually been used. Since mental health and food security have been severely damaged during the epidemic, examining their relationship can provide policymakers with evidence on which strategies to adopt. This study was performed to examine this relationship, using structural equation modeling, with perceived stress as a mediator.

This nationwide web-based cross-sectional study was conducted on 2871 Iranian people (18-80 years) from all 31 provinces; the proportional geographical population distribution was taken into account. It ran from August to September 2020. The study inclusion criteria included: being an adult (18 years and older), living in Iran and being willing to participate in the study. Participants were excluded from the study if they were under 18 years of age.

Study procedure

The invitation to the questionnaire was a text that included the research objectives and inclusion criteria. It stated that participants would not be asked for identity information and could be excluded from the study whenever they felt uncomfortable answering questions. They were also told that their information was confidential and would only be analyzed by the researcher. The questionnaires were uploaded to a site with the address https://porsall.com/ and distributed via popular social media platforms (namely Instagram, WhatsApp Messenger, and Telegram Messenger). It has been shown that the use of social media in Iran has an increasing trend and includes a significant population (about 40%, based on the Digital 2020 website), and that it is a safe way to gather data during the COVID-19 epidemic. Three researchers were directly involved in data collection. Invitations were distributed on social networks through groups, channels and pages with different topics. If participants were willing to participate in the research, they entered the questionnaire page by clicking the link mentioned in the invitation. It took about ten minutes for each person to complete the questionnaire, and there was no time limit. Each participant could answer the questionnaire only once.

Study tools

Demographic and Socio-Economic Information

The demographic and socio-economic information of participants (including age, sex, household size, educational level, employment status, family residence status and household monthly income) was obtained via the online questionnaire. The participants were also asked about the presence of vulnerable people in the household. These included: pregnant women, children under 5, elderly people (over 65), and people with NCDs (such as cardiovascular disease, diabetes, cancer, endocrine disorders, etc.).
COVID-19 Fear Assessment

The COVID-19 fear scale (FCV-19S) is a 7-item questionnaire that assesses fear of COVID-19 among participants and is rated on a 5-point Likert scale (strongly disagree = 1 to strongly agree = 5). The total score is obtained by summing the scores of all individual items (score range 7-35); the higher the score, the more severe the fear. This tool has also been used in various studies around the world (23). The reliability and validity of the Persian version of this questionnaire were previously examined by Ahorsu et al.

Perceived Stress Assessment

Cohen's Perceived Stress Scale (PSS-14) is a self-report instrument used to assess perceived stress in a study population. This questionnaire contains 14 items that examine the person's thoughts and feelings during the past month and is scored on a 5-point Likert scale (Never = 0; Almost Never = 1; Sometimes = 2; Fairly Often = 3; Very Often = 4). Questions 4-7, 9-10 and 13 are scored in reverse. On this scale, the minimum perceived stress score is 0 and the maximum is 56; the higher the score, the higher the perceived stress (24). The internal reliability of this questionnaire was previously measured by Qazvini et al. and was found to be acceptable (Cronbach's alpha = 0.73) (25).

Perceived Social Support Assessment

In the current study, the Multidimensional Scale of Perceived Social Support (MSPSS) was used to assess perceived social support. This 12-item scale consists of three subscales, examining perceived social support from three sources (family, friends, and others), and is scored on a 5-point Likert scale (strongly disagree = 1 to strongly agree = 5). The total score is obtained by summing all the individual item scores; the minimum and maximum perceived social support scores are 12 and 60, respectively. A Cronbach's alpha of 0.93 was obtained for the Persian MSPSS by Salimi et al. (26).

Statistical Analysis

Results

The mean age of participants was 32.99 ± 8.31 years, and many of them were employees (24.7%), with an associate or bachelor's degree (47.1%). The detailed characteristics of the participants are presented in Table 1. Since the frequency of women in the raw data was much higher than that of men (82.8% vs. 17.2%), data weighting was applied by sex, according to the latest census of the Statistics Center of Iran (51% men, 49% women). In the current study, socio-economic variables (including household residence status, monthly income, participant's educational level, and employment status) were analyzed by the Principal Component Analysis (PCA) method [Kaiser-Meyer-Olkin test (KMO) > 0.5 and Bartlett test < 0.001]. A socio-economic status variable was developed and split into three levels. Based on the results, more than half of the participants (55.2%) were from food-secure households, and about 6.5% were classified as severely food-insecure. For subsequent analyses, the third and fourth groups (food-insecure with moderate and severe hunger) were merged. The Pearson correlation showed a significant association between fear of COVID-19 and quantitative variables, including age (r = 0.05, P < 0.05), total perceived stress score (r = 0.37, P < 0.001), and total social support score (r = -0.05, P < 0.05). However, household size was not an important predictor of fear of COVID-19 (r = 0.01, P > 0.05).
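As a purely illustrative sketch of how the questionnaire totals described above are formed and then correlated (hypothetical item responses only; this is not the authors' analysis code):

    from scipy.stats import pearsonr

    def score_fcv19s(items):
        # FCV-19S: 7 items rated 1-5; the total is the plain sum (range 7-35).
        assert len(items) == 7
        return sum(items)

    def score_pss14(items):
        # PSS-14: 14 items rated 0-4; items 4-7, 9-10 and 13 (1-based) are reverse scored.
        assert len(items) == 14
        reverse = {4, 5, 6, 7, 9, 10, 13}
        return sum((4 - v) if (i + 1) in reverse else v for i, v in enumerate(items))

    # Hypothetical responses for three participants (placeholders, not study data).
    fcv = [score_fcv19s(r) for r in ([3, 2, 4, 3, 2, 3, 4],
                                     [1, 1, 2, 1, 2, 1, 1],
                                     [5, 4, 4, 5, 3, 4, 4])]
    pss = [score_pss14(r) for r in ([2] * 14, [1] * 14, [3] * 14)]

    r, p = pearsonr(pss, fcv)   # correlation between perceived stress and fear totals
    print(f"r = {r:.2f}, P = {p:.3f}")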
The analysis by t-test showed that the mean score of fear of COVID-19 was significantly higher among women (P < 0.001), in households with a child under 5, and in patients with NCDs (P < 0.05) (Table 2). The association between the fear of COVID-19 score and household food security status indicated that the higher the degree of food insecurity, the higher the fear of COVID-19 score (P < 0.001). However, the ANOVA test showed no significant association between fear of COVID-19 and socio-economic status (P > 0.05) (Table 2). In the next step, a linear regression model was used to predict fear of COVID-19, by including the significant variables (P < 0.05) identified by the Pearson correlation, t-test and ANOVA test. Results showed that age, sex, perceived social support, perceived stress, food security status and the presence of a child under 5 were significant predictors of fear of COVID-19 (P < 0.05) (Table 3). Finally, the structural equation model (SEM) of fear of COVID-19 was constructed using the significant variables. Figures 1 and 2 show the pathways by group (sex), for men and women, respectively. The proposed model (by sex) includes three exogenous variables (age, perceived social support and the presence of a child under 5), hierarchical mediator variables (food security status and perceived stress), and an endogenous variable (fear of COVID-19). The results of the model indicated that all pathways are significant in both sexes, except for having a child under 5, which was significant only in the female group (Table 4). There was also a significant sex difference in the pathway between fear of COVID-19 and food security status (P < 0.05), as it was seen to be stronger among men. A comparison of the between-pathway coefficients by sex also showed a greater inverse association between age and fear of COVID-19 among men. The standardized total, direct and indirect effects are provided in Table 4, and confirm the indirect effect of food insecurity on fear of COVID-19 via perceived stress. Acceptable fit indices were obtained for the model, which are presented in Table 4 (end of table).

Discussion

The current study was conducted among an Iranian online population, in order to determine the association of food insecurity with fear of COVID-19, mediated by perceived stress, in structural equation modeling. The results indicated the presence of both direct and indirect associations between food insecurity and fear of COVID-19. It was also shown that perceived social support could be associated with fear of COVID-19, through food insecurity or perceived stress. Investigating the relationship between food insecurity and mental health disorders is an ongoing topic of interest to researchers. It has been shown that depressive symptoms, anxiety and stress are higher among food-insecure households (27,28). Food-insecure households usually consume low-quality diets, which, in turn, are associated with poor mental health (29). Worries about family food sources can also be stressful (30). In the current study, food insecurity was significantly associated with higher perceived stress and fear of COVID-19. The COVID-19 pandemic has created conditions that put additional stress on people (31). Taylor et al. showed that concerns about meeting household needs, due to the social and economic disruption following the COVID-19 outbreak, can be associated with perceived stress (32). In another study conducted by Rehman et al.
among Indians during the COVID-19 pandemic, it was shown that an insufficient food supply during quarantine was associated with greater mental distress, including perceived stress (33). The COVID-19 pandemic has left many people with job losses, reduced income (34), and increased food prices (12), raising concerns about the growing prevalence of food insecurity. It has been shown that these economic problems in households are associated with insufficient food intake, both in terms of amount and quality. However, families with persistent income or enough savings were seen to be less affected during the quarantine period (35). Food insecurity also makes people more vulnerable, both physically (36) and mentally (37). In a study conducted by Kelly et al., the risk of death from Ebola was 18.3 times higher among food-insecure patients (36). A sufficient and nutritious diet plays an important role in stimulating an appropriate immune system response to COVID-19 (38). Stress reduction, a nutritious diet, adequate levels of vitamin D in the blood, and adequate physical activity are other important factors associated with better immune system function (39). Long-term fear and stress due to COVID-19 can also alter the body's neuro-endocrine-immune system, which may lead to an increase in other diseases (40). Thus, in order to reduce the effects of food insecurity on various aspects of health, policymakers and planners need to develop both short- and long-term strategies to increase the community's resilience to such shocks. Inevitably, this requires cooperation and partnership at the global, national, and local levels. According to the results of the current study, perceived social support related to fear of COVID-19 via different pathways, both direct and indirect. The indirect effect of social support was such that, as the score increased, perceived stress decreased. This finding is consistent with a study conducted by Ye et al. among college students in China (41). It has been shown that social support is also an important factor that is negatively associated with perceived stress and anxiety among healthcare workers during the pandemic (42,43). Social support has played an influential role in mental health and wellbeing during the pandemic, but unfortunately, it has suffered due to social distancing practices (44,45). Access to technology (and its functions in maintaining social relationships), however, may be able to mitigate feelings of loneliness and mental health problems (45). Another indirect pathway that links perceived social support to fear of COVID-19 is through its positive impact on food security. This finding is consistent with studies by Ashe (46) and Nascimento Dos Santos (47). People with higher social support are more likely to be wealthy, and less likely to experience food insecurity (48).

Limitations

The study had some limitations, including the lower participation of older people (over 65). In addition, the convenience sampling used in data gathering may introduce some bias and limit generalizability (5). In order to address these limitations, this study utilized a relatively high sample size, a proportional approach to geographical distribution, and data weighting.

Conclusion

In addition to threatening people's physical health, the pandemic threatens food security and mental health. Based on the results of the present study, food insecurity has significant direct and indirect (mediated by perceived stress) associations with fear of COVID-19.
The crisis caused by COVID-19 highlights the need to increase social resilience through developing and implementing appropriate strategies to prevent and mitigate social costs, whether physical, psychological, or nutritional. This study showed that increasing food security resilience can play a key role in achieving these goals. Further studies could examine the trend of psychological disorders, such as fear of COVID-19, in socio-economically vulnerable groups through nutritional interventions, including providing food baskets or financial assistance.
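The direct/indirect decomposition reported above (food insecurity → perceived stress → fear of COVID-19) can be illustrated with a minimal regression-based mediation sketch. This is an assumed, simplified stand-in for the authors' Amos SEM; the variable names and the simulated data are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500

    # Simulated placeholder data: food insecurity raises perceived stress,
    # and both raise fear of COVID-19 (hypothetical effect sizes).
    food_insecurity = rng.integers(0, 4, n)                      # 0 = secure ... 3 = severe
    stress = 20 + 2.0 * food_insecurity + rng.normal(0, 5, n)    # PSS-like score
    fear = 14 + 1.0 * food_insecurity + 0.3 * stress + rng.normal(0, 3, n)

    df = pd.DataFrame({"fi": food_insecurity, "stress": stress, "fear": fear})

    # Mediator model and outcome model (Baron-and-Kenny-style sketch).
    med = smf.ols("stress ~ fi", data=df).fit()
    out = smf.ols("fear ~ fi + stress", data=df).fit()

    a, b, c_prime = med.params["fi"], out.params["stress"], out.params["fi"]
    print(f"direct effect  (c'): {c_prime:.2f}")
    print(f"indirect effect (a*b): {a * b:.2f}")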
Continuous Levels-of-Detail and Visual Abstraction for Seamless Molecular Visualization

Molecular visualization is often challenged with rendering of large molecular structures in real time. We introduce a novel approach that enables us to show even large protein complexes. Our method is based on the level-of-detail concept, where we exploit three different abstractions combined in one visualization. Firstly, the molecular surface abstraction exploits three different surfaces, the solvent-excluded surface (SES), Gaussian kernels and van der Waals spheres, combined into one surface by linear interpolation. Secondly, we introduce three shading abstraction levels and a method for creating seamless transitions between these representations. The SES representation with full shading and added contours stands in focus, while on the other side a sphere representation of a cluster of atoms with constant shading and without contours provides the context. Thirdly, we propose a hierarchical abstraction based on a set of clusters formed on molecular atoms. All three abstraction models are driven by one importance function classifying the scene into the near-, mid- and far-field. Moreover, we introduce a methodology to render the entire molecule directly using the A-buffer technique, which further improves the performance. The rendering performance is evaluated on a series of molecules of varying atom counts.

Introduction

Molecular visualization today is challenged by molecular dynamics (MD) simulations with the requirement of displaying huge amounts of atoms at interactive frame rates for the visual analysis of binding sites. Simulated data sets no longer consist of only one moderately sized macromolecule, but instead of molecular systems representing complex interactions, for example, a phospholipid vesicle membrane together with proteins anchored in the membrane (Figure 1, right). One can easily obtain data sets where tens or hundreds of thousands of atoms are animated throughout a series of 1000 time-steps.
To analyse a binding site, a special visual representation known as the solvent-excluded surface (SES) [Ric77] is most popular among molecular biologists. This representation directly conveys information about whether a solvent of a certain size is able to reach a particular binding site on the surface of the macromolecule. While this representation is valued by the molecular biology domain, it is also expensive to compute. To achieve interactivity with the scene, biologists sacrifice the information provided by the SES and investigate molecules with blobby Gaussian kernel representations [Bli82], or with a simple space-filling approach. The latter, for example, can be rendered very quickly by impostor-based sphere splatting, but it does not answer precisely whether a solvent can bind at a specific location on a macromolecule. Another research question is how to abstract the molecular surface further, beyond representing each single atom. An example of where such an abstraction would be essential is a Powers-of-Ten zooming interactive environment where the user can zoom in onto a single atom, or zoom out to see the cellular-level structure of the same entity. At the cellular level it is completely out of the question to render each atom of a molecule, and a hierarchical abstraction is needed. In the search for an appropriate solution we turn to the visual crafts for inspiration, which have already been successfully applied to molecular visualization [vdZLBI11]. Illustrators sometimes take a different approach to visually abstracting molecules from details. Instead of modifying the molecular representation into an entirely different molecular abstraction (such as transitioning between a space-filling representation and ribbons showing β-sheets and α-helices), they effectively use the perceptual principle of object constancy to depict structures that are too far away to recognize in detail through a simplified representation of that object. In this way the illustrators' manual creation process is sped up, while at the same time resulting in a more convenient visualization for the viewer, whose cognitive processing related to object constancy autocompletes the simplified visual representation with an object instance. A beautiful utilization of this approach can be seen in Winsor McCay's artwork 'When Black Death Rode' shown in Figure 2, which was exemplified by the professional scientific illustrator Bill Andrews [And06].
To address the molecular visualization challenge delineated above, we propose to employ a seamless level-of-detail rendering scheme, in the same way as illustrators approach the rendering of scenes containing multiple instances of the same object, taking advantage of the object constancy perceptual principle. As a general rule, closest to the viewer we aim at providing a maximum of relevant information related to the structure and binding sites. We also utilize the level-of-detail scheme to guide the viewer to relevant information in the spirit of focus + context visualization techniques. The most detailed molecular surface representation is the SES representation, where every atom (except hydrogens) is rendered to form the molecular surface. Farther away from the viewer, we smoothly change the visual representation to an approximation of the SES through Gaussian kernels. Structures farthest away from the viewer are represented by simple sphere splatting. When the individual atoms are no longer discernible, we employ hierarchical clusterings, where the atoms at a particular spatial location are grouped into super-atoms, which enables lower memory requirements and faster rendering performance. The use of these three levels of detail is motivated by the cognitive zones of the viewer: the focus, focus-relevant and context zones (Figure 1). Generalizing the concept leads us to the definition of a 3-D importance function that can be based on the distance from a molecular feature, not only on the distance from the camera.

Nevertheless, the question that remains unanswered is how we can preserve smoothness in detail-level transitions. Smoothness in transitions is an important requirement, as an abrupt change in level-of-detail becomes a salient artefact that involuntarily attracts the attention of the biologist. To tackle this problem, we propose to utilize an implicit surface representation, where we can seamlessly blend from one surface representation to another. The seamless, illustration-inspired level-of-detail scheme for molecular systems based on implicit surfaces is the main contribution of this paper. Additionally, the scheme fulfills the focus-and-context model, where both levels are blended via the seamless transformations. While illustrative representations have been investigated in the context of molecular visualization earlier, they have never been investigated within the context of a level-of-detail scheme.

The contributions of this paper can be summarized as follows: We propose a novel visualization approach that increases the overall rendering performance by utilizing a level-of-detail concept applied via hierarchical abstraction, surface abstraction and shading abstraction. We build upon our earlier work [PRV13] on seamless molecular abstraction and extend it with respect to several aspects. Most notably, we present a method for hierarchical abstraction, which goes beyond the level of atomic detail. For this purpose we hierarchically cluster the entire molecular structure at various detail levels.

Related Work

Our approach builds on several aspects of previous work on molecular visualization, in particular with respect to choosing appropriate visual representations, methods for interactive rendering and level-of-detail techniques.

Visual representations

Tarini et al.
present a real-time algorithm for visualizing molecules with the goal of improving depth perception [TCM06]. By combining ambient occlusion and edge-cueing together with graphics processing unit (GPU) data structures, they achieve interactive frame rates for molecules of up to the order of 10^6 atoms. Based on this representation, the authors report an improved understanding of the molecule structure. While we exploit different representations mainly in order to allow for efficient rendering, Lueks et al. combine different representations of a molecule in a single view in order to support understanding of different abstraction levels [LVvdZ*11]. By allowing the user to control the seamless transition between different molecule representations, these can be viewed in a combined manner and thus reveal information at different degrees of structural abstraction. The abstractions which are combined are based on previous work presented by van der Zwan et al. [vdZLBI11]. The authors classify molecular representations based on their illustrativeness, structural abstraction and spatial perception. By giving the user control over these three parameters, s/he can change the depiction of a molecule. Thus the possible representations largely resemble known molecular representations widely used in text books. The illustrativeness presented by van der Zwan et al. is achieved by combining different rendering styles. Similar to the work done by Tarini et al. [TCM06], they also experiment with ambient occlusion techniques. In contrast, Weber presents a cartoon-style rendering algorithm for protein molecules, which exploits GPU shaders to generate interactive pen-and-ink effects [Web09]. In the work of Cipriano and Gleicher [CG07], spatio-physico-chemical properties are used to generate a simplified representation that conveys the overall shape. This approach, like many of the presented illustration models, goes back to the original work done by David Goodsell [Goo09], who has developed a simplistic but expressive style for representing molecules through space filling. His approach combines ambient occlusion with cel-shading and silhouettes in order to illustrate residues. This illustration approach has, for instance, recently been adopted by Falk et al. [FKE12], and it also inspired the creation of the renderings shown in this paper.

Interactive rendering

Besides recent efforts dealing with the visual representation of molecules, a lot of work has been dedicated to increasing the overall rendering performance. For instance, Sharma et al. present an octree-based approach, which allows billions of atoms to be rendered interactively by exploiting view-frustum culling [SKNV04]. A combination of probabilistic and depth-based occlusion algorithms is used during rendering to determine the visible atoms. More recently, Grottel et al. have investigated different data simplification strategies which also incorporate culling [GRDE10]. In particular, they take into account data quantization, video memory-based caching and a two-level occlusion culling strategy. Lampe et al.
focus on the visualization of slow dynamics for large protein assemblies [DVRH07]. To represent these large-scale dynamic models, they also use a hierarchical approach where the topmost layer represents residues as the high-level building blocks of a molecule. For each residue only orientation information is sent to the GPU, where the generation of the individual atoms is performed on-the-fly. Since the SES represents the most advanced representation of molecular surfaces, which allows molecular interactions and evolution to be studied, some effort has also been dedicated to improving the rendering of these fairly complex structures. Parulek and Viola propose an SES representation based on implicit surfaces [PV12]. By exploiting constructive solid geometry operations on these surfaces, they obtain implicit functions which locally describe a molecule's surface. As their ray-casting-based rendering of this representation requires no pre-processing, they are able to vary SES parameters interactively. Frey et al. focus on MD simulation data [FSG*11]. In order to speed up rendering of these data, they reduce the amount of particles by focusing on those considered relevant for the visualization. In contrast to our technique, this resembles a data reduction approach instead of a data simplification approach.

Level-of-detail techniques

Level-of-detail approaches have a long history in computer graphics [LWC*02]. Most techniques for rendering molecular data use surface simplification methods for generating different LODs [KOK06]. Lee et al. [LPK06] visualize large-scale molecular models using an adaptive level of detail (LOD) technique based on a bounding tree. Fraedrich et al. [FAW10] sample only the visible particles in the scene into perspective non-uniform grids in view space. These optimizations result in low computation times even for large data sets. They render the iso-surfaces using GPU-based ray-casting. Krone et al. [KSES12] use a view-independent volumetric density map representation and generate a surface representation for rendering using a GPU implementation of the Marching Cubes algorithm, similar to the work of Dias et al. [DG11]. The work of Bajaj et al. [BDST04] incorporates a biochemically sensitive level-of-detail hierarchy into the molecular representation and uses an image-based rendering approach. Although presented in the context of visual data mining in document collections, the H-BLOB method by Sprenger et al. [SBG00] is of relevance as it uses a hierarchical clustering and visualization approach based on implicit surfaces. Our approach maintains an implicit representation throughout the pipeline and uses it for rendering directly. We use a hierarchical data representation scheme which forms, together with the visual representation and the surface representation, a 3-D abstraction space. This provides us with fine-grained control over the different representations and lets us seamlessly adjust the level of abstraction during interactive visualization.
Methodology

Motivated by the need for visualization of large molecular systems, we propose a seamless visual abstraction scheme which provides continuous transitions from the computationally expensive, but most relevant, visualization technique to the fastest representation, which is suitable for representing the context. The key component of our approach which enables this seamless transition is an implicit surface representation on which all the visual abstractions are based. We define three different levels of visual abstraction, with overlapping transition zones: a near-field, a mid-field and a far-field. The field boundaries are defined by an importance function, t(p). Besides the distance from the viewer, used as our primary example, the importance function can be thought of as a distance measure from an interesting molecular feature (e.g. a cavity) or from a region of interest interactively specified by the user (e.g. the mouse cursor location) [PRV13]. Our LOD visual abstraction consists of three distinct categories: hierarchical abstraction, surface abstraction and shading abstraction.

The first level of abstraction is concerned with whether the molecule is represented directly by atoms, or whether the atoms are grouped into clusters, that is, superatoms, which are then represented by a ball of a larger radius covering the volume of the grouped atoms (Figure 3, right). Our previous work [PRV13] included only the atomic level. Section 4.3 discusses the generation of this hierarchy.

The second category is the visual abstraction of surfaces. The most domain-relevant visual representation is the SES. Based on this representation, molecular biologists can decide whether a specific binding site is accessible to a solvent or not. The intermediate visual abstraction level is based on a Gaussian kernel representation that approximates the SES and is often used in the analysis of molecular surfaces despite its lower expressive value with respect to the binding sites [KFR11]. This visual abstraction is a compromise between rendering performance and expressiveness. The last level of the proposed visual abstraction scheme is a space-filling approach where individual atoms are represented by spheres. This is the fastest representation to render; however, its main usefulness is in providing a more gross structural context rather than providing useful information about local molecular detail (Figure 3).

The third category is concerned with the visual abstraction of shading. Together with geometry, we abstract the details in shading in the following way. For conveying shape detail, we employ a local diffuse shading model. For conveying relative depth, ambient occlusion is used. Ordinal depth cues are communicated with contour rendering, and the figure-ground ambiguity is resolved by silhouette rendering. This scheme is motivated by the workflow that David Goodsell, an acknowledged molecular scientist and illustrator, employs in molecular illustrations [Goo09]. We additionally provide a detail level with local shading. While Goodsell's illustrations have an equal amount of visual cues for the entire molecular system, we have a specific distribution of visual cues for each level of detail. The figure-ground separation, which uses the silhouette, and ambient occlusion as a relative depth cue, are used for all abstraction levels. The near- and mid-field levels additionally convey structural occlusion with contour rendering as an ordinal depth cue. The near-field conveys the shape and therefore uses diffuse shading, while the other two levels are represented with constant shading, abstracting from atomic details. An example incorporating all abstraction levels is shown in Figure 3. The overall molecular rendering is performed by means of a ray-casting method where each ray is incrementally processed, thereby allowing us to evaluate the corresponding molecular and shading models.
The third category is concerned with the visual abstraction of shading.Together with geometry, we abstract the details in shading in the following way.For conveying shape detail, we employ a local diffuse shading model.For conveying relative depth, ambient occlusion is used.Ordinal depth cues are communicated with contour rendering and the figure-ground ambiguity is resolved by silhouette rendering.This scheme is motivated by the workflow that David Goodsell, an acknowledged molecular scientist and illustrator, employs in molecular illustrations [Goo09].We additionally provide a detail level with local shading.While Goodsell's illustrations have equal amount of visual cues for the entire molecular system, we have a specific distribution of visual cues for each level of detail.The figure-ground separation, which uses silhouette and ambient occlusion as a relative depth cue, is used for all abstraction levels.The near-and mid-field levels additionally convey structural occlusion with contour rendering as an ordinal depth cue.The near-field conveys the shape and therefore uses diffuse shading, while the other two levels are represented with a constant shading, abstracting from atomic details.An example incorporating all abstraction levels is shown in Figure 3.The overall molecular rendering is performed by means of a ray-casting method where each ray is incrementally c 2014 The Authors Computer Graphics Forum published by John Wiley & Sons Ltd. processed, thereby allowing us to evaluate corresponding molecular and shading models. Molecular Visual Abstraction The main reason for choosing an implicit representation is that it enables us to easily form a smooth transition, or a blend, between different types of surfaces.For instance, when two implicit functions f and g overlap in space, a simple way to generate a seamless transition between them is via linear interpolation: h = (1 − t)f + tg.This preserves the continuity of even two different representations, which is a necessary property in order to achieve a seamless transition between different molecular models.This property would be very hard to achieve with any boundary representation, especially on a real-time basis.We propose a set of abstraction levels which are aligned with visual processing, but our framework can also easily handle additional levels.In our work the interpolation parameter t is interpreted as an importance value t = t(p) that varies with the position p in the scene.In our demonstrations, we use the distance from the camera as the importance function, t(p) = ||eye − p||.We specify borders for all three areas (near-, mid-and far-field) using t(p) ≤ t 0 ≡ near-field, t 0 < t(p) ≤ t 1 ≡ mid-field and t(p) > t 1 ≡ far-field.The length of the transition area is controlled by t d , which defines the blending interval between two distinct molecular surface representations.Thus, when a point p lies only in one area we can evaluate a single implicit function, while for the overlapping areas we need to evaluate both functions and combine their result by linear interpolation. Surface abstraction We assume a set of atoms defined as C = {(c 1 , r 1 ), . . ., (c n , r n )} and introduce the three implicit functions each defining the molecular model for one of the three intervals. 
Surface abstraction

We assume a set of atoms defined as C = {(c1, r1), ..., (cn, rn)} and introduce three implicit functions, each defining the molecular model for one of the three intervals.

SES representation

To represent the SES by means of implicits, we take as a basis the approach proposed by Parulek and Viola [PV12]. The method for evaluating the implicit function has cubic complexity, O(n^3). The final implicit function evaluates an exact Euclidean distance to the surface, although only up to the distance R from the iso-surface of the SES representation. One of the advantages of the proposed method is the flexibility of varying the parameters during rendering, for example, the atoms participating in the SES representation or the solvent radius R. This is the main reason why this method is incorporated into our pipeline; it enables us to vary the length of the near-field easily. This representation is the most computationally expensive one, and it makes sense to apply it only when studying inter-atomic cavities in detail. Therefore, although in principle applicable to all hierarchical abstraction levels, it is only meaningful to utilize it in the near-field focal region.

Gaussian kernel representation

For the second, mid-field level of surface abstraction, we utilize the Gaussian model, which is widely used as an approximation of the SES model. It smoothly blends the density field generated by the atoms and also forms a seamless transition between the SES and sphere models. The utilization of the Gaussian kernel for implicit modelling was first used by Blinn [Bli82] to describe the electron density function of atoms by summing the contribution from each atom as follows: F_gauss(p) = T − Σ_i b_i exp(−a_i d_i²), where d_i represents the distance from p to the centre of atom c_i, b_i represents the blobbiness, a_i describes the atom radius and T defines the electron density threshold. We adopted Blinn's model and specified the parameters a_i and b_i as introduced by Grant and Pickup [GP95]: b_i = R², a_i = −ln(r_i² / 2b_i) and T = 0.5.

van der Waals sphere representation

Let us define a set of implicit functions {f_1, f_2, ..., f_n}, where each f_i(p) = r_i − ||p − c_i|| represents an atom c_i with the corresponding van der Waals radius r_i. The implicit function defining the union of spheres can be written as F_spheres(p) = max{f_1(p), f_2(p), ..., f_n(p)}, where the maximum operator represents the union term [Ric72]. In order to render the iso-surface of F_spheres alone, we actually do not need to evaluate the intersection of the ray and the function by a root-finding method. Instead, rendering can efficiently be performed by ray-casting the spheres directly and storing the closest depth values to the camera in a depth buffer. Therefore, even if the function evaluation still has O(n) complexity, the entire rendering pipeline can be optimized by drawing all the spheres in parallel, while atomic operations update the depth buffer. Moreover, the rendering performance can be increased by utilizing the sphere billboard technique [DVRH07]. To form a smooth blend between the van der Waals spheres and the Gaussian kernel representation we only need to evaluate F_spheres in the transition area t(p) ∈ [t1 − td, t1]. The sphere billboarding technique is employed when t(p) > t1.
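A minimal sketch of the sphere-union and Gaussian-kernel fields described above (illustrative only; the parameter choices follow the formulas quoted above, the helper names are assumptions, and the coordinates are placeholders):

    import numpy as np

    def f_spheres(p, centers, radii):
        # Union of van der Waals spheres: F_spheres(p) = max_i (r_i - ||p - c_i||).
        d = np.linalg.norm(centers - p, axis=1)
        return np.max(radii - d)

    def f_gauss(p, centers, radii, R=1.4, T=0.5):
        # Blinn-style Gaussian density field: F_gauss(p) = T - sum_i b_i * exp(-a_i * d_i^2),
        # with b_i = R^2 and a_i = -ln(r_i^2 / (2 b_i)) as quoted above.
        b = R ** 2
        a = -np.log(radii ** 2 / (2.0 * b))
        d2 = np.sum((centers - p) ** 2, axis=1)
        return T - np.sum(b * np.exp(-a * d2))

    # Tiny two-atom example.
    centers = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
    radii = np.array([1.7, 1.5])
    p = np.array([0.5, 0.2, 0.0])
    print(f_spheres(p, centers, radii), f_gauss(p, centers, radii))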
We utilize linear interpolation between the representations inside the transition zones, while the remaining zones only require a single representation to be evaluated. Our approach allows all three level-of-detail areas and their lengths to be modified in real time. We choose linear interpolation as it represents a simple, intuitive and efficient solution. More sophisticated approaches, for example, variational methods [TO99] or extended space mapping [SP98], provide several parameters to fine-tune the shape of the final interpolation, but are relatively expensive to evaluate and not yet suitable for real-time rendering applications.

Shading abstraction

Our shading model employs a set of visual abstractions that selectively enhance shape and depth information. The shading scheme is inspired by David Goodsell's artwork [Goo09]. We use his system of visual cues, that is, constant shading, contour and depth enhancement, which he employs in molecular illustrations, although applied only to our sphere representation. We apply these visual cues in a focus-and-context manner, where the focus is represented by the interval t(p) < t0. In the remainder of this section, we discuss the application of the aforementioned visual cues in all three level-of-detail areas. In the near-field, t(p) ∈ [0, t0], we employ a local diffuse shading model (DM) in combination with the constant shading model (CM), which is applied in accordance with the t(p) value. This enables us to create much smoother transitions to CM. In the transition zone t(p) ∈ (t0 − td, t0], we interpolate the shading model such that the DM continuously disappears towards the end of the transition area.

In the mid-field and far-field zones, t(p) ∈ [t0, ∞), we employ the constant shading model. The reason for applying the CM in the mid-field is that the Gaussian model conveys lower accuracy for the solvent shape than the SES. Thus, by using CM we are able to visually decrease the surface discrepancies between the two models (Figure 1). Besides the shading, we incorporate silhouettes and contours into our visualization. We employ the approach of Kindlmann et al. [KWTM03] to generate contours of uniform thickness, using the fast view-dependent curvature approximation of Krüger et al. [KSW06]. Furthermore, we preserve the contours for the near- and mid-field but neglect them in the far-field. The reason for discarding the contours in the context area defined by spheres is that they do not fully emphasize the inter-spherical space; they merely enhance the spherical shape. In the second transition area, we scale the contour predicate to make the contours disappear continuously.

The silhouettes are generated with respect to the background of the rendered molecule; that is, all pixels that do not belong to the molecule are considered background. Afterwards, in image space, we perform edge detection on the binary texture where 1 represents molecule and 0 means background. The silhouette is preserved for all three zones. This was chosen to imitate Goodsell's approach and, additionally, to enhance the overall shape of the molecule. As the last step in our rendering pipeline we add screen space ambient occlusion based on the method proposed by Luft et al. [LCD06]. The ambient occlusion is, similarly to the silhouettes, applied to all three zones.
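As an illustrative sketch of how the per-zone shading cues described above could be weighted (an assumed simplification, not the authors' shader code; the linear fade mirrors the transition behaviour stated in the text):

    def shading_weights(t, t0, t1, td):
        # Returns blending weights for the diffuse model (DM), contours and
        # constant shading; silhouettes and ambient occlusion apply everywhere.
        def fade(x, lo, hi):              # 1 below lo, 0 above hi, linear in between
            if x <= lo:
                return 1.0
            if x >= hi:
                return 0.0
            return 1.0 - (x - lo) / (hi - lo)

        diffuse = fade(t, t0 - td, t0)    # DM fades out across the near/mid transition
        contour = fade(t, t1 - td, t1)    # contours fade out across the mid/far transition
        constant = 1.0 - diffuse          # CM takes over as DM disappears
        return {"diffuse": diffuse, "constant": constant, "contour": contour}

    print(shading_weights(t=3.5, t0=4.0, t1=8.0, td=1.0))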
Hierarchical abstraction

The visual representations discussed so far are based on access to the original resolution of the whole data set, represented by individual atoms. Unfortunately, this limits the visualization to data sets that can fit and be streamed into memory in a timely manner. These representations are necessary when a structure is explored in detail, that is, by being able to go down to the atom level. While this is often desirable for the structures directly in focus, or near the focus field, structures far outside the focus do not need to convey this detailed information. Still, it is important that the overall structure of the molecule is conveyed and that large-scale features are preserved. Moreover, since state-of-the-art molecular visualization techniques [KBE09, LBPH10, PV12] exploit almost instant surface generation from the initial set of atoms prior to rendering the surface, it is necessary to enhance the rendering of large molecular and even cellular scenes using a new data simplification scheme as well.

To achieve this new level of abstraction, we again drew inspiration from David Goodsell's representations. By condensing the visualized information to the most essential visual elements, he is able to communicate the important features without introducing additional clutter. As the dominant visual elements in his representations are silhouettes and surfaces with a flat appearance, we have designed our hierarchical abstraction such that we can reduce a molecule to these elements. One alternative to achieve such an abstraction would be to use primary or secondary structure analysis. However, this would not generalize to other kinds of molecules, for example, lipids. Therefore, we propose to exploit location-based clustering, which enables support for a wider spectrum of molecules while generating a hierarchy of nested surface representations. By using a location-based clustering, these nested surface representations have a high degree of conservation with respect to the outline of the molecule (Figure 4). In the following, the initial set of atoms, C = {(c1, r1), ..., (cn, rn)}, becomes a set of spherical elements. Such an element can be either an atom or a cluster, described again by a centre c_i and a radius r_i. More importantly, the function evaluation procedure is the same for all three surface representations. While the presented approach is generalizable, it can also be integrated with the implicit surface abstractions in order to create a seamless visual transition for the user.

In order to form the hierarchical representation of the particles, we use a bottom-up approach. The hierarchy generation process therefore starts with performing a spatial clustering on the original atom data. After the clustering is complete, each formed cluster is represented by a sphere with a radius r, which bounds all the particles within the cluster, and the cluster centre c, which is the centre of gravity computed from the cluster members. The next level of detail is created using the clusters from the previous level of detail as input and raising the error threshold by a factor of two. This process continues until a maximum number of levels of detail has been created or only a single cluster remains. Each particle/cluster therefore has a single parent, but can have multiple children, as illustrated in Figure 5.

Figure 5 caption: A hierarchy based on spatial clustering is created using a bottom-up approach. The hierarchy is combined with the surface and shading abstractions using a seamless transition model.
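A minimal sketch of such a bottom-up hierarchy build (illustrative only; here agglomerative clustering from SciPy stands in for the paper's clustering step, and the doubling distance threshold is a placeholder parameter):

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    def build_levels(centers, radii, base_threshold, max_levels=5):
        # Bottom-up hierarchy: each level clusters the spheres of the previous
        # level; the distance threshold is doubled from level to level.
        levels = [(centers, radii)]
        threshold = base_threshold
        for _ in range(max_levels):
            c, r = levels[-1]
            if len(c) <= 1:
                break
            labels = fcluster(linkage(c, method="average"), t=threshold, criterion="distance")
            new_c, new_r = [], []
            for lab in np.unique(labels):
                members = labels == lab
                centre = c[members].mean(axis=0)                 # centre of gravity
                # Bounding radius: farthest member centre plus that member's radius.
                new_r.append(float(np.max(np.linalg.norm(c[members] - centre, axis=1) + r[members])))
                new_c.append(centre)
            levels.append((np.array(new_c), np.array(new_r)))
            threshold *= 2.0
        return levels

    # Toy input: random atom positions with van der Waals-like radii (placeholders).
    rng = np.random.default_rng(1)
    atoms = rng.uniform(0, 20, size=(200, 3))
    vdw = np.full(200, 1.6)
    for i, (c, r) in enumerate(build_levels(atoms, vdw, base_threshold=2.0)):
        print(f"level {i}: {len(c)} spheres")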
Several clustering methods have been studied, where the computational complexity was considered crucial. We analysed the following clustering algorithms: DBSCAN, k-means, hierarchical clustering and the Affinity Propagation (AP) technique [Llo82, FD07, Mül13]. As a notable result, we found that applying a density-based clustering scheme does not perform well on molecular data sets. The main reason is that molecular objects inherently lack any significant variation in the atom density distribution; therefore, applying DBSCAN often produces only a single cluster for the entire molecule. Moreover, by testing various molecules, it became clear that qualitatively the AP algorithm performed best for the tested data sets. The AP algorithm, presented by Frey and Dueck [FD07], formed clusters more uniformly and with a lower error bound than the other clustering algorithms. However, the biggest drawback of the AP algorithm is its computational complexity, O(n²). Indeed, AP is by far the slowest algorithm in our test group. Even though its cluster coverage between neighbouring levels is far better than that of the remaining techniques, when the atom count exceeds 10 K, creating even a single cluster level already takes tens of minutes. Similarly, k-means also provides a computationally expensive solution, which prohibits its application to large molecules. On the other hand, the fast hierarchical clustering method proposed by Müllner [Mül13] offers a very fast solution with high-quality cluster coverage at the same time. A comparison of the three clustering algorithms is depicted in Figure 6. Generation of five hierarchical levels takes 30 923 ms for AP, 23 508 ms for k-means and 676 ms for hierarchical clustering.

Rendering and Performance Analysis

Our rendering pipeline consists of several steps (Figure 7). In the first one, we traverse the cluster hierarchy in a top-down manner to retrieve all the clusters/atoms that are about to be used for the molecular representation and visualization. Starting from the highest level, we evaluate whether a cluster C is directly used in the visualization or whether it is required to recursively evaluate its child nodes (clusters, or atoms in the leaves). The evaluation criterion that decides whether a cluster C is going to be added to the display list is a function of l_C, the hierarchy level of C, l_max, the highest available hierarchy level, and t1, the far-field depth. When a cluster meets the criterion, it is added to the display list (Figure 8). The hierarchy traversal is performed on the CPU side, as a per-frame preprocessing step, before the display list is sent to the GPU. We have not observed any performance drop here, even for larger hierarchical trees.

In the second step, we render the clusters/atoms stored in the display list as spheres with an increased cluster radius that defines their area of influence. This area is defined by means of the solvent diameter 2R; that is, each cluster is rendered as a sphere with its cluster radius increased by 2R. The reasoning for choosing the solvent diameter as the area of atom influence is described by Varshney et al. [VBW94]. Moreover, we do not perform sphere ray-casting, but instead quickly splat the spheres using billboarding [TCM06].
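The top-down traversal that fills the display list can be sketched as follows. Note that the paper's exact level-selection criterion is not reproduced in the text above, so the use_cluster predicate below is a hypothetical stand-in based only on the stated inputs (hierarchy level, maximum level and far-field depth):

    def use_cluster(level, max_level, t_cluster, t1):
        # Hypothetical criterion: coarser (higher) hierarchy levels are only used
        # for clusters whose importance lies sufficiently beyond the far-field depth t1.
        if level == 0:
            return True                       # leaves (atoms) are always usable
        return t_cluster > t1 * (level / max_level)

    def collect_display_list(node, max_level, importance, t1, out):
        # node = (level, centre, radius, children); importance(centre) plays the role of t(p).
        level, centre, radius, children = node
        if use_cluster(level, max_level, importance(centre), t1) or not children:
            out.append((centre, radius))
        else:
            for child in children:
                collect_display_list(child, max_level, importance, t1, out)
        return out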
Instead of displaying these spheres, we store them in the so-called A-buffer. The theoretical framework describing the A-buffer was presented by Carpenter in 1984 [Car84]. Essentially, the A-buffer is a linked list of fragments generated for every pixel separately using atomic operations on the GPU. We define one global atomic counter that serves as the head pointer to the linked list. This counter is increased by one every time a new fragment is generated in the fragment shader. Each fragment record consists of the entry and exit depth of a rendered cluster, and the cluster id. The fragment record is then stored in the shared image at the location addressed by the global counter. It is worth mentioning that similar approaches for rendering molecules, defined by blobby objects and iterative blending, were presented by Szecsi and Illes [SI12] and Parulek and Brambilla [PB13], respectively.

In the third step, before the actual ray-casting, we sort the fragment records by entry depth. This allows us to easily step along those clusters during the subsequent ray-casting. Sorting is performed using CUDA, as it proved to be substantially faster (more than a factor of 4) than a fragment shader implementation in our experiments. Thus, for each image pixel (ray), we obtain a list of the clusters that influence the function evaluation along the ray, in ascending order.

In the fourth step, the scene is rendered. Here a ray is cast for each image pixel, where we generate an input 3-D point p based on the entry depth of the first sphere at the pixel location and the projection matrix. Afterwards, we employ a sphere tracing algorithm [Har94] that processes the ray in a stepwise fashion until the last sphere exit depth is reached or we hit the iso-surface, that is, |F| ≤ ε. The selection of ε can be used either to increase the surface detail or to improve the rendering performance. When a point on the ray is in an area where no sphere of influence is present, the point is automatically shifted to the first unprocessed sphere along the ray, that is, the next one in the linked list. This allows us to perform empty space skipping very efficiently.

Figure caption: Clusters are represented as spheres that are rendered into the A-buffer. The ray-casting is performed through a sphere tracing algorithm. In the end, we compute screen space ambient occlusion.

Here we describe the performance analysis, where the lengths of the individual fields across the molecule are varied. We show that the user has the possibility to alter the fields to either get more molecular detail with a decreased frame rate (FPS) or vice versa.
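Returning to the per-pixel ray-casting described above, a minimal CPU-side sketch of stepping along the depth-sorted fragment list with sphere tracing and empty-space skipping might look as follows (illustrative only; the actual implementation runs in GPU shaders and CUDA, and the helper names are assumptions):

    def trace_ray(origin, direction, fragments, field, eps=0.05, max_steps=256):
        # fragments: list of (entry_depth, exit_depth) records sorted by entry depth,
        # as produced by the A-buffer pass. field(p) evaluates the blended implicit
        # function; |field(p)| <= eps counts as a surface hit.
        if not fragments:
            return None
        idx, depth = 0, fragments[0][0]            # start at the first sphere entry
        last_exit = max(f[1] for f in fragments)
        for _ in range(max_steps):
            if depth > last_exit:
                return None                        # ray has left the last sphere of influence
            # Empty-space skipping: jump to the next fragment if we are between spheres.
            while idx < len(fragments) and depth > fragments[idx][1]:
                idx += 1
            if idx < len(fragments) and depth < fragments[idx][0]:
                depth = fragments[idx][0]
            p = [o + depth * d for o, d in zip(origin, direction)]
            f = field(p)
            if abs(f) <= eps:
                return p                           # iso-surface hit
            depth += abs(f)                        # sphere-tracing step by the distance bound
        return None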
While a comprehensive evaluation of the performance of our method with respect to all parameter combinations (varying lengths of all three fields, the length of the transition area and the iso-surface precision parameter ε) is not feasible, we demonstrate its performance using several indicative examples. For the hierarchical abstraction, we utilize the fact that the performance is linearly dependent on the number of clusters used. For instance, in Figure 1, when using just 20% (b) and 8% (d) of the spheres compared to the full atom count, (a) and (c), the performance increases almost 2× and 3×, respectively. Therefore, we focus our performance analysis on the surface abstraction when using the full atom count. We introduce an evaluation based on several examples of molecules of various sizes where we alter the lengths of the near-, mid- and far-field, while choosing a fixed size for the transition area as well as for the precision parameter. We set td = 4R and ε = 0.05R, where R is the solvent radius. The performance measurements are performed on a workstation equipped with two 2 GHz processors, 12.0 GB RAM and an NVIDIA GeForce GTX 690 GPU.

It is important to mention that for each frame we perform all the steps presented in Section 5. One of the biggest advantages of our real-time implicit function evaluation is the possibility of varying the function parameters anywhere in space, while preserving an interactive system response. To generate a suitable description of the performance based on the lengths of the three fields, we store all FPS values for each distribution of fields. Afterwards, we employ ternary plots displaying the coverage of the three areas in barycentric coordinates. The colours, from yellow to red, encode the achieved FPS. For simplicity, we use the relative length of the fields, expressed as the percentage of the molecule participating in each field; for example, t0 = 1/3 and t1 = 2/3 represent equally distributed fields over the molecule, which corresponds to the central point in all four plots. This evaluation method is applied to four molecules (Figure 9): Aquaporin (1852 atoms) (a), proliferating cell nuclear antigen (12 555 atoms) (b), phospholipase bound to the lipid membrane (34 490 atoms) (c), and the asymmetric chaperonin complex (58 674 atoms) (d).
Through our LOD concept we are able to boost the rendering performance of molecular models by 5× to 10×, while keeping the most detailed SES representation for the parts of the molecule closest to the camera. Additionally, when applying the hierarchical representation, we achieve up to 20× the frame rate of the full SES representation. All three surface representations are evaluated on-the-fly during ray-casting, which provides us with great flexibility with regard to enhancing either the performance or the detail for dynamic data sets.

The utilization of the hierarchical abstraction brings two major limitations. The first is the loss of surface precision when exploiting the cluster hierarchy compared to using the full atom count. Here, our shading abstraction helps to hide most of the surface dissimilarities (Figure 10). Nevertheless, to compute the error quantitatively, we would first need to evaluate the most suitable parameters for the clustering method, for example, the distance metric and stopping criteria, to reduce the error there first. The cluster error increases as we move up in the hierarchy. Nevertheless, the highest levels are usually employed only when depicting contextual molecular parts that are farther away from the viewer.

The second limitation of utilizing the hierarchical abstraction is the requirement of performing the sequential clusterings. This has to be repeated for each new structural modification. For molecules containing a few thousand atoms, forming 5 to 10 hierarchical clusterings can take up to 1 s, while for larger molecules (molecular systems) it can take up to a minute. For example, forming five levels for the lipid-protein complex (Figure 1) took 20 s, while generating five levels for the asymmetric chaperonin complex (58 674 atoms) took 80 s (Figure 11). Nevertheless, we can already see the potential of our approach for visualizing mesoscopic whole-cell simulations [FKE12]. Here a cluster hierarchy can be formed in a pre-computation step for all acting molecules. Another potential solution for performing clustering on dynamic structures is to exploit fast GPU-based bounding volume hierarchies (BVHs). For instance, Bittner et al. introduce a GPU-based solution to update a BVH tree to minimize the overall cost function [BHH13], which in our case could be based on one of the abstraction levels.
We have demonstrated our method to biologists and a scientific illustrator, where we acquired a feedback about the overall visual quality and possible extensions of the proposed technique.Firstly, the illustrator was pleased with the results and the originality of the proposed concept.On the other hand, it was suggested to improve the contour rendering for the SES portion of the model.Here the main issue he raised was that the contours can appear jaggy which is due to C 1 discontinuities on the iso-surface of the SES model.Such discontinuous areas are also hard to track via the sphere tracing algorithm, which we also employ for the contour predicate.Additionally, the problem may be amplified by the fast curvature approximation we employ and a more costly scheme could help to overcome it.Overall, however, these issues were not seen as critical.Furthermore, we were suggested to incorporate additional silhouettes into the final visualization to clearly delineate boundaries between distinct molecules in compound systems.While not the focus of this paper, we found that this is an important note to be considered in our future work.Domain experts found the achieved visuals original and helpful, mainly due to the interplay between the visualizations and the precision.Furthermore, they suggested to apply the proposed method to more application-oriented scenarios. Summary We have proposed a novel approach for visualization of molecular surfaces.Our approach is capable of rendering large protein complexes interactively, while rapidly reducing the amount of displayed primitives, and at the same, keeping the visual appearance similar to the original data.Our method utilizes the level-of-detail concept by means of three different molecular surface models, SES, Gaussian kernels and van der Waals spheres combined in one visualization.Moreover, we introduced three shading levels that are aligned with the three surface models.For the realization, we took an inspiration from illustrations showing densely populated scenes with similar objects (spheres model with almost no detail), which are smoothly interconnected with highly detailed structures (SES model with full details) through the visual abstraction (Gaussian kernels model with fading out details).Finally, we proposed a new hierarchical abstraction that approximates the molecular atoms with a set of clusters that are employed in the final visualization. The importance function that represents the choice of the surface, shading and hierarchical models is based on the distance from the camera.We showcased how this can be used effectively to increase the rendering performance, even for large molecules, by interactive specification of level-of-detail boundaries.The entire rendering pipeline is performed on-the-fly.We introduced an LOD shading scheme with respect to all three fields individually.We preserved a seamless transition of depth, figure and shape visual cues using interpolation of shading and model schemes.A figure-ground ambiguity is solved via the utilization of the silhouette.The silhouette also keeps the entire molecule, even divided into distinct fields, perceptually unified. 
Figure 1: Two molecular examples, Tubulin and phospholipase bound to the lipid membrane, demonstrating the utilization of our seamless visual abstraction. We employ three different surface representations [solvent-excluded surface (SES), Gaussian kernels and van der Waals spheres], their corresponding shading abstractions (diffuse shading and contours, constant shading with contours, constant shading without contours) and the hierarchical representation. The application of the individual levels is based on the distance to the camera; that is, the closest surface is based on the highest surface, shading and hierarchical levels while the farthest is displayed via the lowest ones. In the presented examples we achieved a 5× to 10× speed-up as compared to the full SES representation (a and c), and 10× to 20× when additionally applying the hierarchical abstraction (b and d).

Figure 2: Object constancy employed in visual arts by Winsor McCay, 'When Black Death Rode'.

Figure 3: Left: The organization of the three surface and shading levels according to the importance function t(p) defined by the increasing distance from the camera. In the overlapping zones, the representations are merged using linear interpolation. The molecule is displayed with the full atom count, 1852 atoms. Right: The extraction of cluster hierarchies based on t(p). As the distance from the camera increases, the clusterings are retrieved from the higher hierarchical levels representing bigger clusters. The illustration shows the exploitation of a hierarchical abstraction containing 784 clusters, that is, a 42% compression ratio.

Figure 4: Four hierarchical levels on immunoglobulin. The first level is represented by the full atom count (a), 12 530 atoms. The second level (b) approximates the atoms in the first level by a set of 6990 clusters, which represents 55.7% of the full atom count. The other levels, (c) and (d), contain 3122 (24.9%) and 1576 (12.5%) elements.

Figure 6: Comparison of three clustering algorithms. The top row presents the first three levels of k-means clustering, the second row demonstrates the affinity propagation method and the third row stands for fast hierarchical clustering. The percentage represents the ratio between the cluster render list size and the full atom count. Note that the ratios are different due to the characteristics of the employed algorithms providing dissimilar clusters.

Figure 7: An illustration of the rendering pipeline. The formation of the cluster display list is determined by Equation (1). Clusters are represented as spheres that are rendered into the A-buffer. The ray-casting is performed through the sphere tracing algorithm. In the end, we compute screen space ambient occlusion.

Figure 8: An example of two display lists. During the cluster tree traversal, all the nodes that fulfill Equation (1) are added to the list. When zooming out from the molecule, the red display list becomes more reduced and abstracted than the green display list.

Figure 9: Ternary plots showing the performance analysis evaluated on four distinct molecular structures. The analysis is based on the lengths of the individual fields (SES, near-field; Gauss, mid-field; spheres, far-field). (a) Water channel (Aquaporin). (b) Proliferating cell nuclear antigen. (c) Phospholipase bound to the lipid membrane. (d) Asymmetric chaperonin complex. Note that the achieved FPS are, in the case of the camera-based importance function, directly proportional to the lengths of the areas; that is, prolongation of the near-field leads to decreasing FPS while, on the other hand, contraction of the far-field increases FPS.

Figure 10: Comparison of zooming in towards the molecule (proliferating cell nuclear antigen) performed using the full atom count (12 555 atoms, top) and the hierarchical representation (bottom). The display list contains, from left to right: 3716, 4527, 5292 and 7516 clusters.
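The camera-distance-driven selection and blending of the three surface models summarized in Figure 3 can be sketched as follows. This is a simplified, assumed formulation: the importance value t is taken as already normalized to [0, 1], the transition zones are symmetric around the boundaries t_0 and t_1 with half-width w, and the blending is plain linear interpolation, as stated in the caption.

```python
import numpy as np

def field_weights(t, t0, t1, w=0.05):
    """Blend weights (SES, Gaussian, spheres) for a normalized camera distance t in [0, 1].

    Assumes the two transition zones do not overlap, i.e. t1 - t0 > 2 * w."""
    def ramp(x):                              # linear interpolation inside a transition zone
        return float(np.clip(x, 0.0, 1.0))
    a = ramp((t - (t0 - w)) / (2 * w))        # 0 -> pure SES, 1 -> pure Gaussian kernels
    b = ramp((t - (t1 - w)) / (2 * w))        # 0 -> pure Gaussian kernels, 1 -> pure spheres
    w_ses = 1.0 - a
    w_spheres = b
    w_gauss = 1.0 - w_ses - w_spheres
    return w_ses, w_gauss, w_spheres

# Example: equally distributed fields, sampled from the near-field to the far-field.
for t in (0.1, 1 / 3, 0.5, 2 / 3, 0.9):
    print(t, field_weights(t, t0=1 / 3, t1=2 / 3))
```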
2017-04-12T00:33:07.908Z
2014-09-01T00:00:00.000
{ "year": 2014, "sha1": "bb7414bc5b0188e0c6607092026658e0dfe9cc47", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/cgf.12349", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "34a82ef490aea30503bd7033bd633679bbada4b3", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
239456949
pes2o/s2orc
v3-fos-license
Epigenetic interplay between methylation and miRNA in bladder cancer: focus on isoform expression Background Various epigenetic factors are responsible for the non-genetic regulation on gene expression. The epigenetically dysregulated oncogenes or tumor suppressors by miRNA and/or DNA methylation are often observed in cancer cells. Each of these epigenetic regulators has been studied well in cancer progressions; however, their mutual regulatory relationship in cancer still remains unclear. In this study, we propose an integrative framework to systematically investigate epigenetic interactions between miRNA and methylation at the alternatively spliced mRNA level in bladder cancer. Each of these epigenetic regulators has been studied well in cancer progressions; however, their mutual regulatory relationship in cancer still remains unclear. Results The integrative analyses yielded 136 significant combinations (methylation, miRNA and isoform). Further, overall survival analysis on the 136 combinations based on methylation and miRNA, high and low expression groups resulted in 13 combinations associated with survival. Additionally, different interaction patterns were examined. Conclusions Our study provides a higher resolution of molecular insight into the crosstalk between two epigenetic factors, DNA methylation and miRNA. Given the importance of epigenetic interactions and alternative splicing in cancer, it is timely to identify and understand the underlying mechanisms based on epigenetic markers and their interactions in cancer, leading to alternative splicing with primary functional impact. Supplementary Information The online version contains supplementary material available at 10.1186/s12864-021-08052-9. Background Cancer is a complex disease that is caused by alterations in the genome and epigenome. The alterations in cancer are different in each person as the tumor accumulates additional changes occur. As a result, the genetic and epigenetic changes in the same tumor could be different among diverse cells. Precision medicine is an emerging approach to the treatment of cancer by developing targeted therapies taking into account patients' environmental, lifestyle and genomic variabilities [1] . To apply a precision medicine approach to cancer, the fundamental understanding of genomic and epigenetic abnormalities that cause carcinogenesis and drive its progression is essential. Understanding the epigenetic abnormalities is very challenging, as various epigenetic machinery interacts with each other in an integrated manner to maintain global expression pattern [2]. Thus, many largescale collaborative initiatives have been undertaken to generate large multi-omics datasets in cancer like The Cancer Genome Atlas (TCGA) and The International Cancer Genome Consortium (ICGC), and multiple data integration methods have been developed to understand multi-omics markers associated with clinical outcomes [3][4][5][6][7][8][9][10][11][12][13]. Many abnormalities in cancer are caused by epigenetic changes in DNA methylation and microRNA (miRNA) [14,15]. Previously, we identified interactions with methylation and miRNA that were associated with gene expression and survival outcome [13]. However, methylation and miRNAs are also known to play a role in isoform usage in cancer [16]. 
In another study, we looked at the effects of alternative splicing (AS) on miRNA binding sites in bladder cancer to conclude that understanding transcript isoforms is essential to understand gene regulatory mechanisms mediated by miRNA [17]. Alternative splicing is an underlying contributor to biological complex and differences. Each gene in eukaryote cell is composed of two distinct blocks of sequences, exons and intron. As exon is the region encoding segments of the protein, exon is included, and introns are removed by alternative splicing mechanism during transcription. Some exons are also selective, which means that some exons may also be removed from the nascent mRNA, leading to a different combination of exons in the final transcript and are also implicated in a variety of human diseases [18][19][20].~95% of human genes are alternatively spliced. Conservative estimates of AS show that at least 50% of exons are alternatively spliced [21]. That is, most genes can each produce an entire array of potentially unique proteins. Even if the same genes are actively transcribed in two different cells, their proteins can be different depending on how those genes are spliced. As the regulatory molecules (i.e., miRNA and methylation) are a type of epigenetic factors affecting gene expression [22], gene regulation and biological complexity may be more complicated when alternative splicing interact with these regulatory molecules (i.e., methylation and miRNA). Cancer genes are down-or up-regulated by the methylation status in promoter regions, hyper-methylations in promoter regions of oncogenic genes and hypo-methylated in promoter regions of tumor suppressor genes, respectively [23]. Due to the nature of GC contents; higher in exon compared to intron and higher in constitutive exon compared to alternative splicing exon, methylation may be differentiated across exons and introns by splicing status [24][25][26]. Furthermore, hypo-methylated intron has been shown to be more retained in breast cancer patients [27]. In the purpose of integrating methylation with genetic regulation, EpiMethEx [28] is one of the well-developed tools that directly associate methylation with transcript isoform. In addition to methylation, splicing occurring in 3′ UTR may affect regulatory effect of miRNA [29]. When exon encompassing miRNA binding site is skipped (i.e., exon skipping event) or partial exon is alternatively spliced (i.e., 5′ or 3′ splice site event), given mRNA maybe not be repressed by miRNA [17]. Reversely, inclusion of new exon or intron (i.e., retained intron event) may potentially provide additional miRNA binding site resulting in a reduced amount of mRNA product [17]. In our study, we seek to study epigenetic interactions (i.e., miRNA and methylation) in gene regulation through alternative splicing. We classified the interaction into two terms, synergistic and antagonistic in conjunction with alternative splicing status. We then evaluated whether the differences in isoform expression resulting from epigenetic interactions are associated with survival ( Fig. 1). Thus, in this study, we put forward a method to identify the effects of methylation and miRNA interaction associated with isoform expression and its further association to survival. 
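To illustrate the point made above about splicing altering miRNA targeting, the following minimal Python sketch checks whether a miRNA binding-site interval survives in a given isoform's retained exons. The coordinates, exon models and binding site are hypothetical; in the study such information would come from transcript annotations and the TargetScan, microRNA.org and miRTarBase resources described in the Methods.

```python
# Minimal sketch: does an isoform retain a given miRNA binding site after splicing?
# All coordinates and identifiers below are hypothetical placeholders.

def site_retained(binding_site, exons):
    """True if the binding-site interval is fully contained in one retained exon."""
    start, end = binding_site
    return any(e_start <= start and end <= e_end for e_start, e_end in exons)

isoform_a_exons = [(100, 250), (400, 620), (900, 1100)]   # exons kept in isoform A
isoform_b_exons = [(100, 250), (900, 1100)]               # middle exon skipped in isoform B
mirna_site = (450, 472)                                    # site targeted by a miRNA

print(site_retained(mirna_site, isoform_a_exons))  # True  -> isoform A can be repressed
print(site_retained(mirna_site, isoform_b_exons))  # False -> exon skipping removes the site
```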
Results and discussion Methylation and miRNA interaction associated with isoform expression After applying the likelihood ratio (LRT) test on the full and reduced model, 136 out of 2,561,305 combinations were found to be significant (Bonferroni adjusted p-value < 0.05). Altogether, there were 61 unique isoforms, 105 methylation probes and 51 miRNAs across 56 genes. The number of samples varied across each combination, with a minimum of 294 and mean of 388.3, due to missing values. All the significant combinations, number of samples, beta values from the full model, correlation between isoform, methylation, miRNA pairs, and Cox regression p-values are provided in Table S1. The distribution of the direction of effect determined by beta values is shown in Table 1. It can be noted from Table 1 that 59 methylation probes have a negative direction of effect, and 77 methylation probes have a positive direction of effect, holding 43%. However, in the case of miRNA about 76% (103/136) of miRNAs have the negative direction of effect, indicating most of them downregulate isoform expression. Additionally, Amuran et al. compiled a list of 106 miRNAs associated with bladder cancer by reviewing the literature and about 71% (97/ 136) miRNAs in the combinations were present in the list of miRNAs deregulated in bladder cancer [30]. Patterns of methylation and miRNA interaction associated with isoform expression We stratified the interaction pattern according to methylation probe's location; promoter, gene body, and 3'UTR. 44 methylations (19 unique transcripts), 66 methylations (32 unique transcripts) and 10 Fig. 1 Overview of the study. The diagram illustrates various steps in the study -formation of isoform, miRNA and methylation pairs, data quality control, LRT and survival analysis methylations (8 unique transcripts) were located in the promoter, gene body and 3'UTR regions, respectively. In general, 3′ UTR is considered part of gene body, but we separated 3′ UTR region from gene body as miRNA can directly interact with methylation in 3′ UTR region only, as miRNA binds to 3'UTR region. The interaction is first defined as a methylationdominant or miRNA-dominant (See Methods). As shown in Fig. 2B, interactions in promoter region (i.e., a location of methylation probe) was more likely to have miRNA-dominant regulation. Notably, very few interactions had methylation -dominant cases (less than 10%). However, interestingly methylation exerted dominant regulation effect in gene body. We then further stratified the interactions, synergetic-or antagonistic-effect. Interestingly, the antagonistic effect was observed little more than synergetic in the promoter and 3′ UTR region. On the other hand, a higher number of synergetic effect combinations were observed in the gene body (Fig. 2C). Differential isoform expression and association with survival outcome We were interested in understanding how the low/high expression of methylation and miRNA expression in pairs significantly associated with isoform expression altered the isoform expression. So, we split samples were split into LL and HH groups. To further understand the implication of these changes on survival of the patients we performed Kaplan Meier survival analysis between the groups. The differential isoform expression between HH and LL (2-group test) was significant for 100 Table 1 Distribution of direction of effect of methylation, miRNA and interaction term. 
(+) for synergistic and (−) for antagonistic effect combinations (out of 136) at (t-test P < 0.05). Further, the differential expression between 4 groups HH, LL, HL, and LH (4-group test) were also examined, and 98 combinations were significant at (ANOVA P < 0.05). In addition to that, 126 combinations were significant in either 2-group test or 4-group test, and 76 combinations were significant in both 2-group test and 4-group test. The results show that for most of the combinations that have significant interaction between methylation and miRNA, the differential expression can be observed between HH, LL, HL, and LH groups. Subsequently, the samples were split into HH and LL groups to determine if there is any difference in survival of patients between the groups. Cox regression was run on 136 significant combinations. Out of 136 combinations, 13 were associated with survival outcome at Cox p-value threshold < 0.05 (Table 2). Of the 13 combinations that were significantly associated with survival outcome, isoforms from 11 combinations were differentially expressed between HH and LL groups and the other two isoforms between the LL, HH, LH and HL groups. The isoform expression was significantly higher in LL group in 7 of the 13 combinations and lower expression in the four remaining combinations ( Fig. S1-S13). Case study: CAV1, TGFBR3, and RND3 Figure 3 shows plots for isoform ENST00000341049 in gene CAV1. CAV1 is known to be associated with highgrade bladder cancer as an oncogenic membrane protein, and its overexpression is known to be associated with bladder cancer progression [31,32]. As observed in Fig. 3c, the isoform has significantly higher expression in the LL group (Fig. 3a, red points) than in the HH group ( Fig. 3a, cyan points). Since higher expression of CAV1 is associated with cancer progression, the survival rate should be lower for LL group. As anticipated, the survival rate was significantly lower (Cox p-value < 4.9 × 10 − 3 ) for LL group (Fig. 3b). That is, disruption in regulation of methylation (i.e., cg04474049) and miRNA (i.e., hsa-let-7c-5p) may contribute to bladder cancer progression. One of the isoforms, ENST00000212355 (gene TGFBR3) from a combination associated with survival, is targeted by miRNA -hsa-let-7c-5p, which is known to be a tumor suppressor and acts by downregulating TGFBR3 post transcriptionally [33] (Fig. 4a). Moreover, TGFBR3 knockout is known to reduce tumor size. From the interaction plot in Fig. 4b, it can be observed that the isoform expression is low when methylation and miRNA are both low. However, when methylation is higher (+ 1 sd), the isoform expression decreases with an increase in miRNA expression, but the slope is comparatively smaller. Thus, the LL group has lower isoform expression than the HH group as seen in Fig. 4c. Consequently, LL group has a higher survival rate as compared to the HH group (Fig. 4d). The other isoforms ENST00000425042 and ENST00000263895 of HID1 and RND3 respectively are part of combinations that are associated with survival. HID1 and RND3 are known to be downregulated in various cancers [34,35]. The loss of function of HID1 is known to be associated with the development of cancer [34]. Consistent with the literature, it was observed that the LL group has a significantly lower expression of isoform ENST00000425042 and also a lower rate of survival (Fig. S5). 
The group which has significantly higher isoform expression (t-test P < 0.05) c The group which has higher survival rate (Cox regression P < 0.05) RND3 is known to be downregulated by its target miRNA, hsa-miR-200c-3p, in the combination. The downregulation of RND3 leads to higher expression of CCND1, which can lead to oncogenesis and tumor progression [35]. Besides, some of the genes could also show oncogenic properties in cancer. Two other genes that are part of combinations associated with survival, PMEPA1 and THBS2 are known to be upregulated in cancer [36,37]. Moreover, PMEPA1 knockout is known to impair tumor growth, and THBS2 overexpression is known to be associated with vascular invasion, advanced primary tumor status and nodal metastasis [37]. Fatty acids play an important role in cancer cells, as cancer cells need large amounts of fatty acids to grow [38]. Thus, fatty acid metabolism is involved in cancer progression. Two of the genes, ACOT7 and SCD5, with isoforms ENST00000377855 and ENST00000319540 respectively, are part of the "Biosynthesis of unsaturated fatty acids" pathway (KEGG 2019 Human). The pathway was also significantly enriched (p-value = 0.00011 and adjusted p-value = 0.035) based on the enrichment test run using genes of all isoforms from combinations that were associated with survival, using Enrichr [39,40]. Methylation and miRNA interaction patterns Many different interaction patterns of miRNA and methylation associated with isoform expression were observed. Especially, more miRNA dominant interactions were observed in promoter region, and more methylation dominant interactions were observed in gene body region. In fact, methylation within promoter region alters gene expression by affecting binding of transcription factors, and its regulation prior to that of miRNA. In other words, mRNA expression may be susceptible to be regulated by miRNA which is a next step of the methylation regulation. That is, the basis of this knowledge may contribute to more observation of dominant miRNA regulation with interactions with methylation within promoter region. Unexpectedly, in methylation within gene body, we found more methylation dominant interaction. The Methylation within gene body is known to relate to splicing processing, cause a temporal pause of the transcription process that help correct splicing [25]. Splicing regulation is very complex and occur generally at the mRNA processing after gene expression regulation [41]. Thus, these methylations may affect mRNA expression regulation more constantly than methylation in promoter. Although we separated methylation in 3′ UTR from gene body to understand patterns when interaction of miRNA and methylation occurred in the same location, we did not find distinct characteristics. However, it may be caused by a small number of interactions in the case. In addition, there was also different patterns of synergistic and antagonistic effect between methylation in promoter and gene body (Fig. 2B). As we discussed above, we observed the uneven distribution of the interactions across gene regions (Fig. 2B). To verify if this difference may be due to the unique enrichment of underlying distribution of methylation in certain gene regions for interactions with miRNA or not, we counted the number of underlying methylation probes in each gene region; promoter, gene body, and 3′ UTR. The methylation was most counted in gene body region (325,147 probes), which is followed by promoter (205,175 probes) and 3′ UTR region (26,228 probes) (Fig. 
S14), in which the underlying distribution can be biased by the length of each region: gene body is the longest in length. Taken together, we found that the 3′ UTR has the smallest number of the underlying methylation but the most enrichment of the interaction with miRNAs, suggesting that methylations interacting with miRNA may be enriched in 3′ UTR which they are colocalized. Out of 13 significant combinations associated with survival, seven combinations were synergetic interaction and remaining six combinations were antagonistic interactions. Particularly in synergetic combinations, three combinations had positive methylation and miRNA correlation, and the remaining four combinations had negative methylation and miRNA correlation. Figure 5 summarizes all the combinations into synergetic and antagonistic categories. Additionally, if the four groups were divided based on synergetic and antagonistic effect, it can be observed in Fig. 5 that most of the isoform expression between groups LL, HH, LH, and HL are similar in the same group. For instance, SGCD_ ENST00000435422, HID1_ENST00000425042 and TGFBR3_ENST00000212355 have synergetic effect with methylation and miRNA being positive. The isoform expression between LL, HH, LH, and HL groups are similar. Specifically, all three had lower isoform expression in LL group and higher isoform expression in other groups. The opposite also holds true in case of 2nd group with negative synergetic interaction where isoform expression is higher in LL group for all four combinations as compared to other groups. As we showed that a combination of methylation and miRNA could provide an improved knowledge of the genetic regulation underlying bladder cancer and the methylation in this study and miRNA pattern is a unique characteristic across cancer types [42,43], our approaches could be expanded to other cancers if the matched data (methylation and miRNA) is available. Conclusion In this study, we considered the use of TCGA bladder cancer data to show epigenetic interactions between methylation and miRNA associated with survival in bladder cancer. The method used successfully identified 136 significant methylation and miRNA interactions that were associated with isoform expression. Further, out of 136 significant interactions, 13 were significantly associated with survival. Further, it also observed that a greater number of miRNA dominant interactions were observed in the promoter region whereas, a greater number of methylation dominant interactions were observed in the gene body. Additionally, isoform expression patterns Synergetic and antagonistic interactions. The blue arrow shows the correlation between miRNA and isoform and yellow arrow shows correlation between methylation and isoform. The x-axis for the line plot represents miRNA expression and y axis represents isoform expression. The lighter of the 3 lines is methylation − 1 SD, the dark solid line is +1SD and the medium one is the Mean methylation. The boxplot shows plot of groups LL, HH, LH and HL from left to right were also observed with synergetic and antagonistic interactions. This shows that miRNA, methylation and isoform expression data can be used to study interactions between methylation and miRNA that are associated with survival outcome. Further, this study also shows different categorizations and patterns of interaction at various sites. 
The findings in this study could elucidate some of the complex epigenetic mechanisms involved in carcinogenesis and cancer progression, which could aid in the development of new targeted therapies and be valuable when applied to precision medicine. Dataset and quality control The Cancer Genome Atlas (TCGA) data was obtained from Xena browser (http://xena.ucsc.edu). Xena browser provides pre-compiled datasets derived from NCI's Genomic Data Commons (GDC) public resource for further bioinformatics analysis [44]. The normalized isoform expression data using RSEM FPKM was downloaded from the TCGA Pan-Cancer (PANCAN) section of Xena browser. Further, the clinical data was also downloaded from the PANCAN section, as it contains the latest updated survival data. However, the methylation and miRNA data were obtained from TCGA Bladder Cancer (BLCA) section. The subset of BLCA patients was obtained for the isoform expression data using sample IDs from the clinical file. After excluding samples from "Solid tissue normal" as defined by TCGA sample type code "11", there were 407 BLCA samples with isoform expression data, 415 with methylation data and 410 with miRNA data. Additionally, five samples were removed from clinical data because of missing, age at diagnosis, survival time, AJCC pathologic tumor stage, or histological grade, resulting in 409 samples. Consequently, the common samples between all four datasets were extracted, leading to a total sample count of 399. The clinical demographics for 399 samples are shown in Table 3. Further quality control (QC) steps were applied to each dataset. Initially, before QC, there were 198,619 isoforms, 485,577 methylation probes and 2588 miRNAs. Only isoforms that have FPKM threshold ≥0.1 in more than 50% of samples and genes with at least two transcript isoforms were selected for the analysis. Further, methylation probes with all 'NA' values were removed, and miRNAs with expression value missing in more than 75% of samples were removed. Finally, 67,627 isoforms, 396,065 methylation probes, and 706 miRNAs passed the QC criteria. Table 4 shows the data types, platform and number of features for each data type after QC. To better interpret the interactions, the isoform expression, methylation and miRNA expression values were centered by subtracting with respective mean. Additionally, any isoform, methylation and miRNA expression values that were not in the range [Q1-3 × IQR, Q3 + 3 × IQR] were considered outliers and were removed. Further information and links to the files used from Xena browser are listed in "Availability of data and materials" section. Isoform, methylation and miRNA binding sites Most of miRNA target predictions are dependent on sequence similarity between mature miRNA and 3'UTR of mRNA. These predictions have enormous false-positive rate due to lack of experimental validation. Thus, in order to obtain comprehensive and reliable miRNA target site information, reducing the false-positive rate, we combined miRNA target predictions from databases generated using two different prediction methods and one experimentally validated data set: 1) TargetScan (release 7.0) [45], 2) MicoRNA.org [46] which computationally predicted miRNA target sites based on conserved complementarity and the miRanda algorithm between targets of miRNAs and mRNAs, respectively and 3) miRTarBase that identified relations between miRNA and mRNA based on experimental validations through reporter assay, western blots, and etc. [47]. 
In particular, we only included predictions with high confidence scores (alignment score ≥ 120 and binding energy ≤ − 7.0) of miRanda algorithm from the MicoRNA.org database in this study. We carried out data quality control in three steps as follows: first, we tabulated miRNA and mRNA pairs using miRTarBase. Second, for these pairs, we obtained genomic coordinates of mRNA-target sites in 3′ UTR by matching miRNA IDs (i.e., hsa-miR-199) with TargetScan Methylation and miRNA interaction To identify the interactions between methylation and miRNA with respect to isoform expression, two linear models were used. The full model consisted of a linear combination of methylation, miRNA and an interaction term, whereas the reduced model only contained a linear combination of methylation and miRNA. However, both models were adjusted for the same covariates -age at diagnosis, gender, AJCC pathologic tumor stage, and histological grade. The covariates were obtained from the clinical dataset and, their distributions are shown in Table 3. Specifically, the full model was defined as: isoform-expression~methylation + miRNA + methylation * miRNA + covariates and the reduced model was defined as: isoform-expression~methylation + miRNA + covariates. The significance of the interaction was determined by applying LRT between the full and reduced model. Further, the LRT p-values were adjusted for multiple testing using Bonferroni correction, and any combination with Bonferroni corrected p-value < 0.05 was considered significant. Categorization of miRNA and methylation pairs based on methylation probe location As shown in Fig. 2A, for the significant miRNA and methylation pairs, we divided them into three categories according to the location of methylation probes in intragenic regions, promoter (upstream 2000 bp), gene body (exons and introns), or 3'UTR. Then, for each category, we defined miRNA, methylation, or both dominant regulation of isoform expression based on the correlation coefficient value between either of miRNA or methylation and isoform expression. In other words, as shown in Fig. 2A, an absolute correlation coefficient value of miRNA greater than 0.3 and methylation less than 0.3 with isoform expression was considered as miRNA dominant regulation; on the other hand, the correlation coefficient value of miRNA less than 0.3 and methylation more than 0.3 with isoform expression was considered as methylation dominant. In addition, if an absolute correlation coefficient value of both miRNA and methylation have more than 0.3 or less than 0.3 with isoform expression, it was defined as strong effect and weak effect respectively. Determination of synergetic or antagonistic effect of methylation and miRNA interaction on mRNA expression We defined as a synergistic or antagonistic effect of methylation and miRNA pairs on isoform expression based on correlation coefficient values. Synergistic effect on isoform expression is the case when the both has the same direction of correlation with isoform expression (i.e. positive/positive or reverse/reverse correlation), whereas antagonistic effect is the case when they have different directions (i.e., a positive/reverse). Survival analysis and differential isoform expression To further investigate differential isoform expression and difference in survival of the patient groups with high/low methylation and miRNA expression, the samples were split into nine groups based on methylation and miRNA expression values together. 
The groups were created by splitting the methylation data into three quantiles and miRNA data into three quantiles, as shown in Fig. 2A. The two extreme groupsa group with high methylation and high miRNA expression (HH) and group with low methylation and low miRNA expression (LL) were selected to run overall survival analysis. The survival analysis was run using Cox regression, adjusting for covariates -age at diagnosis, gender, AJCC pathologic tumor stage, and histological grade. Any combination with cox regression p-value < 0.05 was considered significant. Further, the differential isoform expression was also analyzed between HH and LL group using the t-test. Additionally, differential isoform expression between HH, LL, LH, and HL groups were also analyzed using ANOVA. Any p-value from t.test and ANOVA below 0.05 was considered significant.
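As a compact, non-authoritative sketch of the analysis pipeline described in this Methods section (the full/reduced interaction models with a likelihood ratio test, the tertile-based HH/LL grouping, Cox regression and the differential expression tests), the Python code below runs the same steps on a small synthetic data set. All column names are illustrative, statsmodels, lifelines and scipy are assumed tooling choices rather than the software actually used in the study, and the Bonferroni correction is shown for the stated number of tested combinations.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2, ttest_ind, f_oneway
from lifelines import CoxPHFitter

# Synthetic stand-in for one (isoform, methylation probe, miRNA) combination.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "isoform": rng.normal(size=n),
    "methylation": rng.normal(size=n),
    "mirna": rng.normal(size=n),
    "age": rng.integers(40, 85, size=n).astype(float),
    "gender": rng.choice(["male", "female"], size=n),
    "stage": rng.choice(["II", "III", "IV"], size=n),
    "grade": rng.choice(["low", "high"], size=n),
    "time": rng.exponential(scale=1000.0, size=n),    # follow-up time
    "event": rng.integers(0, 2, size=n),              # 1 = death observed
})

# 1) Full vs. reduced model and the likelihood ratio test for the interaction term.
covars = "age + C(gender) + C(stage) + C(grade)"
full = smf.ols(f"isoform ~ methylation * mirna + {covars}", data=df).fit()
reduced = smf.ols(f"isoform ~ methylation + mirna + {covars}", data=df).fit()
lr_stat = 2.0 * (full.llf - reduced.llf)
p_lrt = chi2.sf(lr_stat, full.df_model - reduced.df_model)
p_bonf = min(1.0, p_lrt * 2_561_305)                  # correction over all tested combinations

# 2) Correlation-based synergy/antagonism classification.
r_meth = df["isoform"].corr(df["methylation"])
r_mirna = df["isoform"].corr(df["mirna"])
effect = "synergistic" if np.sign(r_meth) == np.sign(r_mirna) else "antagonistic"

# 3) Tertile split of methylation and miRNA; keep the extreme joint groups LL and HH.
df["group"] = (pd.qcut(df["methylation"], 3, labels=["L", "M", "H"]).astype(str)
               + pd.qcut(df["mirna"], 3, labels=["L", "M", "H"]).astype(str))
extreme = df[df["group"].isin(["LL", "HH"])].copy()
extreme["is_HH"] = (extreme["group"] == "HH").astype(int)

# 4) Cox regression (HH vs. LL), adjusted for the clinical covariates.
cox_df = pd.concat([extreme[["time", "event", "is_HH", "age"]],
                    pd.get_dummies(extreme[["gender", "stage", "grade"]], drop_first=True)],
                   axis=1).astype(float)
cph = CoxPHFitter().fit(cox_df, duration_col="time", event_col="event")

# 5) Differential isoform expression: HH vs. LL t-test and 4-group ANOVA.
hh = extreme.loc[extreme["group"] == "HH", "isoform"]
ll = extreme.loc[extreme["group"] == "LL", "isoform"]
t_res = ttest_ind(hh, ll, equal_var=False)
anova_res = f_oneway(*[df.loc[df["group"] == g, "isoform"] for g in ["LL", "HH", "LH", "HL"]])

print(p_lrt, p_bonf, effect)
print(cph.summary.loc["is_HH", ["coef", "p"]])
print(t_res, anova_res)
```

In practice the same loop would be repeated per candidate combination on the real TCGA matrices after the quality-control filters described above, with the interaction p-values Bonferroni-corrected across all combinations.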
2021-10-23T06:16:58.023Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "f6e779527c34ba469fce2f539cbba9eb1db56ae4", "oa_license": "CCBY", "oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/s12864-021-08052-9", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "729c2a155519597eaf98176f28bb8dd74a811a73", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
105299979
pes2o/s2orc
v3-fos-license
TRANSIENT COMBUSTION ANALYSIS OF IRON MICRO-PARTICLES IN A GASEOUS OXIDIZING MEDIUM USING A NEW ITERATIVE METHOD This paper presents analytical solution to the transient combustion analysis for iron micro-particles in a gaseous oxidizing medium using Daftardar – Gejji and Jeffari Method (DJM). Also, parametric studies are carried out to properly understand the chemistry of the process and the associated burning time. Combusting particle thermal radiation effect and its linear density variation with temperature are applied. The generated analytical solution obtained by DJM are verified with an efficient numerical (fourth order Runge – Kutta, RK4) scheme. Results show that DJM is an efficient scheme for the problem. Also, the parametric study performed in this work shows that as the heat realized parameter and the surrounding temperature are increased, the combusting particle temperature increased rapidly until an asymptotic behaviour is attained. This work will be useful in solving to a great extent one of the challenges facing industries on combustion of metallic particles such as iron particles as well as in the determination of different particles burning time. Introduction The combustion of metallic particles and accurate prediction of the burning period of the combustibles have been the major challenges facing the metal processing industries. Combustible dusts which approximates gaseous oxidizing media require an accurate knowledge of their explosion hazards in order to make their importance to different industrial application applaudable. As a result, researchers have ventured into better understanding and modeling the particle and dust combustion. In a recent study, Sun et al. [1] investigated the behaviour of iron particles when simultaneously combusted and suspended in air. In their work, they considered the combustion zone and behavior of each iron particles by employing a powerful high-speed photomicrograph with which they were able to determine which iron particle combust at the combustion zone without gas phase flame. They finally established a relationship between the burning time and diameter of iron particles for moderate particle diameter. Haghiri and Bidbadi [2] applied the principle of flame propagation to study the dynamic behavior of a two-phase mixture which consists of micro-iron particles and air by considering the effect of thermal radiation. They obtained results which show that the considered thermal radiation plays a significant role in the improvement of vaporization process and burning velocity of organic dust mixture, compared with the case where this effect is correspondingly neglected. Hertzberg et al. [3] examined combustible metal dust explosion limits as well as the influence of external agents such as temperature and pressure. They argued that when this combusting particles are properly conditioned and monitored, their merit will be greatly felt especially in industrial applications. Different analytical approaches have been widely employing to obtain close form solution to nonlinear engineering problems such as the tremendous work done by Hatami et al. [4]. In their work, three weighted residual methods were applied to properly predict the transient combustion of iron micro-particles. They concluded that least square method gives the best result when verified with numerical method. He [5] - [9] applied Homotopy perturbation method to different nonlinear engineering problems. 
He also presented different approaches of improving the scheme in order to handle some strongly nonlinear engineering and mathematical problems. Saedodin and Shahababaei [10] applied homotopoy perturbation method (HPM) to study and analyze heat transfer in longitudinal porous fins while Darvishi et al. [11] and Moradi et al. [12] and Ha et al. [13] utilized differential transformation method (DTM) and homotopy analysis method (HAM) to obtain close form solution to the natural convection and radiation in a porous and porous moving fins with temperature dependent property and internal heat generation. Sobamowo et al. [14] applied homotopy perturbation method to analyze convective-radiative porous fin with temperature-dependent, internal heat generation and magnetic field. They presented interesting results and the validation of their work proves the efficiency of the scheme. Also, Yaseen et al. [15] developed an exact analytical solutions of Laplace equation by using DJ method. They employed the iterative method in the treatment of the Laplace equation considering both Diritchlet and Neumann conditions with their results compared with some existing iterative methods. Motivated by the previous works as mentioned above, this paper introduces an analytical method whose close form solutions will be used for predicting and realizing the temperature history of iron particle during and after burning, so DJM is applied. The method agrees excellently with numerical Forth order Runge -Kutta method with minimal error as compared to those of previous works; hence the verification of the schemes. Problem Description and Governing Equation Consider an iron spherical particle, Fig. 1 which is combusted in the gaseous oxidizing medium as a result of high reaction with oxygen which acts as an oxidizer. The assumptions used includes: (c) lumped system assumption is applied; (d) the spherical particle combusts in an ambient medium; (e) interactions with other particles is neglected; (f) forced convection effect are neglected; (g) constant thermo-physical properties for the particle and ambient gaseous oxidizer; (h) particle surface is assumed to be gray; (i) the surface reaction rate is treated as temperature independent with a constant convection heat coefficient; (j) Kirchhoff' slaw is invoked, hence the surface absorptivity (α) and the emissivity (ε) at a given temperature and wavelength are equal; (k) particle density varies linearly with temperature as: (l) ignition temperature is used as the initial condition (T (t=0) = T i ). Methods of Solution Due to the nonlinear terms in Eq. (9), it is very difficult to develop a closed form or an exact analytical solution to the nonlinear equation. Therefore, the common practice is to make recourse to numerical method. However, in recent time, several semi-or approximate analytical methods have been developed to solve nonlinear equations. In this present study, the nonlinear equation in Eq. (9) is be solved analytically using Daftardar -Gejji and Jeffari Method (DJM). Basic Principle: Daftardar -Gejji and Jeffari Method (DJM) As pointed previously, the Daftardar -Gejji and Jeffari Method (DJM) is an approximate analytical method for solving differential equations. However, a closed form series solution or approximate solution can be obtained for non-linear differential equations with the use of DJM. The basic definitions of the method is as follows. 
Consider an equation with a functional of the form with g (t) being a known function and N (A (t)) representing the non-linear component of the general form A (t). It is desired to obtain a series solution of the form The nonlinear operator N of Eq. (11) may be written in a decomposed form as The general form as shown in Eq. (11) may then be written as Hence, with a general solution of the form Method of Solution: Daftardar -Gejji and Jeffari Method (DJM) Recall that the nonlinear governing equation as shown in Eq. (9) may be expressed as Grouping the coefficients, the above equation becomes: re-representing the coefficients, we have where The leading term is obtained from the initial dimensionless ignition temperature as The remaining term of the series solution will be obtained by applying the DJM principle in Eq. (15) on the nonlinear Eq. (9) as shown below. Since the combustion model is: Integrating both sides to obtain the dependent variable for the L.H.S, Using the initial condition, Now, as described in Eq. (15), Using the initial condition, Making necessary substitution, we have Similarly, τ. The resulting series solution is which in expanded form becomes: Results and Discussion Fig. 2 depicts the verification of the analytical scheme used with a numerical forth order Runge -Kutta. The scheme, DJM ascertain a good agreement with the numerical method. In order to visualize the accuracy of the scheme, a super-imposed plot which shows the temperature profile of a 20µm combusting iron particle is inspected as shown in Fig. 2 together with Table. From the figure, it is evident that DJM shows a good agreement with the Numerical scheme and as such is efficient for the problem in concern. Fig. 3 and Fig. 4 depict the effect of the combusting particle diameter on temperature profile and burning rate using DJM. From the graphs, it can be easily seen that particle diameter have evident influence on the temperature profile. A particle with 60µm diameter was observed to possess a higher temperature profile which means that an increase in the combusting particle diameter causes a corresponding increase in the temperature profile as well as the burning time. As a result of this evident impact, the particle diameter may be used as a controlling agent in reducing the hazardous effects that normally propagate from iron particle combustion. Fig. 6 depict the influence of ϵ 1 and ϵ 2 on the temperature profile. From the figures, it can be seen that increasing ϵ 1 and ϵ 2 decreases the combustion temperature with this effect more pronounced with ϵ 2 . The decrease in combustion temperature with a corresponding increase in ϵ 1 and ϵ 2 is as a result of an increase in the radiation heat transfer term in the combustion particle. Fig. 8 depict the influence of the heat realized parameter and the surrounding temperature on the combustion temperature. From the plots, we can conclude that increasing the heat realized parameter and the surrounding temperature increases the combustion temperature. This increase is significant for the heat realized parameter variation than that of the surrounding temperatures except for high values of surrounding temperature.
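Since the governing equation (Eq. (9)) and the intermediate expressions are not reproduced above, the following Python sketch illustrates the DJM iteration itself on a generic, assumed dimensionless energy balance with a heat source, a convective loss term and a fourth-power radiative loss term; the parameter values are arbitrary and are not those of the iron-particle model. The series terms are built with the recursion v_{k+1} = N(v_0 + ... + v_k) − N(v_0 + ... + v_{k−1}) applied to the integral form of the ODE, and the result is checked against a classical fourth-order Runge–Kutta solution, mirroring the verification strategy used in this work.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Assumed illustrative ODE (NOT the paper's Eq. (9)):
#   d(theta)/d(tau) = psi - eps1*theta - eps2*theta**4,   theta(0) = 1
psi, eps1, eps2, theta0 = 1.0, 0.4, 0.1, 1.0
def rhs(theta):
    return psi - eps1 * theta - eps2 * theta ** 4

tau = np.linspace(0.0, 3.0, 601)

# DJM applied to the integral form theta(tau) = theta0 + int_0^tau rhs(theta(s)) ds.
def N(u):
    """Nonlinear integral operator, evaluated numerically on the tau grid."""
    return cumulative_trapezoid(rhs(u), tau, initial=0.0)

partial = np.full_like(tau, theta0)      # v0: the initial condition
prev = np.zeros_like(tau)
for _ in range(20):                      # successive DJM terms
    cur = N(partial)
    partial = partial + (cur - prev)     # add v_{k+1} = N(sum_k) - N(sum_{k-1})
    prev = cur
theta_djm = partial

# Classical fourth-order Runge-Kutta reference on the same grid.
theta_rk4 = np.empty_like(tau)
theta_rk4[0] = theta0
h = tau[1] - tau[0]
for i in range(len(tau) - 1):
    y = theta_rk4[i]
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * h * k1)
    k3 = rhs(y + 0.5 * h * k2)
    k4 = rhs(y + h * k3)
    theta_rk4[i + 1] = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

print("max |DJM - RK4| =", float(np.max(np.abs(theta_djm - theta_rk4))))
```

Within this assumed toy model, increasing the source term raises the asymptotic temperature while increasing the loss coefficients lowers it, which is qualitatively consistent with the parametric trends discussed above.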
2019-04-10T13:12:17.596Z
2018-09-26T00:00:00.000
{ "year": 2018, "sha1": "27724c6fe3b470f496be13b978b0f1bab8934847", "oa_license": null, "oa_url": "https://jcem.susu.ru/jcem/article/download/162/130", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a1255f0dc9aec9b19efdfa77f1ce1355fe5bef34", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
238087948
pes2o/s2orc
v3-fos-license
ON THE CHALLENGES OF MOBILITY PREDICTION IN SMART CITIES The mass of data generated from people’s mobility in smart cities is constantly increasing, thus making a new business for large companies. These data are often used for mobility prediction in order to improve services or even systems such as the development of location-based services, personalized recommendation systems, and mobile communication systems. In this paper, we identify the mobility prediction issues and challenges serving as guideline for researchers and developers in mobility prediction. To this end, we first identify the key concepts and classifications related to mobility prediction. We then, focus on challenges in mobility prediction from a deep literature study. These classifications and challenges are for serving further understanding, development and enhancement of the mobility prediction vision. INTRODUCTION In the recent past, the appearance of smart cities and internet of things (IoT) systems along with new technologies (e.g., mobile networks-MN, sensor networks), and new tools (e.g., smartphone), has led to an impressive growth of amount data and information produced. This amount of data tends to multiply wildly in the case of smart cities which include at the same time these new concepts, technologies and tools. Indeed, a smart city features the utilization of information and communication technology (ICT) infrastructure, human resources, social capital, and environment resources for economic development, and high quality of human life. Thus, analysis and mining of sensed data from dynamic cities is an important step towards making a city smart (Pan et al., 2013). Among the flowing data in such a city, those linked to individual movements (mobility data) are very interesting for a large community of researchers and developers, especially in the mobility prediction field. Predicting a mobile user location is an inherently interesting and challenging problem in several domains such as the development of location-based services, personalized recommendations, suspicious target tracking, intelligent transportation and mobile communication systems (MCSs). For example, in MCSs, location prediction has received increased attention driven by applications in location management, call admission control, smooth handoffs, and resource reservation for improved quality of service (Samaan, Karmouch, 2005). However, predicting mobility requires the availability of a large amount of data from very heterogeneous sources, especially when it comes to a smart city. Two levels of collection can be distinguished in the data storage location. The first level concerns the data acquisition by a system or by an application from a mobile device, such in the case of the Mobile Crowd Sensing and Computing (MCSC) paradigm or even with the use of a recommendation application on a smartphone. In this case, the data are stored in the mobile device or in an external storage place related to the application. The second level is related to a collection of data from a storage device used by a system, such as MCSs, smart cards management systems, etc. In this case, the data, in particular mobility data, are stored in specific equipment (e.g., HLR 1 , VLR 2 , sensor nodes, etc.) and are collected directly from this equipment. However, because of the privacy concerns of this type of data, they may be subject to constraints and conditions when accessing to those data. A major challenge is to access and recover the. 
In addition, data are often stored in a raw state such as log files (Zheng et al., 2010). So, before being exploited, data stored in log files must be transformed into other formats like GPX or PLT format. Once data is collected and stored, prediction requires a model coherent with the prediction's application domain and able to provide the best mobility prediction in terms of accuracy, cost, etc. A prediction model can be produced based on one of the usual techniques dedicated to prediction such as Markov chains-MCs (Amirrudin et al., 2013a;Qiao et al., 2015), Machine Learning-ML (Ozturka et al., 2019), Bayesian Networks-BNs (Dash et al., 2015), and data mining-DM techniques (Mcinerney et al., 2013), where most of them are based on learning from previous data to predict user mobility. Although the mobility prediction has been the subject of several research works which sometimes gave acceptable results, in terms of precision (Amirrudin et al., 2013b;), and cost, certain issues remain open and accept new contributions. In this paper, we aim to focus on the main concepts related to the mobility, data required for mobility prediction and on related works on mobility prediction. We also aim to disclose issues allowing researchers and developers to orient themselves towards open questions on the mobility prediction. The remainder of this paper is organized as follows: Section 2 presents the basics of mobility prediction. Section 3 gives an overview on the mobility prediction works. In section 4 we propose a classification for mobility prediction. In Section 5 we focus on a set of challenges related to mobility prediction. And in Section 6 we give a conclusion and some perspectives. DEFINITIONS AND KEY CONCEPTS In this section we introduce the key points of mobility prediction. We start by a definition and a classification of mobility. Then we focus on the data required for mobility prediction. Mobility In general, the definition of mobility can be obtained from a dictionary or an encyclopedia. In Larousse 3 editions, for example, the definition can be translated as follows: "A Character of what is susceptible to movement, of what can move or be moved, change place, function." In the context of communication networks, we carry over the definition given by Samuel Pierre in his book (Samuel, 2003): "In the domain of communication networks, mobility can be defined as the ability to access, from any place, all the services normally available in a fixed and wired environment such as a home or an office. These services include, among other things, the possibility of conducting a telephone conversation while driving a car, being reached from a traditional telephone or an IP address anywhere in the world, or receiving e-mail, faxes or voice messages while traveling abroad." From our viewpoint, in the context of a smart city, mobility can be defined as any movement an entity undergoes over time, where an entity can be an object or an individual (a person). This movement can vary from a simple action (move the hand, take a seat, get up...) to a real trip (walk from a point A to point B). In reality, in such a case, mobility is a relative notion in the sense that it is closely linked to the size of the environment in which we want to define it. For example, if the environment consists of a city, the mobility is considered as being the movements from a departure point to an arrival point. 
However, when the environment is restricted to a limited space (house, bedroom…), the mobility concerns elementary actions taking place in this space. The movements can also be repeated over time either in the same way or in different ways: we therefore speak about random or regular movements. Mobility classification Based on the size of the environment in which the movement takes place, on the way in which the movement is carried out and on works dealing with movement, we distinguish two types of mobility classification: according to the distance travelled and according to the nature of the movement (see Figure 1). Classification according to the distance travelled (environment size): based on the environment size, we distinguish: extended mobility and restricted mobility. Extended mobility is characterized by movements spanning a long distance such as the movement of an individual from his home to his workplace. Restricted mobility is characterized by movements taking place in a limited area (an office, a bedroom, etc.). Most of the research works dealing with mobility, in particular mobility prediction (Jiang et al., 2016;Liou, Huang, 2005) are directed towards extended mobility. However, some 3 https://www.larousse.fr/dictionnaires/francais/mobilité/51890 works which are interested in restricted mobility exist (Almeida, Azkune, 2018). In a smart city, restricted mobility can be related to connected objects (IoT). Classification according to the nature of the movement: refers to the way in which the movement recurs. Here, we distinguish regular mobility and random mobility. Regular mobility concerns movements reproducing in the same way (e.g. taking the same path to get to work). In contrast, random mobility concerns either new movements or movements that do not reproduce in the same way (for e.g., taking two different paths to get home). The number of works directed towards regular mobility (Nadembega et al., 2014) is greater compared to the number of works dealing with random mobility (Liu et al., 1998), or even compared to those treating both types of mobility (Liu, Maguire, 1996). Also, for mobility prediction, the accuracy is better in the case of regular mobility (Jiang et al., 2016). Random mobility is, therefore, a real challenge in the area of mobility prediction. Mobility prediction data Basic prediction models rely on historical mobility data to provide predictions (Anisetti et al., 2011). Other models rely on contextual data, in addition to historical mobility data (Abu-Ghazaleh, Alfa, 2009). Another category includes models that only use contextual data (Samaan, Karmouch, 2005). However, little works has been done in this latter category. In this section, we aim to discover provenance, storage and exploitation of mobility data. Also, we define contextual data, and give an overview about their use for mobility prediction. Mobility data: provenance, storage and exploitation The study of any mobile object or individual's mobility requires mainly a set of data related to its location at different moments. In a smart city, the mass of mobility data circulating in the city is constantly increasing. However, before being used, these data, coming from different sources (mobile devices, vehicles, magnetic cards, sensors), must be collected and processed using specific tools, techniques and technologies (GPS, RFID, etc.). 
Data provenance

In (Pan et al., 2013), the main data sources have been grouped into four categories, namely mobile devices (smartphones, laptops...), vehicles equipped with global positioning system (GPS) devices, smart cards (bank cards and transport cards), and floating sensors. Other sources exist in a smart city, such as cameras installed in the city, providing a large quantity of videos. Acquiring data from these different sources differs depending on the source. The most used techniques and technologies are GPS, WiFi, GSM and radio frequency identification (RFID). Based on (Pan et al., 2013), we summarize in Table 1 these data sources and the corresponding acquisition techniques.

Data storage

Data can often be collected from specific equipment constituting part of the system architecture to which the different data sources are linked. For example, mobile devices are necessarily linked to a mobile network, and their location data are stored in specific equipment (HLR, VLR, etc.). Also, in the case of sensors belonging to a specific sensor network, the location is stored in the sensors themselves, in the base stations, or even in the processing centers. Because of the huge amount of data, Edge computing solutions and Artificial Intelligence techniques are to be considered.

Data exploitation

Collected data are initially in a raw state. In order to use them, they must be transformed and saved in an exploitable format. For example, in the GeoLife project (Zheng et al., 2010), mobility data related to individuals' trajectories are generated in the form of GPS logs and are transformed and saved in PLT files. The main fields recorded in a GPS log file are: longitude, latitude, altitude, current date and time of day. The data in the GPS log files are saved in NMEA format. To use them, they must be converted to other formats, such as the GPX or PLT format, using tools, software or even websites, such as the Logcat utility, the GPSBabel software or the GPS Visualizer website. Also, to exploit data, consideration of data models (e.g., the FIWARE data model) is very important in the context of a smart city. Figure 2 illustrates the steps for producing usable data.
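As a concrete illustration of this transformation step, the sketch below parses one record of a PLT trajectory file of the kind produced in the GeoLife project. The field layout follows the commonly documented GeoLife format (latitude, longitude, an unused flag, altitude in feet, a fractional day count, date, time); treat it as an assumption to be checked against the actual files.

```python
from datetime import datetime

def parse_plt_line(line):
    """Parse one record of a GeoLife-style PLT file into a dict
    (field layout assumed; see lead-in above)."""
    lat, lon, _, alt_ft, _, date, time = line.strip().split(",")
    return {
        "latitude": float(lat),
        "longitude": float(lon),
        "altitude_m": float(alt_ft) * 0.3048,  # feet -> meters
        "timestamp": datetime.strptime(f"{date} {time}", "%Y-%m-%d %H:%M:%S"),
    }

record = parse_plt_line("39.906631,116.385564,0,492,40097.5864583333,2009-10-11,14:04:30")
print(record["timestamp"], record["latitude"], record["longitude"])
# 2009-10-11 14:04:30 39.906631 116.385564
```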
Contextual data

The context has been defined in several ways. The definition most often cited in the literature is that given in (Abowd et al., 1999): "Context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves." Zimmermann et al. (2007) propose a new definition specifying five categories of contextual information. Dey's definition is proposed in the context of interaction between a user and an application; however, nowadays, with smartphones, the system is able to give relevant information to the user without explicit interaction. According to Zimmermann, "Context is any information that can be used to characterize the situation of an entity. Elements of description of this context information fall into five categories: individuality, activity, location, time, and relations." Figure 3 depicts an entity as well as its different contextual categories according to the vision of Zimmermann et al. When it comes to mobility, the context concerns any information that can inform about an individual's movements or that influences those movements. In this perspective, the individual's movements depend on several parameters, such as profile (age, profession, preferences, etc.), time, location, environment (weather information, information on road traffic, etc.), or the means of travel (on foot, by car, by public transport, etc.). For example, to get to work, a person can take different paths under different conditions (rain, obstacle, etc.). Thus, to predict an individual's movements, considering these conditions is necessary in order to improve the prediction accuracy. Contextual information can be obtained from a variety of sources. Benouaret (2017) distinguishes three types of contextual information: explicit, implicit and inferred. For explicit sources, context information is already included in the data or directly requested from the user. The most obvious example here corresponds to a user registering on a system, which provides personal information. For implicit sources, information is obtained from the data or the environment in which a user is situated, without explicitly asking them for this information. For example, we can get the geographic location of a user using an application installed on their smartphone. Concerning inferred sources, information is obtained using data exploitation and exploration methods, such as data mining techniques.

OVERVIEW OF THE RELATED WORKS

The establishment of a prediction model or algorithm represents a key point for mobility prediction. In this section, we summarize in Table 2 some mobility prediction works carried out based on Markov models (standard and hidden MCs), BNs (standard and dynamic) and ML techniques (Artificial Neural Networks, ANN; Deep Learning, DL; Recurrent Neural Networks, RNN). We also give the main conclusions obtained in the cited works. Table 2 is established on the basis of a study described in (Zhang, Dai, 2018) summarizing some works dealing with mobility prediction. The following criteria are considered: objective, technique used, movement type, context consideration, precision, and complexity/costs generated (calculation time, memory space...). The "technique used" and "precision" criteria are reported from (Zhang, Dai, 2018). The remaining criteria are new and are useful for identifying mobility prediction issues. From our study of the works cited in Table 2, we retain the following:

a- All works are oriented towards the application of mobility prediction to mobile networks (except Qiao et al., 2015).

b- Markov models (standard and hidden) are widely used in the field of mobility prediction. Standard MCs are simple and easy to implement (Zhang, Dai, 2018), but their performance in terms of precision is often subject to certain constraints (transition matrix values (Amirrudin et al., 2013a), movement type (Jiang et al., 2016), etc.). Hidden Markov models are also efficient in terms of accuracy (Zhang, Dai, 2018): about 53% in (Lv et al., 2014), sometimes exceeding 80% in (Qiao et al., 2015), and even greater than 90% in (Amirrudin et al., 2013b). However, their complexity can increase with the number of hidden states and the size of the history (Lv et al., 2014), as in Ultra-Dense mobile Networks (UDNs), because of the complexity of the transition matrix, which considers both hidden and observable states (Zhang, Dai, 2018).

c- Standard or hidden Markov model-based approaches often use a clustering step to determine the regions of interest (see the sketch after this list). In (Gambs et al., 2012) and (Jiang et al., 2016) the authors used the DJ-Cluster algorithm; in (Qiao et al., 2015) the authors used a clustering analogous to DBSCAN (Zhang et al., 2009).

d- Bayesian networks gave good results in terms of prediction accuracy, sometimes exceeding 75% (Dash et al., 2015). However, with the dense deployment of small cells (UDNs), the cell environment would be more complex, which would make it more difficult to build a BN (Zhang, Dai, 2018).

e- Neural networks rely on well-studied algorithms and are known for their adaptive and self-organizing characteristics. However, these algorithms are also known for their considerable computational complexity, especially in the case of many hidden layers, which require a lot of learning time to adjust the weights of the neurons. In addition, when using an ANN in the field of MCSs (in particular UDNs), the procedures for acquiring user positions are influenced by the deployment of this type of network, position being a paramount parameter for an ANN in the case of mobility prediction (Zhang, Dai, 2018). Certain works based on NNs provided acceptable results, such as the models proposed in (Liou, Huang, 2005) and (Wickramasuriya et al., 2017), in which the precision exceeded 95% and reached 98%, respectively. In contrast, the results provided by other works are not very satisfactory compared to the costs generated by such a technique, like the model proposed in (Parija et al., 2013), which provides results only for regular movements.

f- Deep learning is a family of ML models based on deep, multi-layer learning. Currently, DL is present in several fields, notably the medical sector (image processing and classification), computer-assisted surveillance (facial recognition) and mobility prediction (Ozturka et al., 2019). In turn, models based on DL have also shown acceptable results, such as in (Ozturka et al., 2019).
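To illustrate the clustering step mentioned in item c, the following sketch groups raw GPS points into regions of interest with DBSCAN. It assumes scikit-learn is available; the coordinates and parameter values are invented, and a real system would project coordinates to meters or use a haversine metric rather than raw degrees.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy GPS points (lat, lon): two dense areas plus one outlier.
points = np.array([
    [48.8580, 2.2940], [48.8581, 2.2941], [48.8579, 2.2939],  # area A
    [48.8606, 2.3376], [48.8607, 2.3375], [48.8605, 2.3377],  # area B
    [48.9000, 2.4000],                                         # noise
])

# eps is in degrees here only for simplicity of illustration.
labels = DBSCAN(eps=0.001, min_samples=3).fit_predict(points)
print(labels)  # e.g. [0 0 0 1 1 1 -1]; -1 marks noise

# Each cluster label can then serve as a "region of interest" symbol
# in the Markov-chain state space sketched earlier.
```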
In the next section we propose a classification of mobility prediction according to our study (state of the art) of the works dealing with mobility prediction.

MOBILITY PREDICTION CLASSIFICATION

In the absence of a reference classification of prediction models and of standard criteria on which to base a classification of mobility prediction, we propose a classification based solely on existing prediction works. For that, we define the following four criteria: the use of history, the technique used, the use of deduction rules, and the data type used. According to these criteria, we propose the following classifications (Figure 4).

Figure 4. Mobility prediction classification.

Classification according to the use of history: we distinguish historical-based models (Boc et al., 2011; Abu-Ghazaleh, Alfa, 2009) and knowledge-based models (Samaan, Karmouch, 2005). Historical-based models are models that use historical data for predicting mobility. Knowledge-based models use other data, such as future planning, future goals, etc., to predict mobility. Classification according to the use of deduction rules: we distinguish direct prediction models and indirect prediction models. Direct prediction models are models that use the data of the person concerned by the prediction. Most of the works cited in this paper are direct-prediction oriented. Indirect prediction models are models whose prediction is based on deduction rules, such as profile similarity (Chamek et al., 2012).
Classification according to the data type used: we distinguish mobility data-based models (Anagnostopoulos et al., 2012) and contextual data-based models (Göndör et al., 2013). Mobility data-based models are those using only mobility data in order to predict mobility. Contextual data-based models are models whose prediction takes into account other data, qualified as contextual data (such as environment information and means of travel), in addition to mobility data. In the next section, we focus on the challenges of mobility prediction by identifying six issues obtained from our analysis of works dealing with mobility prediction.

MOBILITY PREDICTION CHALLENGES IN SMART CITIES

In this section, we present open questions related to mobility prediction that we consider important according to our study of mobility prediction. These challenges concern either issues that have already been treated but not resolved correctly, or issues that have not been treated yet. These challenges are organized based on trajectory size (whole trajectory or a segment only), movement type, context consideration, application domains, evaluation of models, and privacy of user data.

Challenge 1 - Predicting a trajectory or a segment of a trajectory

Prediction of a single transition, a sequence of a trajectory, or even an entire trajectory of a mobile individual depends mainly on the use of historical data of their mobility. Indeed, some works (Anisetti et al., 2011) have opted for the use of these data, while others (Lytrivis et al., 2011) have not relied on historical data. Historical-based works are precise but suffer from high overhead costs linked to constant monitoring, which requires a more detailed analysis of the history using data mining and knowledge discovery techniques. Non-historical-based works can predict only the final destination and not the trajectory towards this destination (Nadembega et al., 2014). These works do not require historical mobility data, but they require other essential data, such as contextual data, and can also generate very high costs linked to an immense amount of data which must be collected and processed (e.g., preferences, final objective, and planning). Also, other works (Anagnostopoulos et al., 2012) have proposed models that consider the use of historical mobility data together with current network conditions (a kind of hybridization between the two types mentioned above). These models suffer from some limitations: they offer either short-term predictions (one transition or two at most), or a long-term prediction with very significant additional costs linked to historical data and processing. Thus, the challenge here is to be able to propose a long-term prediction model at the best cost (one that ensures the best ratio of long-term prediction to costs incurred).

Challenge 2 - Movement type

Exploring the literature on mobility prediction works allowed us to conclude that most of the works on mobility prediction consider the regular movements of individuals. In (Samaan, Karmouch, 2005) the authors claim that most of the existing approaches assume that the user travels according to a previously known pattern with regularity.
Works based on the assumption of regularity of movement offer acceptable results in terms of prediction accuracy (between 50% and 70% precision) (Jiang et al., 2016; Lv et al., 2014...), sometimes even very satisfactory ones (up to 90% accuracy) (Amirrudin et al., 2013b; Liou, Huang, 2005). These works were tested even with random movement; according to the authors of these models, prediction accuracy decreases when the regularity of the movement decreases. In other words, these models provide poor results when the movement is random. Although the assumption of regularity of movement is valid in many cases due to people's daily routine, the possibility that an individual's movements may be random should not be excluded. For example, a tourist who visits a country for the first time can take several different paths, especially during the first days after his arrival. Certain works, such as (Liu et al., 1998), consider random movement and propose models addressing this type of movement. To the best of our knowledge, the number of these works remains negligible compared to the models dealing with regular movement. Also, by analyzing the results of the last cited work, we note that the prediction precision is not satisfactory, although the authors claim to have obtained good results. Another category of works, like (Liu, Maguire, 1996), considers both regular and random movement at the same time; the results are acceptable only for regular movements. Thus, a major challenge is to be able to propose a model making it possible to predict the mobility of a user who habitually moves in a random manner (tourist, transporter, etc.).

Challenge 3 - Context-based prediction

Most of the proposed works in the field of mobility prediction, such as (Ozturka et al., 2019; Wang et al., 2019), do not take into account contextual information or, in the best of cases, consider only some standard information, such as day of the week (work day or weekend day), date, time, speed, and profile (preferences, goals...) (Dash et al., 2015; Chamek et al., 2012; Du et al., 2011). For example, in (Chamek et al., 2012) the profile is considered; in (Dash et al., 2015) only time is considered; in (Du et al., 2011) time (hour of the day) and day of the week are considered. This second category of works gave better results in terms of precision. Besides this standard information, there is other information, related to more important parameters that directly influence an individual's movements, which to our knowledge has so far not been taken into account, except in the work described in (Göndör et al., 2013), which considers meteorological information. The main parameters having a direct impact on the movements of individuals relate to environmental (meteorological) information, means of travel, and road traffic. If we take the example of an individual who goes to work: in rainy weather, this individual will probably take a path different from his habitual path (a shortcut) to get to work. Also, if he moves by walking, he may take a different path than when he moves by car or by bicycle. From this example taken from our daily life, it is very clear that contextual information plays an important role in predicting mobility and that certain information is more important than other. To reinforce this viewpoint, certain recent works dealing with the problem of mobility prediction, such as (Wang et al., 2019), plan as future work to take contextual information into account, in particular meteorological information.
Thus, the challenge here consists in proposing a solution which considers contextual information, in particular meteorological information (weather), information related to the means of travel, and information related to road traffic. However, collecting this type of data is in itself another challenge. In addition, it is very difficult to consider all of these types of contextual information at the same time.

Challenge 4 - Application domains

Several applications of mobility prediction have been cited in the literature. According to (Wang et al., 2019), prediction can be applied to personalized recommendations, suspicious target tracking and intelligent transportation. Prediction also plays a big role in the field of mobile networks (Samaan, Karmouch, 2005; Liu et al., 1998...). According to (Samaan, Karmouch, 2005), the importance of mobility prediction techniques can be seen both at the network level (handoff management, resource allocation...) and at the service level (pushed online advertising, mapping/route guidance...). Also, in (Gambs et al., 2012), the authors proposed an extended prediction model with several potential applications, such as the evaluation of geo-privacy mechanisms, the development of location-based services anticipating the next movement of a user, and the design of location-aware proactive resource migration. Although the fields of application of mobility prediction are numerous, to the best of our knowledge almost all works have been directed towards the application of mobility prediction in the field of MCSs, such as mobile networks (Ozturka et al., 2019; Nadembega et al., 2014), except for very few works, such as the work carried out in (Almeida, Azkune, 2018), in which mobility prediction (for elementary actions in a limited space) was applied to detect behavioral abnormalities in elderly people. Therefore, the challenge here consists in proposing solutions (models, algorithms...) for mobility prediction which are applicable to other fields, such as personalized recommendation systems. On the other hand, although the proposed prediction models are oriented towards mobile networks, some of these models do not support certain types of networks, such as UDNs, which currently represent a new trend, as in 5G networks. UDNs are characterized by their high number of cells (base stations) (Zhang, Dai, 2018) and their management complexity (Samarakoon et al., 2016). This increases the complexity of certain prediction models and algorithms considerably (Zhang, Dai, 2018). Consequently, these models are not recommended for this type of network because of the costs involved. Indeed, in the case of Markov chain-based models, the complexity of the transition matrix (the main parameter in Markov models (Amirrudin et al., 2013a)) increases with the growth of the number of cells, in particular in the case of hidden Markov models. Also, in the case of neural network-based models (characterized by their computational complexity (Zhang, Dai, 2018)), complexity increases with a large number of cells (Zhang, Dai, 2018). Thus, proposing a solution that supports this type of network with lower complexity (cost) constitutes another challenge.

Challenge 5 - Evaluation of models

Evaluation of mobility prediction solutions can be done using simulations (Ozturka et al., 2019; Wang et al., 2019), often based on datasets containing data about individuals' movements.
However, on the one hand, these datasets may already exist, and the data they contain may not be adequate for the proposed solution, as is the case for context-based solutions. On the other hand, some datasets set up for evaluations (Samaan, Karmouch, 2005; Göndör et al., 2013) are not very substantial in terms of the amount of data they contain (Göndör et al., 2013). In addition, datasets must often be divided into two parts: a first part for learning and a second part for validation, which is based on the comparison between the second part of the dataset and the prediction results. For this, the datasets must be large enough that they can be divided; otherwise, the evaluation may not be 100% correct (Göndör et al., 2013). A major challenge consists in the use or creation of a dataset which is as complete as possible with regard to the data necessary for the proper functioning of the proposed solution. Another way of evaluating consists of a real evaluation (Chon et al., 2013), based on the participation of people who are available for real-time interaction (feedback), allowing the results of mobility prediction to be compared with the real future movements of individuals. This approach allows for a more reliable evaluation (Göndör et al., 2013). To realize this kind of evaluation, the challenge consists in ensuring the participation of a large number of people. One way to achieve this objective can be the realization of a mobile application (Göndör et al., 2013) which is used by a group of people and which allows the creation of a dataset by recording the data necessary for prediction. These same people must also participate in the evaluation of the prediction results of their movements.

Challenge 6 - Confidentiality and privacy of user data

Mobility prediction solutions are mainly based on the historical data of individuals' movements, on their profile data, on their social relationships, or even on their schedules. These data are directly linked to the privacy of individuals and need a certain degree of confidentiality. Indeed, the disclosure of personal data related to an individual's movements or schedule can have a negative influence on this individual. In other words, anyone who has information about an individual at their disposal can harm the daily life of this individual, from a simple tracking action to physical damage (theft, assault, etc.). Thus, confidentiality in the field of mobility prediction mainly concerns the confidentiality of the historical data of individuals as well as the information that can be deduced from these data. In (Pan et al., 2013), the authors emphasized the confidentiality aspect of the trace data of individuals moving in a smart city and concluded that the disclosure of personal identity could occur during the collection, publication and use of trace data. With regard to collection, localization techniques may record user or device IDs and cause risks. Location by GPS is more secure than GSM, WiFi, Bluetooth and RFID, because central servers do not need to know device IDs. Regarding publication, personal identity could be inferred from published locations, in spite of having been removed/anonymized from the trace data records. In terms of usage, traces may expose unwanted private information to personalized services and applications. In such a case, an anonymizing proxy is to be trusted to store, manage, and protect user locations, and to communicate between applications and users.
The challenge is to keep the fidelity of the data for applications while protecting privacy (Pan et al., 2013).

CONCLUSION

In this paper, we have presented some key points on mobility prediction and the related challenges, in order to give readers a guideline for new contributions. The first part of the paper was devoted to the key concepts of mobility prediction, through an overview of mobility concepts and classifications, the data required for mobility prediction, and some works dealing with mobility prediction. The second part focused on a set of challenges that are open issues related to mobility prediction. These challenges were extracted from our deep literature study of mobility prediction works. The second part can be very useful for a large community of researchers and developers because, to the best of our knowledge, there is no work that groups all these challenges at the same time. From this work, we have noted that, like mobility data, contextual data are very important for predicting mobility. Also, some issues, such as predicting random mobility, context-based prediction, long-term prediction, and the definition of appropriate datasets for the evaluation of models, are still open to new contributions. Mobility prediction has so far been applied almost exclusively to the mobile networks (MNs) domain; consequently, it would be interesting to apply it to other domains (e.g., recommendation systems). Also, no work has addressed the confidentiality of individual mobility data, so it would be interesting to explore this crucial point. In the future, we plan to propose a solution (model, method or algorithm) that tackles the main challenges mentioned.
The role of implicit perceptual-motor costs in the integration of information across graph and text

Strategies used to gather visual information are typically viewed as depending solely on the value of the information gained from each action. A different approach may be required when actions entail cognitive effort or deliberate control. Integration of information across a graph and text is a resource-intensive task in which decisions to switch between graph and text may take into account the resources required to plan or execute the switches. Participants viewed a graph and text depicting attributes of two fictitious products and were asked to select the preferred product. Graph and text were presented: (1) simultaneously, side by side; (2) sequentially, where the appearance of graph or text was triggered by a button press; or (3) sequentially, where the appearance of graph or text was triggered by a saccade, thus requiring cognitive effort, memory, or controlled processing to access regions out of immediate view. Switches between graph and text were rare during initial readings, consistent with prior observations of perceptual "switch costs." Switches became more frequent during re-inspections (80% of the time). Switches were twice as frequent in the simultaneous condition as in either sequential condition (button press or saccade-contingent), showing the importance of perceptual availability. These results show that strategies used to gather information while reading a graph and text are not based solely on information value, but also on implicit costs of switching, such as effort level, working memory load, or demand on controlled processing. Taking implicit costs into account is important for a complete understanding of the strategies used to gather visual information.

Introduction

A fundamental principle governing visuomotor activity is to achieve the desired goal while minimizing the costs associated with planning or executing the actions (Shenhav et al., 2017; Wolpert & Landy, 2012). Costs may be specified explicitly, for example, as rewards or penalties attached to different possible outcomes (Constantino & Daw, 2015; Dean, Wu, & Maloney, 2007; Trommershäuser, Maloney, & Landy, 2008; Wolfe, 2013). Often, however, the costs that affect the choice of strategy are implicit. For example, implicit costs may include the time required to complete the task, even when minimization of time is not a requirement (Araujo, Kowler, & Pavel, 2001; Moher & Song, 2014). Costs may also include the demands made on working memory or other limited cognitive resources (Ballard, Hayhoe, & Pelz, 1995; Epelboim & Suppes, 2001; Hayhoe, 2017; Leider & Griffiths, 2017). The present study examines the role of the implicit costs associated with the actions needed to perform a high-level visual task, namely, reading a graph and its accompanying text. The importance of implicit costs was recently discussed by Shenhav et al. (2017). They argued that the mental effort required to plan actions or to make decisions is "inherently aversive or costly" (p. 102), where mental effort was defined broadly to include the management or control of limited pools of cognitive resources. According to their view, limits in the capacity for effortful, controlled mental operations lead to preferences for strategies that reduce the expenditure of mental effort, even when such strategies do not lead to measured improvements in the performance of the task.
Previous studies of the strategies used during visuomotor tasks have supported the view that implicit costs, such as time or effort (mental or physical), are taken into account when selecting the strategy. Moher and Song (2014), for example, studied reaching movements made to a target in the presence of a nontarget. When the separation between the target and the nontarget was small, reaches were initiated earlier and trajectories contained midcourse corrections. Increasing the separation led to the opposite pattern, namely, longer latencies and fewer corrections. Moher and Song (2014) suggested that the strategies took into account that corrective movements would have higher biomechanical costs and take more time when separations were large. Analogous results have been obtained in oculomotor tasks that elicit the so-called "center of gravity" saccades, which are short-latency saccades that land near the center of a configuration of a target surrounded by one or more nearby distractors (Coëffé & O'Regan, 1987; Findlay, 1982; He & Kowler, 1989; Ottes, Van Gisbergen, & Eggermont, 1985). Coëffé and O'Regan (1987) offered a rational basis for center-of-gravity saccades by showing that targets surrounded by nearby distractors could be localized sooner with a rapidly initiated saccade to the center of the configuration, followed by a correction, than with a single, more accurate, longer-latency saccade (see also Cohen, Schnitzer, Gersch, Singh, & Kowler, 2007; Wu, Kwon, & Kowler, 2010). Previous work has also shown that implicit costs in the form of time, mental effort, or demand on limited processing resources influenced the planning of gaze shifts during different types of visual tasks. During visual search, for example, local cues that provided the searcher with useful information about the location or the value of targets were found to be neglected when taking the cue into account would have prolonged fixation times or risked diverting processing resources away from other aspects of the search task (Araujo et al., 2001; Hooge & Erkelens, 1998, 1999; Navalpakkam, Koch, Rangel, & Perona, 2010; Wu & Kowler, 2013). By contrast, cues woven into the semantic content of the display, which presumably can be noticed without adding to processing time or processing load, influenced scanning strategies (Koehler & Eckstein, 2017; Malcolm & Henderson, 2010; Neider & Zelinsky, 2006; Torralba, Oliva, Castelhano, & Henderson, 2006; Wu, Wick, & Pomplun, 2014). Other studies showed that during visual problem-solving tasks, increasing the time or effort required to plan or carry out the gaze shifts, either by increasing the distance between critical locations (Ballard et al., 1995) or by creating long artificial delays (750 ms) between the arrival of gaze at a location and the appearance of visual information (Kibbe & Kowler, 2011), led to less reliance on eye movements and a greater reliance on memory to retrieve the contents of previously examined locations. The studies reviewed above suggest that understanding strategies of visual information-gathering requires considering the implicit costs of planning or executing the actions. In keeping with previous arguments (e.g., Kool, McGuire, Rosen, & Botvinick, 2010; Monsell, 2003; Shenhav et al., 2017; Wolpert & Landy, 2012), we define implicit costs broadly to include time or motor effort, as well as the use of controlled, rather than automatic, processing.
The current understanding of the role of implicit costs in information gathering is limited because many prior studies of the strategies used to gather visual information from visual displays used actions (gaze shifts across the displays) that were relatively automatic and effortless. Thus, these studies focused solely on the benefits due to the visual information gained (Eckstein, 2011; Epelboim & Suppes, 2001; Najemnik & Geisler, 2005; Renninger, Verghese, & Coughlan, 2007; Semizer & Michel, 2017). The present study examines strategies, and the role of implicit costs, when more effortful and demanding actions were required.

Present study

The present study examined the role of implicit costs connected to both motor effort and perceptual availability during the performance of a high-level and frequently encountered visual information-gathering task, namely, reading a graph and its accompanying text. The present study differs from most prior research on the role of implicit costs connected to visual information-gathering in that the prior work used tasks that operated over relatively short time scales (several seconds) and had singular, compact goals, such as finding one or more specified targets (e.g., Araujo et al., 2001; Ballard et al., 1995; Coëffé & O'Regan, 1987; Constantino & Daw, 2015; Hooge & Erkelens, 1998, 1999; Kibbe & Kowler, 2011; Navalpakkam et al., 2010; Wolfe, 2013; Wu & Kowler, 2013). By contrast, many of the information-gathering tasks people perform routinely require thinking and interpretation. Such tasks (and reading graphs is a good example) make demands on controlled processing and rest on accumulating and integrating information over relatively long time scales (Carpenter & Shah, 1998). With such higher-level task demands, there are two distinct and opposing ways of viewing how the implicit costs (e.g., time or mental effort) connected to planning or performing the actions could influence strategies. First, the need to develop a coherent interpretation of the visual display may dominate, and thus dictate what visual information is sought and when it is sought, regardless of the costs associated with the actions needed to acquire the information. Alternatively, and in accord with the arguments of Shenhav et al. (2017), the resources needed to interpret the displays may compete with the resources required to gather the information, thus encouraging a strategy in which more effortful or resource-consuming actions are avoided. The goal of the present study is to find out whether and how strategies of switching between viewing a graph and its accompanying text are influenced by changes to the types of actions required to switch between these two sources of information. Reading a graph is a demanding task requiring the accumulation of, and memory for, selected details to make decisions about the meaning of the material (Carpenter & Shah, 1998; Michal & Franconeri, 2017; Shah & Freedman, 2009). Graphs, such as those encountered in books, websites, or journal articles, are typically accompanied by some descriptive or explanatory text, where readers are expected to develop their own strategies of examining both the graph and text, including decisions about when to switch between them. Switches between modalities or attended features can be costly and aversive, even in relatively simple visual tasks (Kool et al., 2010; Monsell, 2003).
Thus, any switches between graph and text may carry implicit costs even when the actions required to make the switch, such as gaze shifts between simultaneously visible regions, are fairly effortless and relatively automatic (Ross & Kowler, 2013; Wang & Pomplun, 2012). Costs attached to making the switches may be greater when the graph and text are available sequentially rather than visible simultaneously, because additional motor actions (mouse clicks or page turns) are required with the sequential presentations, thus increasing the effort level (motor or mental). Sequential presentations also add to the implicit costs because of the greater demands on memory and on controlled, deliberate decisions when accessing a target region that is out of view (Funahashi, 2014; Wang, Cohen, & Voss, 2015). The present study varied the way in which switches between views of a graph and its accompanying text were carried out. Two kinds of factors were investigated: motor effort (gaze shift vs. button press) and perceptual availability (simultaneous vs. sequential presentations). Motor effort was studied because (1) prior work has shown that motor actions (mouse clicks) are more effortful than gaze shifts during a visual search task (e.g., Kibbe & Kowler, 2011), and (2) many common situations involving graphs and text require motor actions beyond a simple and relatively automatic gaze shift (mouse clicks or page turns, for example) to switch between them. Perceptual availability was examined because any differences between switches mediated by a gaze shift vs. a button press could be due either to the increased effort in planning or carrying out the action, or to the additional load on working memory or controlled processing required to access material that is currently out of view. Perceptual availability was manipulated by comparing performance in a condition where the graph and text were presented simultaneously, side by side, with a condition, termed eye-contingent, in which the graph and text were presented in the same locations, but visible only when a saccade was made into the relevant region. The task required reading bar graphs that depicted the value of two fictitious products along two different attributes in order to decide which product was preferred. This task was chosen because it required extensive inspection and interpretation, while minimizing the variability in performance due to prior specialized knowledge on the part of the viewer. The overall spatial layout of the graphs (four bars arranged in two groups) was kept the same in all trials. Other aspects of the content, described in Methods, were varied across trials so as to motivate a detailed inspection and to avoid stereotypical strategies. Decisions about when or how often to view the graph or the text were made by the viewer and, given that the decision reported at the end of the trial represented a preference, there were no formally correct or incorrect answers. There were no explicit incentives to favor one region (graph or text) over the other, and viewers could have formed a preference solely by inspecting either one by itself. Strategic options could range from lengthy inspection of either the graph or the text with little or no switching, to a feature-by-feature inspection that entailed frequent switching.
This research design, which gives substantial control of strategy to the viewer, is similar to that used in prior work on the role of costs (e.g., Araujo et al., 2001; Ballard et al., 1995; Coëffé & O'Regan, 1987; Kibbe & Kowler, 2011; Kool et al., 2010; Moher & Song, 2014) and allows opportunity for any influences of either motor effort or perceptual availability to become apparent through examination of the frequency and timing of the switches between graph and text. The main goal of the study was to determine whether and how the perceptual motor conditions influenced the choice of strategy. A second goal was to use the observed pattern of switches under the three perceptual motor conditions to infer aspects of the strategies used to integrate information across the graph and text.

Eye movement recording

Movements of the right eye were recorded using the monocular EyeLink 1000 (SR Research, Osgoode, Canada), tower-mounted version, sampling at 1000 Hz. A chin rest was used to stabilize the head. Viewing was binocular.

Subjects

There were 22 student subjects from Rutgers University, 11 tested in Instruction 1 and 11 in Instruction 2 (see Procedure section for definitions of the instructions). All had normal vision and were naïve to the purpose of the experiment. An additional seven subjects were tested, but their data were not analyzed because: (1) at least 30% of the data were lost during the trials, mainly due to frequent blinks or periodic occlusion of the pupil by the eyelid (five subjects); or (2) use of the buttons to switch between the graph and the text in the Button Press condition did not begin until late in the experimental session (two subjects), suggesting that they may not have understood the instructions. Testing was in accordance with the Declaration of Helsinki and approved by the Rutgers University Institutional Review Board.

Stimulus display

Stimuli were displayed on a Dell U2413 LCD monitor (refresh rate 60 Hz; Dell, Round Rock, TX) viewed from a distance of 60 cm. Stimuli were displayed within a 1,280 × 1,024 pixel (28.2° × 22.5°) region of the screen. Stimuli were presented in a fully lighted room, allowing the boundaries of the display region to be seen at all times. Displays consisted of a bar graph and a paragraph of text. Examples are shown in Figures 1 and 2. Graphs contained four colored bars on a white background. The bars compared the values of two fictitious common household products along two different attributes, with the values of the two attributes shown on the left and right Y-axes, respectively. Lettering (legend and axis labels) was black. The stimulus configuration was varied in several ways to increase unpredictability and encourage a thorough inspection of the material. Bars were grouped in pairs, either according to the products or to the attributes. In the case of grouping by attribute (Figure 1), the labels on the X-axis under each pair indicated the name of the attribute, and the color of the bars indicated the product. In the case of grouping by product (Figure 2), the labels on the X-axis under each pair indicated the name of the product, and the color of the bars indicated the attribute. In addition, the relative merits of the two products on each attribute either were (Figure 2) or were not (Figure 1) in conflict, where a conflict meant that one item was superior to the other on only one of the two attributes. Twenty-four different bar graphs were generated, each with a different product pair.
Then, four versions of each of these 24 graphs were generated, according to whether (1) the values of the attributes were in conflict or not in conflict, and (2) the bars were grouped by product or by attribute.

Text

Each graph was accompanied by a paragraph of text, 264 to 387 characters in length (including spaces). Text was black 18-point monospaced Courier New font (width = 13.6 pixels/character, 6 characters/°) on a white background. The text was displayed within a 555 × 760 pixel region (12.3° × 16.8°). Each of the four versions of a graph (see Graph Stimuli) could be accompanied by one of two types of text, redundant (Figure 1) or nonredundant (Figure 2). Variation in the characteristics of the text, like the variations in the configuration of the graph, was implemented to discourage the use of the same stereotypical strategy of viewing the graph and the text on each trial. Redundant text restated the information depicted in the graph. Nonredundant text provided information that differed in some way from the information depicted in the graph. This information was either irrelevant to the relative merits of the items, or added details that favored one of the products, or stated that the information depicted in the graph was outdated or contained an error. Note that the study was focused on the inspection strategies, and not on understanding the preferences or whether the choice was "correct." In at least half of the cases (conflict between attributes), there was no obvious "correct" choice, and in the remaining cases viewers could evaluate the attributes in any way they chose.

Perceptual motor conditions

Three perceptual motor conditions were tested, shown in Figure 3:

1. Simultaneous: Graph and text were displayed side by side, separated by a blank region 16 pixels (0.35°) wide (see Figures 1 and 2). The graph was displayed either on the right or left side, randomly and independently chosen on each trial. Informal testing verified that the critical details of the text or graph (such as the words of the text, legends, or axis labels) could not be identified when the opposite region was fixated, and that saccades would be needed to inspect each region to discern the details.

2. Button Press: Graph and text were displayed sequentially, each located in the center of the screen. Subjects pressed a trigger button on a gamepad to display the graph or the text.

3. Eye-Contingent: Graph and text were displayed sequentially, as in the Button Press condition, and appeared either on the right or on the left side of the display, randomly chosen for each trial, in the same locations as in the Simultaneous condition. The appearance of the graph or text was triggered by online detection of the offset of a saccade into the right or the left side of the screen. The empty region contained no visual details except for the boundaries of the white display region itself. The delay between the offset of the triggering saccade and the onset of the display, averaged across 24 representative trials, was 68 ms (SD = 19) for all except the very first appearance of text, for which the mean delay was 121 ms (SD = 13).

Procedure

Each subject was tested in a single 24-trial experimental session. Before testing began, subjects were told they would be viewing a series of graphs accompanied by passages of text about two products. At the end of each trial they were asked to indicate which product they preferred. Two groups of subjects (11/group) were tested. The first group was told they should read the graph and text to determine their preference (Instruction 1).
The second group was given instructions that did not contain the word "read," to avoid implying that the text was more important than the graph. They were told only to indicate their preferred product based on the display (Instruction 2). Subjects were told they could end the trial by pressing a button on the gamepad when they were ready to make the decision. Before testing began, subjects were presented with three familiarization trials, one for each of the perceptual motor conditions, in order to illustrate the perceptual motor conditions and give the subjects ample opportunity to practice switching the display between graph and text using either a button press (Button Press condition) or a saccade (Eye-Contingent condition). A diagram indicating which button corresponded to the graph and which to the text was available throughout the experimental session. The calibration routine built into the EyeLink software was run before the start of each experimental session and again midway through. Before each trial, the number of the trial as well as a label indicating the perceptual motor condition was displayed. Subjects started the trial with a button press when ready. Then, as an additional check on the calibration, five crosses were presented for 5 s, one in the center and one in each corner of the display. Subjects were told to fixate the center cross, then look to each of the other four crosses in sequence, then back to the center cross. The crosses then disappeared, replaced by the critical display. In the Button Press condition, the button used to start the trial determined whether the graph or text appeared first. In the Eye-Contingent condition, the screen was blank until subjects fixated either side, at which point the graph or the text appeared. Thus, in the Eye-Contingent condition, subjects did not know which side corresponded to each until after the first saccade. Subjects were instructed to press a button to end the trial when they were ready to make the decision about their preferred item. The trial automatically ended after 2 minutes if the subjects did not choose to end it themselves (only three of the 528 total trials tested lasted the full 2 minutes). The fixation of the five crosses was then repeated. Then, the subject indicated by button presses: (1) which item they preferred, (2) how confident they were in their choice on a scale of 1 to 4 (with 1 being least confident and 4 being most confident), and (3) which of the two attributes was more influential in their choice. These questions were asked to motivate the inspection of the graph and text. Analysis of whether fixated locations predicted the decisions was outside the scope of the present study.

Design

The perceptual motor conditions were assigned to each trial randomly using an algorithm that employed the following constraints: (1) Each subject was tested on eight trials for each of the three perceptual motor conditions. The order of testing trials with the different perceptual motor conditions was random. (2) The pairing between a given graph and a given perceptual motor condition was different for each subject. The pairing was done so that across subjects each of the 24 graphs was paired with a given perceptual motor condition at least once for Instruction 1 and at least once for Instruction 2. (3) Text conditions (redundant vs. not redundant; see Text Stimuli) were tested in blocks of six trials each, with the first block chosen at random.
(4) Graph type (conflict vs. no conflict; grouping by product vs. grouping by attribute) and the side of the screen containing the graph were independently chosen at random on each trial.

Analysis

All analyses were carried out in Matlab (MathWorks, Natick, MA). The beginning and ending positions of saccades were detected offline by means of a custom-written algorithm employing a velocity criterion to find saccade onset and offset. The value of the velocity criterion (22°/s) was verified empirically for individual observers by examining a large sample of analog recordings of eye positions. Portions of data containing blinks or episodes where the tracker signal was lost were eliminated. Trials in which lock was lost more than half the time were eliminated. Of the 528 trials tested (176/perceptual motor condition), four were eliminated in the Simultaneous condition, 11 in the Button Press condition, and four in the Eye-Contingent condition. The location of each fixation in the display was determined from the average position of the line of sight at the offset of saccade n−1 and the onset of saccade n. Each fixation pause was classified as being in the graph, the text, or neither. Consecutive fixations of the same region were cumulated into visits. A visit was defined as a block of time inspecting either graph or text, where the block of time was composed of the accumulation of successive fixations in the same area, including the time during any gaze shifts, as well as any intervening blinks. Examination of individual trials showed that some contained brief (<1 s) visits at the beginning of the trial, followed by longer inspections (see Figure 6 for examples). Given that long visits occurred following these very brief initial visits, it was likely that the region viewed during these brief initial visits was not used for the purpose of gathering information. A single brief (<1 s) initial visit occurred in 45% of trials in the Simultaneous condition, 1% of trials in the Button Press condition, and 5% of trials in the Eye-Contingent condition. A total of 12% of trials in the Simultaneous condition, and <1% in the Eye-Contingent condition, contained more than one initial brief visit. To avoid incorporating such brief initial views, and thereby inflating the number of visits to graph or text in the Simultaneous condition, the analysis of results for a given trial did not include these initial brief visits. Fixation pauses and visits were further categorized as falling into one of seven areas of interest (AOIs) shown in Figure 4. These AOIs were selected because each contained a different type of information relevant to interpreting the graph. Boundaries of the AOIs were defined manually (see Figure 4). Fixations landing in the narrow region between the graph and text, above the graph (such as near the title), or below the graph were classified as "other."
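The saccade-detection and visit-grouping steps just described can be illustrated with a short sketch. It is written in Python rather than the authors' Matlab; the velocity threshold and sampling rate follow the values reported above, and everything else (edge pairing, region labels) is an illustrative assumption rather than the authors' exact algorithm.

```python
import numpy as np

FS = 1000          # sampling rate, Hz (EyeLink 1000)
V_THRESH = 22.0    # velocity criterion, deg/s

def find_saccades(x, y):
    """Return (onset, offset) sample indices of candidate saccades,
    defined as runs where eye velocity exceeds V_THRESH."""
    vx = np.gradient(x) * FS           # deg/s
    vy = np.gradient(y) * FS
    speed = np.hypot(vx, vy)
    fast = speed > V_THRESH
    edges = np.flatnonzero(np.diff(fast.astype(int)))
    if fast[0]:                        # saccade already in progress at start
        edges = np.r_[0, edges]
    if fast[-1]:                       # saccade still in progress at end
        edges = np.r_[edges, len(fast) - 1]
    return list(zip(edges[::2], edges[1::2]))

def group_visits(fixation_regions):
    """Cumulate consecutive fixations of the same region
    ('graph', 'text', 'other') into visits of (region, n_fixations)."""
    visits = []
    for region in fixation_regions:
        if visits and visits[-1][0] == region:
            visits[-1][1] += 1
        else:
            visits.append([region, 1])
    return visits

print(group_visits(["text", "text", "graph", "graph", "graph", "text"]))
# [['text', 2], ['graph', 3], ['text', 1]]
```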
Results

Effect of perceptual-motor condition on visits to graph and text

Performance was compared across three perceptual-motor conditions: (1) Simultaneous: Graph and text were displayed side by side, requiring a saccadic eye movement to switch between them. (2) Button Press: Graph and text were displayed sequentially at screen center, requiring the press of a button to switch between them. This condition required more motor effort (a button press instead of a saccade) and reduced the perceptual availability of each portion of the display, since only one portion, graph or text, was presented at a time. (3) Eye-Contingent: Graph and text appeared sequentially in the same locations as in the Simultaneous condition, with the appearance of each triggered by a saccade into the corresponding region on the screen. The Eye-Contingent condition thus reduced the perceptual availability of the graph and the text due to the sequential aspect of the presentation, as in the Button Press condition, without adding motor effort beyond the saccade, as in the Simultaneous condition.

The main result was that the pattern of visits to the graph and text depended on the perceptual motor condition. There were about twice as many visits/trial to graph and text in the Simultaneous condition as in either the Button Press or the Eye-Contingent conditions. This result can be seen in Figure 5 (left), which compares the mean number of visits/trial to the graph and to the text (means of subject means) for the three perceptual motor conditions for both instruction types (see Analysis section for the definition of a visit and the rule for designating the first visit of a trial). Analysis of variance (3 perceptual-motor conditions × 2 instruction types) showed a significant effect of perceptual-motor condition, F(2, 40) = 24.01, p = 10⁻⁷, and no significant effect of instruction type (Procedure section), F(1, 20) = 0.52, p = 0.48. The pattern of results was the same when results were first averaged within each of the 24 different graphs (Graph Stimuli section) instead of within individual subjects (Supplementary Figure S1). Figure 5 and Supplementary Figure S1 also show that the average number of visits/trial in the Eye-Contingent condition was slightly, but reliably, larger than in the Button Press condition (paired t test, Eye-Contingent vs. Button Press: t(21) = 3.4, p = 0.0029). The average number of visits/trial in the Simultaneous condition was about twice that of the sequential conditions. This result suggests that while motor effort (gaze shift vs. button press) was influential, the simultaneous versus sequential availability of the graph and the text played a larger role in determining the occurrence of switches. Average trial duration did not differ across the three perceptual motor conditions (Figure 5, Supplementary Figure S1). Analysis of variance (3 perceptual motor conditions × 2 instruction types) showed no significant effect of perceptual motor condition, F(2, 40) = 1.06, p = 0.36, nor of instruction type, F(1, 20) = 0.65, p = 0.43, on trial duration. Figure 5 and Supplementary Figure S1 also show that time was apportioned about equally between graph and text for each of the three perceptual motor conditions. Finally, the confidence ratings attached to the choices of the preferred product, a measure that might have been sensitive to the value of the information acquired, were almost the same for the three perceptual motor conditions (mean confidence rating for the Simultaneous condition was 3.39 (SD 0.49), Button Press 3.42 (SD 0.38), and Eye-Contingent 3.47 (SD 0.40)). The effect of perceptual motor condition on performance can be summarized by the mean rate of visits (number of visits per trial/trial duration), which was about two times greater in the Simultaneous condition than in the other two perceptual motor conditions (Figure 5, Supplementary Figure S1).
Effects of perceptual-motor condition were once again significant, F(2, 40) = 62.56, p = 10^-13, and effects of instruction type were not, F(1, 20) = 1.33, p = 0.26. In summary, the perceptual-motor conditions affected how time was apportioned between the graph and the text, with the Simultaneous condition characterized by a higher rate of switching between the graph and the text than either the Button Press or the Eye-Contingent conditions. The perceptual-motor condition did not affect the total amount of time devoted to inspecting graph and text, only how the time was apportioned into separate visits. The sequential availability of graph and text in both the Button Press and Eye-Contingent conditions played a greater role than motor effort in discouraging switching between the regions. Timelines Timelines were constructed to visualize the sequence of visits to graph and text for each perceptual-motor condition. The timelines show the sequence and duration of the visits to the graph and the text for each trial, ordered from shortest to longest, for Instruction 1 (Figure 6) and Instruction 2 (Figure 7). As was the case in Figure 5, a "visit" was composed of the accumulation of successive fixations in the same area, graph or text, including the gaze shift time, as well as any intervening blinks. Any visits to locations other than the graph or the text, including fixations in the blank region between the graph and the text, are shown in red. Blank regions of the timelines indicate that the visit prior to the blank contained a period of tracker lock lost greater than 2 s. All visits, including initial visits <1 s (see Analysis section), are shown. Inspection of the timelines suggests two trends, which will be examined in more detail in the Visit durations section. First, most trials started with a visit of several seconds to either the graph or the text, after which gaze switched over for a visit of several seconds to the other region. Second, these initial long visits were often followed by shorter-duration visits to the graph or the text. These shorter visits occurred more frequently in the Simultaneous condition. Figures 8 and 9 show distributions of the durations of the first three visits to graph or text for trials in which the first visit was to the graph (Figure 8) or to the text (Figure 9). As noted earlier (Analysis section), analyses began with the first visit to graph or text that was longer than 1 s. Visit durations The first visit to either the graph or the text typically lasted several seconds, regardless of which region was visited first. These durations were long enough to allow extensive examination of the graph or the text. These findings suggest that the preferred strategy during the initial visit was to attempt to extract meaning from large portions of the text or from the entire graph, rather than a strategy of inspecting the graph and text jointly, feature by feature. There was also a suggestion of a small effect of order in that the initial visits to the text were shorter when the graph was visited first (Figure 8) than when the text was visited first (Figure 9) in all three perceptual-motor conditions, suggesting that reading the graph may have either helped or supplanted in part the reading of the text. Figures 8 and 9 show how the perceptual-motor condition affected the duration of the visits. In all cases the average durations of the visits were shortest in the Simultaneous condition.
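For concreteness, the visit definition used throughout these duration analyses (successive fixations in the same region merged into one visit, with brief initial visits of less than 1 s set aside; see the Analysis section) can be sketched in a few lines of Python; the fixation record format and the function names below are illustrative assumptions, not the original Matlab code.

```python
# Minimal sketch of the visit-cumulation rule: consecutive fixations falling
# in the same region ("graph", "text", or "other") are merged into one visit,
# and any brief (<1 s) leading visits to graph or text are dropped first.
# The fixation tuple format (region, start_s, end_s) is an assumption.

def fixations_to_visits(fixations):
    """Merge consecutive same-region fixations into visits.

    fixations: list of (region, start_s, end_s), in temporal order.
    The gap between successive same-region fixations (saccades, blinks)
    is absorbed into the visit.
    """
    visits = []
    for region, start, end in fixations:
        if visits and visits[-1]["region"] == region:
            visits[-1]["end"] = end          # extend the current visit
        else:
            visits.append({"region": region, "start": start, "end": end})
    for v in visits:
        v["duration"] = v["end"] - v["start"]
    return visits

def drop_brief_initial_visits(visits, min_duration=1.0):
    """Skip leading graph/text visits shorter than min_duration seconds."""
    i = 0
    while (i < len(visits)
           and visits[i]["region"] in ("graph", "text")
           and visits[i]["duration"] < min_duration):
        i += 1
    return visits[i:]

# Example with made-up fixations from one trial:
fix = [("graph", 0.0, 0.4), ("text", 0.7, 3.2), ("text", 3.3, 5.1),
       ("graph", 5.4, 9.0)]
visits = drop_brief_initial_visits(fixations_to_visits(fix))
n_visits = sum(v["region"] in ("graph", "text") for v in visits)
```

Counting the remaining graph and text visits per trial then yields the visits/trial and visit-rate measures reported above.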
When the graph was visited first (Figure 8), there was a significant effect of perceptual-motor condition for each of the first three visits, shown in each row of the figure, Visit 1: F(2) = 15.78, p = 10^-7; Visit 2: F(2) = 4.84, p = 0.0089; Visit 3: F(2) = 6.47, p = 0.020. When the text was visited first (Figure 9), visits were also shortest in the Simultaneous condition; however, the effect of perceptual-motor condition was significant only for the second visit, Visit 1: F(2) = 2.25, p = 0.107; Visit 2: F(2) = 8.659, p = 0.00023; Visit 3: F(2) = 5.17, p = 0.067. The distributions also show the trend that was apparent in the timelines (Figures 6 and 7), namely, that after the first visit to the graph (Figure 8) or to the text (Figure 9), the subsequent visits tended to be shorter. Which region was visited first? In the Simultaneous condition, the text was usually visited first (Table 1). In the other two conditions, preferences to visit graph or text first were about equal. Thus, the simultaneous availability of both regions disclosed a preference for the text. The tendency to visit the text first in the Simultaneous condition was significant only for subjects in Instruction 1 (t(10) = 7.87, p = 10^-5; Instruction 2: t(10) = 1.71, p = 0.12), where the word "read" was included in the instructions. In the Eye-Contingent condition the side of the display containing the graph or the text was not known until it was fixated. Thus, the behavior in the Eye-Contingent condition allowed an examination of the extent to which an extra saccade to view a preferred region would be made if the initial fixation landed on a nonpreferred region. To address this question, the proportion of trials in which viewers stayed with or switched from the region fixated first was examined for the Eye-Contingent condition. This analysis included any initial brief (<1 s) glances to either graph or text (Analysis section). Subjects rarely decided to switch from the region initially fixated in the Eye-Contingent condition. Switches occurred on 9% of the trials in which the graph was fixated first, and 1% of the trials in which the text was fixated first. This willingness to leave the choice of first location to chance in the Eye-Contingent condition is another indication of the influence of perceptual availability on the viewing strategy. Strategy of comparing graph and text What was the underlying strategy behind the transitions between graph and text? One possibility is that the strategy was dominated by a partitioning of the material, for example, reading only part of the graph or text in the initial visit, and then switching to the other region. Alternatively, it is possible that the initial visit was devoted to a more exhaustive inspection and follow-up visits were devoted to re-examining previously seen regions of either the graph or the text. To distinguish these possibilities, the locations of visits within the graph and within the text were analyzed. A "re-examination" within the graph was defined as a revisit of one of the six AOIs (see Figure 4 for the AOIs). A re-examination within the text was defined as rereading of all or part of a line of text. Figure 10 shows the average time spent viewing new areas (blue) and re-examining old areas (red) of the graph (top) or the text (bottom). Results are shown separately for each of the first three visits, and pooled over the entire trial. Each section of a bar represents the mean over subjects of the average time spent per trial viewing graph or text. The number of subjects on which each mean is based is shown above each bar.
(Note that these numbers differ because some subjects had no trials with more than one visit to graph or text, typically in the Button Press or Eye-Contingent conditions.) Trials with more than three visits are not shown because there were not enough such trials in the Button Press or Eye-Contingent conditions to allow meaningful comparisons across the perceptual-motor conditions. There were two trends that were found in all three perceptual-motor conditions. First, a substantial portion of the time was spent re-examining previously seen material. The total time spent re-examining portions of the graph or the text, cumulated over all visits (bars labeled "All" in Figure 10), was about the same for all perceptual-motor conditions (means were 80% for Simultaneous, 77% for Button Press, and 78% for Eye-Contingent conditions). Analysis of variance showed no significant differences among the three perceptual-motor conditions for the percent of time spent re-examining the graphs, F(2, 40) = 0.191, p = 0.83, or the text, F(2, 40) = 0.1713, p = 0.84. The second major trend evident in Figure 10, again observed for each of the three perceptual-motor conditions, was that the examination of new material occurred almost exclusively during the initial visit to graph or to text. New material was almost never examined after the initial visit. The time devoted to new text during the initial visit was about the same across all three perceptual-motor conditions, F(2, 40) = 1.51, p = 0.234. The time devoted to new portions of the graph differed slightly, but significantly, across the three perceptual-motor conditions, F(2, 40) = 13.86, p = 10^-5, with the time shorter in the Simultaneous condition than in the other two. The preference to examine new material during the initial visit, rather than during subsequent visits, shows that switches between graph and text, even in the Simultaneous condition, were avoided until completion of at least an initial inspection of one of the regions. Differences among the perceptual-motor conditions were most apparent in the portions of the first visits devoted to re-examining material. The time devoted to re-examination differed across conditions both for the first visits to the text, F(2, 40) = 6.087, p = 0.0049, and the first visits to the graph, F(2, 40) = 13.95, p = 10^-5. As can be seen in Figure 10, there was less time devoted to re-examinations in the Simultaneous condition than in either the Button Press or the Eye-Contingent conditions. This result, along with the fact that the total time spent re-examining old material was the same across conditions (see bars labeled "All" in Figure 10), shows that the perceptual-motor conditions affected the strategy of re-examination. Specifically, the same total time devoted to re-examination was spread across more frequent and shorter visits in the Simultaneous condition, and compacted into less frequent and longer visits in the Eye-Contingent and Button Press conditions. Figure 10. Mean time (± SE) spent visiting new AOIs of the graph or new lines of text (blue) and re-examining previously visited AOIs or lines of text (red) for the first three visits and over the whole trial (All), for visits to the graph (top) and text (bottom). Results are shown for the Simultaneous, Button Press, and Eye-Contingent conditions. Numbers above each bar represent the number of subjects that contributed to the means. A subject was not included if they did not have any second or third visits to graph or text, respectively, in that condition. Strategies of viewing the graph To determine whether the perceptual-motor condition had any major influences on the viewing of the graph, matrices showing the number of transitions between Areas of Interest (AOIs) (Figure 4) were constructed for all three perceptual-motor conditions (Figure 11). The most frequent transitions in all three conditions occurred between the labels of the axes and the bars, and between the bars and the legend, reflecting a strategy of frequent re-fixations of the referents (Carpenter & Shah, 1998). Discussion One approach to gathering information from a graph and its accompanying text would be to sample information solely on the basis of its value for interpreting the depicted material, without taking into account the characteristics of the actions needed to gain access to the material. In cases where the required actions are fast, simple, or relatively automatic, as may be the case for most shifts of gaze across a visual display, the decision to base strategies solely on the information content represents a rational choice, one supported by research showing the role of the value of the information gained in determining the locus of gaze when searching visual displays (Eckstein, 2011; Najemnik & Geisler, 2005; Renninger et al., 2007; Semizer & Michel, 2017). The present study showed that strategies for gathering information from a graph and its accompanying text were not based solely on the information content, but were instead influenced by characteristics of the actions needed to gain access to the material. Switches between the graph and the text were less frequent when carried out by a button press rather than by a gaze shift. Switches were also less frequent when the graph and text were presented sequentially, rather than simultaneously, with the contents of each region revealed only after gaze landed within the region (termed the Eye-Contingent condition). In addition, switches were found mainly during the portion (~80%) of the trial devoted to re-reading material. Switches, even in the case of simultaneous availability, were rare during the initial readings of graph or text. The perceptual-motor conditions did not affect the total amount of time taken to complete the task, the overall proportion of time devoted to the graph and the text, or the confidence of the judgments. Thus, the perceptual-motor condition did not affect the choices about how much information to sample from the graph or the text. Rather, the perceptual-motor condition affected choices about when to take these samples. The finding that motor effort and perceptual availability affected the rate of switching between the graph and the text is consistent with prior studies using simpler visual tasks that showed how adding to the implicit costs of planning or carrying out the required actions affected the strategies used to gather visual information (Araujo et al., 2001; Ballard et al., 1995; Hooge & Erkelens, 1998, 1999; Kibbe & Kowler, 2011). The present findings extend these results to a task that requires considerable thinking and interpretation. Thus, the greater cognitive load attached to reading a graph and text, by itself, did not preclude a role for motor effort or perceptual availability. The greater cognitive load attached to interpreting the graph and text may have contributed to any inherent aversion to effortful switches because of a greater level of competition for access to limited processing resources (Shenhav et al., 2017).
Switches during the initial view and during reexamination Carpenter and Shah (1998), in their study of eye movements during reading of graphs, distinguished two main phases of reading graphs: an initial inspection to obtain an overview of the graph, followed by repetitive scanning of key features (especially the referents) to develop an interpretation of the depicted material. We found that the rate of switches was different for these two phases. The first phase, obtaining the initial overview of either the graph or the text, contained almost no switches, even in the Simultaneous condition. Analysis of the durations of visits to new material and re-examination of old material ( Figure 10) showed that virtually all the viewing of new portions of the graph or text occurred uninterrupted within the first visit to each region. This finding is consistent with the findings in studies of perceptual switch costs, which reported increases in reaction time when the relevant perceptual features of the visual discrimination task changed from one block of trials to the next (Monsell, 2003). These increases in reaction time were attributed to the need for extra processing steps to change the mental set or task set to different features or attributes. Switches between graph and text may be accompanied by comparable difficulties due to changes in task set, leading to preferences to avoid switches (similar to Kool et al., 2010), at least during the initial readings of each region. Switches between graph and text began to occur during the re-examination of previously viewed material. The re-examination ( Figure 11) consisted of repeated fixations of details of the graph (Carpenter & Shah, 1998), as well as re-reading of portions of the text (Schnitzer & Kowler, 2006). Switches may have become more frequent during re-examination because reexamination may demand different types of mental resources than the initial inspection. For example, during re-examination strategies may be driven by the testing of hypotheses about content, rather than by an attempt to develop an initial understanding of what was depicted in the graph. Role of perceptual availability The sequential presentation of graph and text in both the Button Press and Eye-Contingent conditions led to a lower rate of switches between the regions than was found for the simultaneous presentations. We can view these findings as showing that simultaneous availability promotes switching, or, equivalently, that sequential availability discourages switching. We consider each viewpoint as follows. In the Simultaneous condition, the continuous presence of each region in the visual field may have encouraged or facilitated switching in a relatively automatic manner. For example, the visible eccentric region could have served as a constant cue or reminder that the region was available and might contain some potentially useful information. This view is similar to that proposed in Wang and Pomplun (2012) and Ross and Kowler (2012), who found that gaze was directed to displayed text even when viewing the text was not necessary for completing the task. In the case of the graph and the text in the current study, the eccentric region may receive some level of spatial attention, given its task relevance, which could increase the probability of a saccade (Gersch, Kowler, Schnitzer, & Dosher, 2009;Zhao, Gersch, Schnitzer, Dosher, & Kowler, 2012). 
At a neural level, continuous activation of a visible region in neural areas involved in spatial attention and saccades, such as FEF, LIP, or SC (Awh, Armstrong, & Moore, 2006;Basso & May, 2017;Gottlieb, 2007;Schall, 2004) could contribute to eliciting spontaneous shifts of attention or spontaneous shifts of gaze to the opposite region with minimal reliance on overt or effortful decisions. The relatively effortless shifts of attention or shifts of gaze to visible regions could also be encouraged by mechanisms that increase the probability of saccades to visible regions associated with higher levels of explicit rewards or with information that reduces uncertainty (Gottlieb, Hayhoe, Hikosaka, & Rangel, 2014). On the other side of the coin, the absence of an immediate visual representation of the graph or text during either of the sequential conditions, Button Press or Eye-Contingent, would mean that the conditions that facilitated effortless or automatic shifts of gaze were absent. Thus, additional processing steps would likely be involved, such as the retrieval of information about the type of content (text or graph) from memory, an expectation or prediction about what might be encountered following the switch, or an overt, controlled decision to make the switch. These types of operations fall under the broad category of executive functions that require management and monitoring by areas such as prefrontal cortex (PFC), anterior cingulate cortex (ACC), or hippocampus (Buschman & Miller, 2014;Funahashi, 2014;Miller & Cohen, 2001;Shenhav, Botvinick, & Cohen, 2013;Voss, Bridge, Cohen, & Walker, 2017). Wang et al. (2015), for example, proposed that links between PFC and hippocampus are involved in managing actions to access material not currently in view. The management of the additional processing steps required to direct an action to a region that is not immediately in view may have been sufficiently demanding of resources to discourage frequent switches between graph and text. Summary, conclusions, and implications Reading a graph and text to arrive at a coherent interpretation of the content is a resource-intensive task that is encountered in many real-world settings. The present study investigated the roles of motor effort and perceptual availability in the integration of information between a graph and accompanying text. We found that both motor effort and perceptual availability influenced the rate of switches between the graph and the text. Thus, strategies of gathering visual information in a task with a strong requirement for thinking and interpretation were not based solely on the information content. The results suggest that factors connected to planning or executing the switches, such as effort level, dependence on working memory, or reliance on controlled rather than automatic processing, were taken into account when developing strategies of gathering information needed to interpret the graph and text. These results raise a very basic question. People often plan actions to regions that are not perceptually available. We turn pages of books, click mice, or press keys to access new web pages, or we turn our heads to view what is behind us. In everyday life, many of these behaviors have to be carried out despite any extra processing steps that proved to be an obstacle in the present task. What factors increase our willingness to engage in effortful actions? Two types of factors are likely to be relevant. The first is the importance of the visual information. 
For example, turning the head to glance at a side-view mirror while changing driving lanes is clearly a case in which the information is vital. Another example would be reading graphs when the necessary referents, such as axis labels or legends, which are fixated frequently (Figure 11; also, Carpenter & Shah, 1998), are kept out of view unless revealed by a motor action. The importance of such task-critical regions might induce viewers to undertake the effortful actions to gain access to the material. Such decisions may be part of a mental cost-benefit analysis (Kool et al., 2010;Leider & Griffiths, 2017) that people may carry out continually, even on short time scales, during performance of visuomotor tasks. Whether the increased importance or the value of the information affects the decision criteria for trading-off costs and benefits, or changes the perceived or actual level of mental effort, is an interesting question for future research. The second factor that would promote a greater tolerance for effortful actions is the development of well-learned visuomotor routines (e.g., Land & Hayhoe, 2001;Sailer, Flanagan, & Johansson, 2005). When applied to cases in which actions must be performed to reveal regions that are currently out of view, the effort level accompanying the actions may be reduced due to the use of encapsulated or habitual patterns of movements. Finding that even actions as innocuous as mouseclicks or gaze shifts can affect the strategies of viewing graph or text suggests that people are continually evaluating the resource load when determining whether or when to seek out new visual information. These analyses may influence performance of a host of visuomotor tasks and be a major factor in determining natural behaviors. Keywords: saccadic eye movements, reading graphs, planning, visual search, switch costs, effort, executive function, reading, eye movement planning, cost-benefit analysis
2019-01-22T22:23:25.842Z
2018-12-03T00:00:00.000
{ "year": 2018, "sha1": "e2a19f843c5abf2da917aed7729017ad1b0ed204", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1167/18.13.16", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e2a19f843c5abf2da917aed7729017ad1b0ed204", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
14236475
pes2o/s2orc
v3-fos-license
A Strange Case of Left Bowel Ischemia after Right Hernioplasty We report the first observed case of a young man who suffered large and unsuspected left bowel ischemia following an elective right open hernioplasty. A 54-year-old man had a 2-year history of a reducible right inguinal mass and was admitted to hospital for an elective day-case open inguinal hernioplasty for a direct right inguinal hernia. Apart from mild hypertension controlled with an ACE inhibitor, he was medically fit and well. The patient underwent open tension-free mesh repair with a preshaped polypropylene mesh under local infiltration anesthesia with additional sedation with midazolam. The local anesthesia and surgery were uneventful and he was discharged home on the same day as per day-case protocol. He was readmitted about 12 h after discharge with a history of central and left lower abdominal pain with a palpable mass, distension, and fever (38°C). After imaging and laboratory studies the patient underwent explorative surgery with the suspicion of left colonic ischemia. After intraoperative confirmation we performed a standard left hemicolectomy. The postoperative course was uneventful; the patient was discharged in good general condition on the 7th postoperative day. At present, the patient is in follow-up, with normal coagulation and hemochromocytometric parameters, asymptomatic for hypercholesterolemia and atrial flutter/fibrillation. Complications relating to the bowel during open techniques of hernia repair are limited to two situations: the freeing of an incarcerated or strangulated segment of bowel, and inadvertent laceration of large bowel in the presence of a sliding hernia. Following this strange case of colonic ischemia, a Boolean Medline search (terms: hernia, complication, repair, groin, herniorrhaphy, hernioplasty, all major MeSH subjects, without language restriction) revealed no previous similar cases reported. However, to our knowledge, there is another troubling hypothesis: not causality but casualty (mere coincidence). In conclusion, to our knowledge this is the first reported case of large left bowel ischemia following right open hernioplasty. We can conclude that the presence of a dolichocolon is an added risk factor for this rare and unusual complication, but further investigations and case reports are necessary to establish the real causality. Introduction We report the case of a young man who suffered large left bowel ischemia following an elective right open hernioplasty. After a careful review of the literature, we can conclude that this is the first such case to be described. Case Report A 54-year-old man had a 2-year history of a 3 cm reducible right inguinal mass and was admitted to hospital for an elective day-case open inguinal hernioplasty for a symptomatic direct right inguinal hernia, type IIIA according to the classification of Nyhus [1]. He was an ex-smoker who used to smoke 15 cigarettes a day but had quit smoking 24 months earlier. His ASA grade was II. Apart from mild hypertension controlled with an ACE inhibitor, he was medically fit and well. The patient underwent open tension-free mesh repair with a preshaped polypropylene mesh (Surgipro™ Mesh, Auto Suture™, UK; Type 1 according to the classification of Amid) [2] under local infiltration anesthesia with additional sedation with midazolam (0.05-0.1 mg/kg i.v., in 20 ml of isotonic NaCl 0.9% solution). The local anesthesia and surgery were uneventful and he was discharged home on the same day as per day-case protocol.
He was readmitted about 12 h after discharge with a history of central and left lower abdominal pain with palpable mass, and distension and fever (38°C). Digital exploration of the rectal ampulla revealed normal stools. After 2 h, though his vital signs were normal, the abdominal pain persisted with cutaneous hyperesthesia in the left lower abdomen. His hematology and biochemistry results revealed leukocytosis (20,470 WBC, NV 4,000-10,000) and anemia (2,100,000 RBC, NV 4,000,000-6,000,000). His chest radiograph was normal, and a supine abdominal film showed a gas-filled transverse colon without any unusual features (however, a negative plain film should not exclude ischemia of the colon). Abdominal CT revealed dolichosigmoid with thickened wall, mesosigmoid hyperdensity with, in its context, 400 ml of mixed blood and clot collection and perisplenic effusion ( fig. 1). There was neither free air in peritoneal cavity nor air-fluid levels. As preoperative care, the patients was stabilized using i.v. fluids, antibiotics covering the colonic flora, nasogastric tube decompression, and bladder catheterization. Blood was available. With the strong suspicion of sigmoid ischemic infarction, the patient was submitted to explorative laparotomy. We confirmed the presence of massive mesosigmoid agglomerate of clot with signs of ischemic suffering of the serosal layer and, after rinsing of the peritoneal cavity, we performed left hemicolectomy and mechanical end-to-end colorectal anastomosis. We did not observe signs of bleeding or perforation in the peritoneal layer of right hernioplasty ( fig. 2, fig. 3). Postoperative hematology and biochemistry results were normal and the postoperative course was uneventful; the patient was discharged in good general condition on the 7th postoperative day. At present the patient is in cardiovascular and surgical follow-up, with normal coagulation and hemochromocytometric pattern, asymptomatic for hypercholesterolemia and atrial flutter/fibrillation. Discussion Much has been written on the complications of herniorrhaphy and hernioplasty and the modern-day hernia surgeon, whether a general surgeon or a hernia zealot, will have developed his or her own approach to preoperative consent, the type of procedure and anesthesia undertaken, and subsequent follow-up [3,4]. Actually, the complications of open groin hernioplasty are well defined (table 1). Local anesthesia allows surgery in unfit patients, rapid mobilization and ambulatory surgery and does not require expensive preoperative laboratory work-up, but requires a gentler touch, is limited by toxic doses of local agents and may require a second operator to administer sedation before and after analgesia [4]. Complications relating to bowel during open techniques of hernia repair are limited to two situations: (1) the freeing of an incarcerated or strangulated segment of bowel and (2) inadvertent laceration of large bowel in the presence of a sliding hernia. The site of strangulation is usually the superficial inguinal ring. In these cases, mortality has ranged from 6-23% [3]. The publication by Ryan [5] has conclusively supported the new attitude that opening of a sac is not necessary, that the high ligation of a sac is unnecessary, and that the countless, complicated maneuvers to reperitonealize bowel and abdominal cavity are confusing, possibly dangerous, and should be discarded in favor of simple reduction of the hernia sac and contents into the preperitoneal space. 
Insufficient blood perfusion to the colon may result from arterial occlusion by embolus or thrombosis (AMAE or AMAT), thrombosis of the venous system (MVT), or nonocclusive processes. Embolic phenomena account for approximately 50% of all cases, arterial thrombosis for about 25%, NOMI for roughly 20%, and MVT for less than 10%. Hemorrhagic infarction is the common pathologic pathway whether the occlusion is arterial or venous [6]. Injury severity is inversely proportional to the mesenteric blood flow and is influenced by the number of vessels involved, systemic mean pressure, duration of ischemia, and collateral circulation. The superior mesenteric vessels are involved more frequently than the inferior mesenteric vessels, with blockage of the latter often being silent because of better collateral circulation [7]. The occlusive syndrome of the inferior mesenteric artery (IMA) is uncommon and generally due to thrombosis from atherosclerotic disease. Formation of collateral vessels, from the colic branches of the SMA and from the hemorrhoidal arteries, branches of the hypogastric arteries, are often not effective to prevent an acute colorectal ischemia due to occlusion of the IMA at the origin [8]. Damage to the affected bowel portion may range from reversible ischemia to transmural infarction with necrosis and perforation. The injury is complicated by reactive vasospasm in the SMA region after the initial occlusion. Arterial insufficiency causes tissue hypoxia, leading to initial bowel wall spasm. This leads to gut emptying by vomiting or diarrhea. Mucosal sloughing may cause bleeding into the gastrointestinal tract. At this stage, little abdominal tenderness is usually present, producing the classic intense visceral pain disproportionate to physical examination findings [9]. In our opinion, on the basis of technique of hernioplasty and CT, there are two etiopathogenetical hypotheses: first, a volvulus with fulcrum on mesosigmoid, caused by manipulation of the right peritoneal cavity; second, a systemic effect of first puncture of local anesthesia (plexus blocking too deep). Also low-flow states cause peripheral vasodilation and shunting of the blood from the gut to the periphery; cases of patients without atherosclerotic disease of their mesenteric arteries having ischemia have been reported, but this is rare and occurs in patients with profound hypovolemic shock. Furthermore, digitalis has been found to cause vasoconstriction of both arterial and venous smooth muscle cells in the mesenteric vasculature: of patients with acute mesenteric ischemia, 20-30% have nonocclusive disease [10]. Other causes of nonocclusive mesenteric ischemia are use of vasopressive drugs, ergotamine, cocaine and digitalis, but our patient had neither use of these drugs nor referred postoperative hypovolemia [11]. Early diagnosis is important to improve survival rates, since diagnosis may be overlooked because of the vague nature of the patients' symptoms; in most cases of late or missed diagnosis, mortality from intestinal infarction is very high, from 60 to 90% [8]. Actually, in our opinion, the mechanism by which this patient experienced intestinal ischemia is poorly understood, but we are considering the opportunity to modify our informed consenst schedule after this event. 
Following this strange case of colonic ischemia, a Boolean Medline search (terms: hernia, complication, repair, groin, herniorrhaphy, hernioplasty, all major MeSH subjects, without language restriction) revealed no previous similar cases reported. However, to our knowledge, there is another troubling hypothesis: not causality but casualty (mere coincidence). Many other reports are necessary in the future to establish the real cause of this phenomenon. Conclusions To our knowledge this is the first reported case of large left bowel ischemia following right open hernioplasty. We can conclude that the presence of a dolichocolon is an added risk factor for this rare and unusual complication, but further investigations and case reports are necessary to establish the real causality. This event will give us the opportunity to modify our informed consent schedule.
2014-10-01T00:00:00.000Z
2010-02-03T00:00:00.000
{ "year": 2010, "sha1": "885f8fe9d78ad980c56c2181dfde52bff0d263dd", "oa_license": "CCBYNCND", "oa_url": "https://www.karger.com/Article/Pdf/260072", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "885f8fe9d78ad980c56c2181dfde52bff0d263dd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
258276770
pes2o/s2orc
v3-fos-license
Influence of lateral single jets for thermal protection of reentry nose cone with multi-row disk spike at hypersonic flow: computational study The main challenge for the advancement of current high-speed vehicles is aerodynamic heating. In this study, the application of a lateral jet for thermal protection of high-speed vehicles is extensively studied. The simulation of the lateral coolant jet is done via computational fluid dynamics (CFD) at high-velocity conditions. Finding the optimum jet configuration for reduction of the aerodynamic heating is the main goal of this research. Two different coolant gases (helium and carbon dioxide) are investigated as the coolant jet, and the flow study and coolant penetration mechanism are fully presented. In addition, the thermal load on the main body of the nose cone is compared for the different configurations. Our results indicate that the injection of a lateral jet near the tip of the spike is effective for thermal protection of the main body via deflection of the bow shock. Also, the carbon dioxide jet, with its lower diffusivity, is more effective for the protection of the forebody with multi-row disk from severe aerodynamic heating. Hybrid techniques have recently been investigated as a new approach for drag and heat reduction on nose cones flying at hypersonic speed 62,63. In this methodology, the spike is combined with fluidic or energy methods to improve the performance of the classical mechanical technique [64][65][66][67]. Although this approach seems very efficient, it is not yet considered a practical method. In fact, the usage of fluidic or energy devices for thermal load reduction has been demonstrated only in the laboratory, and no real practical application of this method has been reported. Since this hybrid method is new, only limited resources and articles are available on this topic. In this research, the usage of a lateral jet for the cooling of a nose cone with multi-row disk (MRD) at high-speed flight is fully investigated (Fig. 1). The influence of the jet location and condition on the cooling of the nose cone is investigated by the computational method. The highly compressible flow around the MRD blunt body is simulated, and comprehensive flow analyses are presented to find the effective terms for thermal load management of the nose cone. The influence of the coolant gas type is investigated by comparing carbon dioxide and helium jets in this investigation. Governing equation and numerical method This study applies the RANS equations for modeling of the compressible flow near the nose cone with the MRD device 68. The SST turbulence model is applied in the simulation of the highly turbulent flow around the nose cone 69. The flow is assumed to be an ideal gas, and the species transport equation is also applied since the secondary gases of helium and CO2 are used for the cooling in this hybrid technique. Computational fluid dynamics is applied for the simulation of the flow around the nose while the coolant gas is released. This technique is popular for the simulation of fluids in engineering problems 70,71. The details of the main governing equations have been extensively presented and explained in previous articles, and readers are referred to these resources 72,73. The applied boundary conditions for the selected model are demonstrated in Fig. 2. The inflow is a pressure far-field with M = 5.0, Pinf = 2550 and Tinf = 221 K. Helium and carbon dioxide are chosen as coolant jets with sonic condition at Ts = 300 K. The pressure outlet is extrapolated from the results of the inside domain.
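As a rough, illustrative check on why thermal protection is needed at the stated freestream condition (M = 5.0, Tinf = 221 K), the short Python snippet below evaluates the standard adiabatic stagnation-temperature relation for a calorically perfect gas; the value of gamma = 1.4 is an assumption, and the calculation is not part of the paper's CFD setup.

```python
# Back-of-the-envelope estimate (not from the paper's solver):
# adiabatic stagnation temperature T0 = T_inf * (1 + (gamma - 1)/2 * M^2)
# for a calorically perfect gas at the stated freestream condition.
gamma = 1.4      # assumed specific-heat ratio for air
M_inf = 5.0      # freestream Mach number given in the text
T_inf = 221.0    # freestream static temperature [K] given in the text

T0 = T_inf * (1.0 + 0.5 * (gamma - 1.0) * M_inf ** 2)
print(f"Adiabatic stagnation temperature: {T0:.0f} K")  # roughly 1300 K
```

Even this simplified estimate places the gas temperature near the stagnation region well above the 300 K coolant and surface temperatures used in the model, which motivates introducing the lateral coolant jets.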
The spike and main body are modeled as walls with a constant temperature of 300 K. The length of the spike is equal to the diameter of the main body 60. A grid study, the main step in the CFD workflow, is carried out by producing different grids for our models. The number of grid cells in the three directions is changed to find the optimum model in which the results are independent of the grid. Figure 3 shows the schematic of the produced grid for our model. A structured grid is used since it offers higher accuracy in the finite-volume approach. Table 1 presents the details of the grid study. For the grid-independence analysis, four grid resolutions are generated and simulated in the first step. The heat load on the main body is compared for the produced grids (Table 1), and it is found that the fine grid with 1,628,000 cells is sufficient for grid-independent results. Results and discussion The comparison of experimental and numerical data with our results is done to perform validation. This step is important since it confirms the correctness of the applied method for the simulation of the chosen case. As presented in ref. 74, the variation of normalized pressure along the nose agrees reasonably with other methods. The deviation of the obtained results from other techniques is not more than 8% for the simple nose cone at supersonic flow. Streamlines and coolant distributions for three lateral jets located on the stem of the spike are displayed in Fig. 4. The deflection of the main stream and the diffusion mechanisms of the helium and CO2 jets in these configurations are noticeable in these models. The main effect of these jet locations is on the deflection of the main stream, while the circulation regime in these models is almost identical. Due to the high penetration rate of helium, this gas deflects the bow shock at higher angles. The features of the shock interactions for the lateral injection systems are demonstrated in Fig. 5. The main difference among the jet locations on the spike is related to the interaction of the separation shock with the barrel shock of the coolant jet. In fact, this interaction results in deflection of the bow shock and limits the interaction of the separation shock with the main body. As the jet location moves to the tip of the spike, the angle of the bow shock increases and the separation layer does not touch the main body. Therefore, the heat transfer on the main body decreases. The main difference between these coolant jets is associated with the shape and size of the barrel shock, while their effects on the bow shock are almost identical. To evaluate the strength of the bow and barrel shocks, Fig. 6 shows the temperature contours on the midplane for the different lateral injection systems. When the lateral injection occurs in the vicinity of the main body, the hot region is near the tips of the disks, where the interaction of the bow shock with the disk results in a high-entropy region. As the coolant injection moves to the tip of the spike, the high-temperature region becomes restricted between the barrel shock and the bow shock, which confirms the strength of the bow shock. It is also found that the strength of the deflected shock for the helium jet is less than that of the CO2 jet. Besides, as the coolant jet moves toward the main body, a larger portion of the body is under the influence of the cool fluid. Figure 7 illustrates the three-dimensional features of the coolant layer to disclose the diffusion of these two gases in the different lateral injection systems.
Based on the obtained contours, the diffusion of the helium into the main bow shock causes fluctuations, and a segment of the coolant is diverted toward the main body of the nose cone. This effect is visible in the heat transfer rate displayed in Fig. 8. The heat transfer rate on the disk and main body indicates the diffusion mechanism of the coolant and its effects on the heat load of the nose and disk. As expected, a high heat transfer rate occurs at the tip of the disk because of shock deflection. The effect of the coolant location is also noticeable in the heat transfer of the main body. Figure 9 demonstrates the effect of the different lateral coolant injection systems on the total heat load reduction on the main body and spike assembly. The obtained data indicate that injection of the CO2 jet is more efficient than helium for cooling of the body and spike assembly. In fact, this is due to the shielding effect of the CO2 gas, since it has lower diffusivity than helium. Conclusion This study investigates the importance of the lateral jet for the thermal management of a nose cone with MRD flying at hypersonic speed. A three-dimensional model is used for the investigation of the flow and heat transfer near the nose cone and spike assembly. The flow analysis and coolant gas distribution are compared for two coolant gas types, helium and carbon dioxide. The influence of the coolant gas on the compression shock and bow shock near the spike and main body is examined. The mechanism of cooling at different jet locations is also investigated to achieve the optimum configuration for the thermal load reduction of the nose cone. Our results show that deflection of the main bow shock by the coolant jet near the spike tip has a great impact on the reduction of aerodynamic heating. Data availability The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
2023-04-23T06:17:36.958Z
2023-04-21T00:00:00.000
{ "year": 2023, "sha1": "7539f5f382c0199617621ea02fbfc088289712de", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "b23ee3d85501c9bb15a0f5884cdf70645635b677", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
265631347
pes2o/s2orc
v3-fos-license
Mimansa Principle of Interpretation: Numerous scriptures have been found that have been crucial to understanding the Hindu texts. These texts included the complex procedures for determining the true meaning of terms and expressions found in the Vedas and Puranic texts. Among them, the Mimansa is the most significant scripture containing the guidelines for this kind of interpretation. Hindu civilization and culture developed complex norms of interpretation even in their earliest days. Smritis were interpreted according to the guidelines provided by "Jaimini," the author of the Mimamsa Sutras, which were first intended for srutis. One could refer to Mimansa as the Dharmasasthras' stepping stone. This article deals with the various axioms and the scientific nature of interpretation, how it differs from Maxwell's interpretation, and how it is applied in the current scenario. Introduction: Jaimini established the Mimansa Rules of Interpretation, which are our customary guidelines for interpretation. Shabar, Kumarila Bhatta, Prabhakar, and others expounded the Sutras of Jaimini. Our great jurists, such as Vijnaneshwara (author of Mitakshara), Jimutvahana (author of Dayabhaga), Nanda Pandit, and others, would frequently refer to these Mimansa Principles whenever they discovered a contradiction between the numerous Smritis or any ambiguity, incongruity, or casus omissus within. The objective of Mimamsa is to provide guidelines for interpreting the Vedas, which are the oldest texts in Hinduism, as well as a rationale for the philosophical significance of observing Vedic rites. The Mimansa principles were originally developed for religious purposes, but because they were quite reasonable and logical, they later found application in other fields such as law, grammar, logic, and philosophy. Among the six Indian philosophical systems (darshans), the Mimansa system is essential to Vedanta and had a major effect on the creation of Hindu law. Axioms of interpretation: For the interpretation of the shastras, six axioms of interpretation have been developed. These are: 1. The Sarthakyata axiom, according to which each word and sentence must have a meaning. 2. The Gauravah doshah, or Laghava axiom, which asserts that the construction that shortens and simplifies the meaning is preferred. 3. The Arthaikatva axiom, which asserts that a word or sentence occurring in the same place should not have two meanings. Such a double meaning is a flaw (dosh) known as Vakyabheda. 4.
The Gunapradhan Axiom, which says that a word or sentence that seems to express a subordinate notion should either be changed to reflect the major idea or ignored altogether if it conflicts with it.The saying, "bigger fish eats smaller fish" (matsyanyaya27) can be used as an analogy to demonstrate this idea.To shed insight into this, consider the fact that Yajurveda verses are usually said quietly and Samaveda verses are typically recited loudly.The Gunapradhan axiom was employed to determine that recitals must be made in a softer voice because they are required to be recited as part of the Yajurveda rituals.This helped to settle conflicts in some Yajurveda ceremonies, such as Agnyadhana (Primary), which involves the recitation of the lines of Samaveda (Accessory).This is so that the Accessory can fulfil its responsibility of ensuring that the Primary's goal is achieved, as it was created with the Primary in view.5.The Samanjasya Axiom, which says that every effort should be made to reconcile writings that seem to be at odds with one another.This idea has been used by Jimutvahana to resolve discrepancies between Manu and Yajnavalkya's texts about the succession rights.The Nashtasvadagdha Ratha maxim, which was utilised to resolve the discrepancies between the Manu and Yajnavalkya Smriti texts about self-acquired property and ancestral property, serves as one example of the Samanjasya axiom.It is based on a story in which two men set out on a voyage in separate horse-drawn chariots, and when a fire burst out, one man lost his horse while the other's chariot was destroyed by fire.So, utilising the last horse and chariot, they both completed their voyage together for mutual benefit.i As a result, it is well-established that contradictory provisions should, whenever feasible, be interpreted to complement one another since the court has an obligation to prevent "head-on clashes among the provisions of the statute."6.The Vikalpa axiom, which says that the rule more in accordance with fairness and usage should be accepted at one's discretion if there is a genuine and irreconcilable disagreement between two legal norms of equal force.Consequently, when a regulation is a higher legal standard as according to the Badha principles, one takes precedence over the other when comparing, for example, a Shruti and Smriti. Characteristics of mimansa school: Mimamsa schools are characterized by the following: • Emphasis is placed on the interpretation of Vedic texts such as the Samhita and Brahmana; • They contend that the Vedas contain the ultimate truth and are the source of all knowledge; • While performing rituals may help one attain paradise, understanding the rationale behind Vedic ceremonies is equally necessary; • One must comprehend this rationale in order to perform the rites properly and earn atonement; • A person's actions determine their strengths and weaknesses; • If their good deeds persisted, they would enjoy the pleasures of heaven; • However, they will be immune to the eternal cycle of life and able to break free from the never-ending cycle after they have atoned.• Purva Mimamsa is a Karma-Mimamsa system that studies Vedic teachings through Karma-Kanda ceremonies. 
• The Mimamsa school emphasises the necessity of performing a Yagya in order to receive material and spiritual advantages.• As a result, the philosophical foundation of the Vedas is provided by the Samhita (and Brahmana) sections.• This worldview placed a strong emphasis on the Vedic ritual aspect, which is performing Vedic procedures to achieve salvation. • The Brahmanas employed this strategy to maintain their authority over the populace and to maintain control over the social system. Scientific nature of Mimamsa principle: The division of concepts into categories and subcategories for simple understanding demonstrates the scientific and systematic character of the Mimansa principle.The Vakya principle, for instance, used the subcategories Adhayaahra and Anusanga to fill in words and expressions that were missing, and Upakarasha and Apakarsh to move clauses within sentences so that they could be understood clearly. Notably, there are contemporary interpretation guidelines that like Maxwell's, which allow for violence in certain scenarios, just as the statute does.According to the Supreme Court of India, "courts can sometimes supply words which have been accidentally omitted" in S.S. Kalra v. Union of India.ii In Tribhuwan Misra v. D.I.O.S the Saamanjasy concept of interpretation was applied in to reconcile two opposing division bench findings.This was done on the basis of the aphorism "lost horse and burned chariot" (Nasarhatasva Dagdhartha Nyaya). In Mahabir Prasad Dwivedi v. State the Anusanga principle of interpretation was applied in-depth to make the statute more democratic and equitable, something that could not have been accomplished with Western principles. In Vinay Khare v. State of U.P. the Allahbad High Court resolved the dispute over candidate selection by using the Laghava principle of interpretation and concentrating on the written exam rather than the inperson interview to reduce the possibility of bias, favouritism, and arbitrariness.The candidates received equal marks overall. Mimansa vs maxwell: There are two different legal interpretation theories: Mimamsa and Maxwell.While Mimamsa principles are a scientific system of interpretation that was developed in India from very early times, Maxwell's principles of interpretation are primarily used in Western law courts.Interpreting the law so that it can be successfully applied to a specific scenario before him is one of a judge's main responsibilities.The foundation of Maxwell's rules of interpretation is the notion that a statute's language should be interpreted normally and that the legislature's aim should be inferred from the words used. On the other side, the Mimamsa principles are more thorough and systematic.While Maxwell's concepts are limited to the interpretation of statutory law, they can also be applied to the interpretation of judgements.iii The Mimamsa principles are superior to Maxwell's principles of interpretation in two ways: 1. they are more comprehensive and methodical, and 2. 
they can be applied to the interpretation of judgements as well as statutes, whereas Maxwell's principles are limited to statutory law interpretation.Adhyahara is the term for casus omissus in Mimamsa.The adhyahara concept allows us to amend a legal document.Nonetheless, Maxwell's lack of further explanation and reference of the subcategories falling under the broad category of casus omissus illustrates the superiority of the Mimamsa principles over his concepts in this specific field.The usefulness of Mimamsa principles is not diminished by the fact that they occasionally produce diverse outcomes.Different outcomes are also produced using Maxwell's concepts.This merely serves to highlight the need for care when applying interpretation principles.Interpretation principles make good servants but bad masters.Just because something is foreign doesn't mean it has to be rejected.Westerners have a lot to teach us that is beneficial. Use of mimansa in current legal system: The meaning of the legal provisions has been investigated through the use of the Mimangsa Rules of Interpretation.After citing a "Shloka," the Supreme Court took one of these principles into practise.In the case of UP Bhoodan Yagna Samiti, UP V. Braj Kishore, the Supreme Court of India made the following observation: "In this country, we have a heritage of rich literature, it is interesting to note that literature of interpretation also is very well known."Many Shlokas that have been recognised for hundreds of years have articulated the fundamentals of interpretation.In Beni Prasad v. Hardai Bibi, Sir John Edge, the Chief Justice of the Allahabad High Court at the time, made reference to the Mimamsa concept.Similar to this, the Gunapradhan Axiom of the Mimamsa principle had been applied in Amit Plastic Industry, Ghaziabad v. Divisional Level Committee, Meerut to interpret section 419 of the UP Sales Tax Act.In the cases of M/s Ispat Industries Ltd vs. Commissioner of Customs and M/s Craft Interiors Pvt.Ltd vs. Commissioner of Central Excise, the Supreme Court recognised the significance of the Mimansa Rules of Interpretation. Conclusion: In summary, the Mimansa Principles provide a customary framework for interpreting legal texts, especially when it comes to Hindu law.The Mimansa Principles were initially developed for the purpose of understanding religious texts, but they are now seen as sufficiently reasonable and scientific to be used to the interpretation of contemporary laws and rulings.With a flexibility and reason lack from Western principles of interpretation, the Mimansa Principles offer a distinctive method for statutory interpretation.They are characterised as scientific and rational, with the goal of improving the democracy, equity, and reason of the law.The Indian legal system has recognised and applied the Mimansa Principles, proving their applicability in contemporary statute interpretation.When used by judges, they can be an effective tool for reshaping the law to make it more democratic, fair, and logical.The Mimansa Principles have a historical basis in religious practises, but they have progressively found application in other domains like as philosophy and law, demonstrating their flexibility and relevance in modern legal interpretation.In general, the Mimansa Principles give an alternative perspective to Western legal principles by providing distinctive and historically grounded method of interpreting legal texts within the framework of Hindu law.
Chapter 8: Human Papillomavirus Infection and Penile Cancer: Past, Present and Future

Introduction: Penile squamous cell carcinoma (PSCC) is an uncommon malignant tumor, which accounts for less than 1% of adult male cancers in North America and Europe, but is markedly more frequent in developing regions, such as Asia, Africa and South America, representing up to 10% of tumors in men. Human papillomavirus (HPV) infection has been shown to play an important role in penile cancer pathogenesis. In 2009, a systematic review of the published literature found that 40% of penile tumors were HPV-related, and that HPV type 16 was the most common subtype in this group (Backes et al., 2009). Another interesting relation between HPV infection and penile cancer is the finding that specific histological subtypes are associated with HPV infection. Penile carcinomas with basaloid differentiation and warty features have shown a strong association with HPV infection, with recent studies showing that HPV infection is present in 76% of basaloid tumors, while its presence in verrucous cancer was 24.5% (Backes et al., 2009).

The recent literature suggests that the oncogenic potential of HPV integration into the host DNA genome, and the virus's ability to manipulate cell cycle regulators, are responsible for the establishment and maintenance of HPV genomes in the squamous epithelium and for HPV-related PSCC, resulting in deregulated expression of oncoproteins such as E6 and E7. The oncoprotein E6 is known to induce degradation of the tumor suppressor protein p53, and the oncoprotein E7 binds to the retinoblastoma protein (pRb). Thus, the oncoproteins E6/E7 allow cells to evade cell cycle checkpoints and to enter the S phase of the cell cycle, leading to disruption of normal cell cycle controls.
Following cell division, infected cells leave the basal layer, migrate towards the suprabasal regions and begin to differentiate. Increased understanding of cervical pathogenesis has led to confirmation of HPV as an etiological agent for several cancers and, consequently, to the development of preventive vaccines targeting HPV antigens for the control of HPV-related cancers. The HPV vaccine was developed as a result of core technologies able to produce virus-like particles (VLPs), which, in turn, mimic the natural virus and elicit high titers of virus-neutralizing antibodies. With progress through advanced stages of clinical trials and further exploration of combinatorial strategies, there is great promise for significant advances in the field of therapeutic HPV vaccine development as well, not only for cervical cancer but also for several other malignancies related to HPV infection. Moreover, in this chapter we discuss the current status of HPV vaccines, as well as the most common associated factors that might interfere with the establishment of strategies to control HPV infection and the development of penile carcinoma associated with this infection.

Penile cancer: Penile malignancies are thought to arise from the accumulation of multiple mutations that may occur as a consequence of progressive genetic instability. This intricate process of genetic instability may be caused by environmental factors, such as a history of intense smoking, penile tears, phimosis, and poor genital hygiene habits (Chaux & Cubilla 2012). In addition, a recent study conducted by Chaux et al. (2011) also described poor education, chronic penile inflammation, genital warts and human papillomavirus infection (related to the number of sexual partners during a lifetime) as environmental factors contributing to malignant transformation.

There is a worldwide geographic difference in the occurrence of penile malignancies that could be caused by differences in socio-economic status and in cultural and religious conditions (Bleeker et al. 2009, Chaux & Cubilla, 2012). Higher incidences occur in tropical or subtropical regions of Latin America, Asia and Africa, whereas incidence rates are low in Europe, Japan, the USA and Israel (Cubilla 2009). Higher incidence rates have recently been reported in underdeveloped regions such as Africa, South America, and Asia (2-4/100,000 inhabitants) as compared with North America (United States) and Europe (0.3-1/100,000 inhabitants) (Chaux & Cubilla, 2012). Pow-Sang et al. (2010) also described penile malignancy prevalence rates among different populations: rates in developed countries such as Israel (0.1/100,000) and the United States (0.3-1.8/100,000) contrast with those in underdeveloped countries such as Uganda (2.8/100,000) and Brazil (1.5-3.7/100,000), once again confirming the geographical differences of the disease and the influence of a country's development.

Squamous cell carcinoma represents the vast majority of the histological subtypes of primary penile malignancies, with heterogeneous features due to differences in morphology, pathogenesis and prognosis (Hakenberg & Protzel, 2012; Stankiewicz et al., 2012; Syed et al., 2012). Knowledge of the origin and progression of penile squamous cell carcinoma depends on an intricate relation between anatomy and histopathology.
The anatomy of the penis is complex and has important implications for defining predictive risk models and delineating prognostic factors (Chaux & Cubilla, 2012). The same authors described three anatomical compartments in the penis (glans, foreskin and coronal sulcus) where malignant neoplasms may originate (Fig 1). However, penile malignant neoplasms have a predilection to originate first on the glans, followed by the inner mucosa of the foreskin; the coronal sulcus is rarely affected by a neoplastic entity. The anatomy of the penis plays a pivotal role in tumor invasion and the prognosis of cancer. Moreover, the TNM staging system is based, at least partially, on the involvement of these anatomical levels (Velazquez et al. 2010). The glans can be divided into 4 levels: squamous epithelium, lamina propria, corpus spongiosum, and corpus cavernosum (the corpus spongiosum and corpus cavernosum are subdivided by the tunica albuginea). As in the glans, the anatomical levels of the foreskin are squamous epithelium, lamina propria, dartos muscle, and outer skin (Chaux & Cubilla, 2012).

A previous study suggests that different tumor histological features could be related to the anatomical site. This hypothesis is sustained by the histological differences among the urethral segments and their corresponding neoplasms (Velasquez et al. 2005). Macroscopic features of SCCUT range from endophytic to irregular exophytic masses, presenting white-to-gray coloration, although reddish pigmentation can also be observed (Chaux et al. 2010). Microscopically, SCCUT is similar to oral, vulvar and cervical squamous cell carcinomas (Cubilla et al. 2001). SCCUT may vary from well-differentiated tumors to anaplastic entities. Another feature is keratinization, ranging from highly keratinized in well-differentiated tumors to scarce or minimal keratinization in anaplastic neoplasms (Chaux et al. 2010).

Evidence from the pertinent literature indicates the involvement of groin lymph nodes as the most relevant unfavorable prognostic factor predicting cancer-specific survival in patients with penile squamous cell carcinoma. Numerically, in the same review, the 5-year cancer-specific survival rate for those presenting cN0 tumors was between 75% and 93%, compared with a 5-year cancer-specific survival rate ranging between 20% and 34% for those presenting cN3 tumors. There is a substantial decrease in survival rates with N progression (Novara et al. 2007). Lopes et al. (2002) performed a study that aimed to investigate p53 in Brazilian patients with PSCC to establish a new prognostic factor for lymph node metastasis and its possible influence on prognosis. This study identified nodal stage as an independent risk factor influencing survival in both univariate and multivariate analyses. Gunia et al. (2012) showed that p16INK4a is a good prognostic marker for penile squamous cell carcinomas, surpassing the prognostic impact of histologically confirmed koilocytosis. In their study, p16INK4a expression predicted better cancer-specific survival rates. Furthermore, p16INK4a can be useful in differentiating subtypes of PSCC. According to Chaux & Cubilla (2012), warty carcinomas tend to be p16INK4a positive, whereas giant condylomas and papillary and verrucous carcinomas are consistently negative.

Medical record analysis of 145 men with penile squamous cell carcinomas was performed to identify prognostic factors for lymph node involvement (Lopes et al. 1996).
The authors found that lymph node metastasis correlates with tumor thickness and with lymphatic and vascular embolization. Interestingly, univariate analysis did not reach statistically significant values for pathologic stage of the primary tumor, clinical lymph node stage (cN), or histological grade. However, histological grade may be considered an important prognostic factor in penile squamous cell carcinoma. According to Cubilla (2009), these prognostic factors are predictive of nodal spread, metastasis and tumoral dissemination.

Currently, different methods are employed to grade penile squamous cell carcinomas. For instance, Akhter et al. (2011) use Broder's system as the histological grading system in squamous cell carcinoma. In Broder's system, penile squamous cell carcinoma is stratified into 4 grade levels based only on the differentiation of the cells: Grade I (well differentiated), presenting <25% undifferentiated cells; Grade II (moderately differentiated), <50% undifferentiated cells; Grade III (poorly differentiated), <75% undifferentiated cells; and Grade IV (anaplastic/pleomorphic), >75% undifferentiated cells.

HPV: Papillomaviruses are a family of pathogens that infect exclusively the epithelial tissues of amphibians, reptiles, birds and mammals (Franceschi, 2005). The viruses are grouped according to the anatomic site of infection and their preference for either cutaneous or mucosal squamous epithelium. The cutaneous types, or beta papillomaviruses, are usually found in the general population and cause common warts. In contrast, the alpha, or mucosotropic, papillomaviruses have been implicated in mucosal infections. The multiplicity of functions of the small papillomavirus oncoproteins, E5, E6 and E7, has continued to be studied over the last decades, and several mechanisms are now well established. Specifically, more than a dozen protein-protein interactions between E6 and cellular proteins have been shown (Villa et al., 2002). From a carcinogenic point of view, the E6 and E7 ORFs are considered to play the most important roles, encoding oncoproteins that allow viral replication and the immortalization and transformation of the epithelial cells that host the HPV DNA (Doorbar et al., 1991).

Proving the importance of p53 and pRb in cell cycle progression, the repression of HPV 16 E6 and E7 expression by dual shRNA transfection has been shown to be capable of restoring the p53 and pRb tumor suppressor pathways and activating apoptosis (Psyrri et al., 2009, Rampias et al., 2009). Thus, the demonstration of this tumor suppressor inactivation by the E6 and E7 HPV oncoproteins has provided a basic explanation for how the high-risk HPV types exert their oncogenic effects on cervical cells, and this explanation is under investigation in relation to other sites of HPV infection. This is particularly important in penile cancer-associated HPV infection, where HPV16 seems to play a pivotal role, accounting for more than 60% of HPV-related tumors.
HPV impact in squamous cell homeostasis: Different from other viruses, HPV does not infect or replicate in the antigen-presenting cells (APCs) of the epithelium, nor does it induce cell lysis, which is a key escape mechanism to avoid APCs recognizing and producing antigens derived from the virion and alerting the immune system. More than 50% of infections lead to seroconversion in patients, but the production of antibodies usually occurs only months after the initial infection (Vidal & Gillison, 2008). The life cycle of papillomaviruses is closely tied to the epithelial differentiation process. Infection occurs exclusively in squamous epithelial cells, with a preference for the keratinocyte stem cell as the initial target of HPV infection, which allows the maintenance of viral replication (Vidal & Gillison, 2008). The route of entry for HPV infection is microtraumas or small wounds in the skin or mucosal surface, which are particularly important in penile HPV infection. These breaks in the epithelial surface allow the virus to access and persist in the nuclei of infected basal layer cells of the epithelium. Until now, no single receptor has been definitively identified and established as being responsible for HPV entry, although it is believed that receptors closely related to wound healing might be preferential targets for HPV infection, such as α6 integrin and the glycosaminoglycan heparin (Vidal & Gillison, 2008).

As with most viruses, HPV uses the host cell DNA machinery to maintain the production of viral progeny. This mechanism of viral-induced cell growth is very well known and is analogous to that of other viruses that disrupt the control of cell growth (Hebner & Laimins, 2006). Following cell division, as the basal cells divide into squamous epithelial cells, HPV establishes its DNA genome in the host cell nuclei, replicates and reaches a high copy number. Infected cells then leave the basal layer, migrate toward the suprabasal regions and begin to differentiate. In the basal layer phase, the HPV genome is maintained at a low copy number, providing a stock of viral DNA for further use in cell divisions. At the same time, 'early' viral genes (E5, E6 and E7) are expressed, resulting in enhanced proliferation of the infected cells and their lateral expansion. This provides an important microenvironment for cellular growth aberrations and is particularly important in penile pre-neoplastic lesions. Several authors have reported a higher level of HPV detection in PIN when compared to penile cancer, pointing to a role for HPV in the development of tissue growth abnormalities and preparing the soil for carcinogenesis. In this model, HPV would be an important co-factor in penile pre-neoplastic development.

Owing to all of these effects on cell growth and the loss of cell cycle control, HPV contributes to the formation of lesions such as PIN, in association with other important factors in penile carcinogenesis (general hygiene, phimosis, chronic inflammation, a high number of sexual partners).
Prevalence of HPV infection in penile squamous cell carcinoma and histological considerations: In contrast with the high prevalence of HPV infection in cervical carcinomas, in which it may be detected in almost 100% of cases, in penile carcinomas the detection rate is considerably lower, although HPV remains important in penile pathogenesis. According to current evidence, penile cancer can follow two distinct etiologic pathways: one is related to environmental factors, such as phimosis, smoking, poor personal hygiene and chronic inflammation; the other is HPV-related penile cancer (Rubin et al., 2001). Squamous cell carcinoma of the penis is currently divided into 12 subtypes. Each of these subtypes shows distinctive outcomes, and this high number of subtypes makes it difficult to characterize the disease. About half of penile cancers are of the usual squamous histology, while the rest is distributed among the special types.

Basaloid carcinomas: represent 4-10% of penile tumors. Macroscopically, these tumors show an ulcerative aspect, presenting as solid, firm, invasive neoplasms with foci of necrosis. Microscopically, they present a nesting pattern, with each nest being solid or showing central necrosis (comedonecrosis). Keratinization can be observed, although it is not pathognomonic. Cells present as small, basophilic, basaloid, spindle or pleomorphic, with an abundance of mitotic and apoptotic figures. Perineural and vascular invasion are often seen.

Warty carcinomas: represent 7-10% of all cases. They can be described as verruciform tumors with an exo-endophytic appearance, although a rare non-invasive exophytic tumor may be found. Histologically, a classical condylomatous papilla is observed, with an arborescent pattern, a central fibrovascular core, and keratinized cells, with the presence of superficial and deep pleomorphic koilocytosis. Different from giant condylomas, in warty carcinomas these cells are typically malignant. Also, as a differential diagnosis, low-risk HPV or negative p16INK4a status favors a condyloma diagnosis. Prognosis is often good, with no signs of nodal involvement, although it might be present in deeply invasive warty carcinoma (Chaux & Cubilla, 2012).

Verrucous carcinomas: represent 3-8% of cases. Macroscopically, they are classically characterized by an exophytic, verrucoid white lesion, with a clear base separating them from the stroma. Microscopically, they are acanthotic, papillomatous neoplasms with a high degree of differentiation. As with most well-differentiated tumors, they have a good prognosis, only presenting metastasis when areas of poor differentiation are present. However, if a tumor presents large undifferentiated areas, it is classified as a mixed verrucous carcinoma, as the classical verrucous carcinoma is by definition a well-differentiated tumor.

Papillary carcinoma: represents 9-10% of all cases. It is also a verruciform tumor, diagnosed after excluding the possibility of a verrucous or warty tumor. Macroscopically, it is observed as a large exophytic tumor with a clear, jagged interface with the stroma. Microscopically, papillomatosis is observed and a low-grade histology is present. Different from verrucous carcinoma, acanthosis is not so prominent, and differently from warty carcinoma, there is no koilocytosis. They have an excellent prognosis, with very infrequent metastasis.
Sarcomatoid carcinomas: correspond to 1-3% of cases. Macroscopically, they are hemorrhagic and necrotic, or polypoid tumors. Microscopically, they can mimic several sarcomas, such as leiomyosarcomas, osteosarcomas, or fibrosarcomas. They are observed as tumors with two different cellular presentations, with the presence of epithelial and spindle cells. They are typically located in the glans, not in the corpora cavernosa, and may present foci of associated penile intraepithelial neoplasia. Immunohistochemical staining with high-molecular-weight cytokeratins and p63 supports the diagnosis.

Other mixed tumors are rare, which makes it very difficult to establish their relationship with HPV infection; they comprise several subtypes, such as Pseudohyperplastic Carcinoma, Carcinoma Cuniculatum and Pseudoglandular Carcinoma.

As stated before, distinct pathological variants of PSCC are associated with an indolent behavior (eg, verrucous, warty and Buschke-Lowenstein condyloma) and others with more aggressive forms (eg, usual SCC, basaloid and papillary). For basaloid and warty carcinomas, HPV infection is present in 80-100% of all cases. It is important to remember that in situ SCC also seems to be strongly related to HPV infection (Kayes et al., 2007). It seems plausible, then, that HPV infection is far more important as a co-factor that prepares the soil for a neoplastic malignant transformation, given the several pathways through which HPV infection contributes. This is in accordance with the theory of two major pathways in penile cancer development, one driven by factors such as poor hygiene, the presence of phimosis and chronic inflammation, and another driven by high-risk HPV infection (Rubin et al., 2001). As discussed above, this represents an astonishing opportunity for a new approach to preventing this disease, as HPV vaccination research is under constant evolution. As a public health matter, the prevention of HPV infection might avoid the development of these HPV-related subtypes in men, if the current knowledge of HPV-driven malignancy is right.

HPV-status impact on outcome in penile carcinomas: Although there are not many studies investigating a prognostic role for HPV infection in penile carcinomas, some studies maintain HPV as a controversial factor in terms of survival, local metastasis, and lymph node involvement. From the three most important studies, it is still unknown whether HPV alone may have an impact on the overall survival of penile cancer patients, as demonstrated in several other solid tumors, such as those arising in the oral cavity and oropharynx (Lont et al., 2006). In a study conducted by Cubilla et al.
(2010), HPV-16 was the most prevalent genotype (72% of all cases), followed by HPV-6 (9%) and HPV-18 (6%). The 16 and 18 genotypes (high-risk HPV types) were proposed to be associated with aggressive variants of penile tumors and with a poorer outcome in these patients. In several studies, the role of HPV infection in penile cancer could only be observed by indirect means, such as the observation that HPV-infected PSCCs were those with more aggressive subtypes, such as basaloid and warty tumors. It is therefore believed that HPV infection, especially with HPV-16 and -18, represents a more aggressive subtype, with worse survival when compared to HPV-negative PSCCs. However, directly comparing HPV expression and survival curves, the most extensive study on high-risk HPV infection was carried out by Lont and colleagues, who demonstrated that penile tumors presenting high-risk HPV infection had a better outcome than those tumors in which high-risk HPVs were not detected. Interestingly, HIV infection did not show a correlation (Lont et al., 2006).

HPV vaccine: In many countries, vaccines against some HPV types are administered to girls and young women with the goal of protecting them against HPV-induced cervical cancer (Villa et al., 2005; Muñoz et al., 2010). The introduction of HPV vaccines has also drawn more attention to the fact that HPV is associated not only with cervical cancer and genital warts but also with other tumors, such as head and neck and anogenital cancers (Zur Hausen, 2006).

Although the majority of HPV vaccine research has focused on cervical cancer, some vaccine developers have targeted other diseases related to different strains of HPV. Emerging results from vaccine trials have suggested that some cross-protection is possible. Vaccines against cervical cancer also have the potential to prevent other cancers that are caused by the same types of HPV (Herrero et al., 2003, Kreimer et al., 2005), and half or more of anogenital cancers outside the cervix, including cancer of the vulva, vagina, penis, and anus (Daling et al., 2005, Gross & Pfister 2004). Theoretically, these vaccines should also work against the same viruses at other anatomical sites, which would be of great value for the majority of patients. Since different HPV-related diseases share the same basis of contamination (e.g., HPV transmission during sexual contact may occur in anogenital, cervical and even head and neck areas), and almost all HPV-related tumors share at-risk individuals with the same behavior, it is believed that this preventive potential, directed at several organs, could reduce the prevalence of several tumors simultaneously. If proven to do so, this approach would represent a major conceptual breakthrough, not only in the prevention of these diseases but, equally importantly, by providing the 'missing link' in the chain of evidence for the final proof of HPV etiology in these tumors (Syrjänen, 2010).

The US Food and Drug Administration (FDA) approved Gardasil for females aged 9-26 in 2006. In October 2009, the FDA approved Cervarix for use in females aged 10-25 and approved Gardasil for use in males aged 9-26 to prevent genital warts and to prevent the spread of cervical cancer. Moreover, the FDA (2010 and 2010a) has stated that the dosing and administration schedule should be 0.5 mL administered intramuscularly (preferably in a deltoid muscle) on a 3-dose schedule. The second dose should be administered 1 to 2 months later, and the third dose should be administered 6 months after the first dose.
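The 3-dose schedule above is simple calendar arithmetic. The sketch below is a minimal illustration in Python; the first-dose date and the helper name are hypothetical and are not part of the FDA guidance.

```python
# Minimal sketch of the 3-dose schedule described above (0, 1-2, and 6 months).
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months, clamping the day if needed."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

first_dose = date(2012, 1, 15)  # hypothetical first-dose date, for illustration only
second_dose_window = (add_months(first_dose, 1), add_months(first_dose, 2))
third_dose = add_months(first_dose, 6)

print(f"Second dose: between {second_dose_window[0]} and {second_dose_window[1]}")
print(f"Third dose:  on or after {third_dose}")
```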
Although clinical trials of Gardasil and Cervarix have been extremely promising, these first-generation VLP vaccines may not be the ideal vaccine candidates, especially in already-infected patients and in older men and women.

The most recent report on the quadrivalent human papillomavirus vaccine presents important facts about immunization practices and provides excellent results. The efficacy for prevention of HPV 6-, 11-, 16- and 18-related genital warts was 89.3% when used as a prophylactic vaccine in those who had received 3 doses and were seronegative at day 1. In males who had received only one dose, regardless of serology or previous infection, the efficacy was 68.1%. This efficacy was also confirmed by several other trials in female patients, with >98% efficacy in preventing HPV 6-, 11-, 16- and 18-related grade 2 or 3 cervical intraepithelial neoplasia or adenocarcinoma in situ (CDC MMWR, 2011).

Another important issue in the vaccination process is to determine which individuals belong to higher-risk populations, in order to achieve better efficacy and to reduce the high costs involved in vaccine production and distribution. Based on the incidence of HPV infection among several groups, the probability of being infected, especially with subtypes 16 and 18, is higher in the men who have sex with men (MSM) group than in heterosexual men (Heiligenberg, 2010). Several diseases have a higher incidence in the MSM group, such as anal intraepithelial neoplasia (AIN), anal cancers, and genital warts (Jin et al., 2007). Another important group that might benefit from HPV immunization is HIV-positive patients, although it is not clear whether immunization can provide long-lasting antibody titers against HPV 6, 11, 16, and 18, or how immunosuppressed patients would react to the HPV4 vaccine in terms of safety and adverse reactions. However, as HPV4 is not a live vaccine, it can be safely administered to persons at the highest risk, such as immunocompromised individuals (for example, those with HIV infection, drug-induced immunosuppression, or disease-related immunosuppression).

Researchers are now actively working to develop better prophylactic HPV vaccines that may be effective against a broader range of HPV types and have a longer shelf life.

The HPV therapeutic vaccines and their perspectives: Immunotherapy offers an attractive alternative treatment strategy because it can address both the underlying HPV infection and the visible lesions. Moreover, immunotherapy can target all HPV-associated lesions, regardless of location, and induce long-lasting immunity, thus preventing recurrence (Chu, 2003; Stanley, 2012).

A judgment of whether therapeutic HPV vaccine candidates have a real effect on disease has been difficult because most trials have not been placebo-controlled and, more importantly, it is still not clear for how long these patients can maintain high levels of immune response, as they have already been infected. The vaccines have also shown, at best, limited efficacy in eradicating established tumors, although the fact that they have mostly been tested in advanced-stage cancer patients with compromised immune systems may have limited their impact (Brinkman et al., 2005).
Perhaps the ideal HPV vaccine strategy calls for a vaccine that possesses both prophylactic and therapeutic properties. A chimeric vaccine of this type could both prevent new HPV infections and clear existing infections. Moreover, such a vaccine would benefit, and could be administered to, both sexually inexperienced young individuals and older individuals who already harbor HPV (Franceschi, 2005). Of course, the costs of vaccinating a larger number of individuals need to be estimated, so that the HPV vaccine does not become an expensive waste of the health budget. It is important to remember that, although some groups are at risk, not all individuals in a risk group will develop an HPV-related cancer. Therefore, before implementing HPV vaccination in a wide range of patients, the populations to be included in the vaccination process need to be better defined, and new guidelines should be developed to identify individuals who, even within risk groups, have a higher risk of HPV infection and spread. Opportunities for primary and secondary prevention should be assessed, including the use of HPV vaccines to prevent infection and therapeutic vaccines in the adjuvant setting for locoregional recurrence and distant disease (Marur et al. 2010).

Combined with the fact that no therapeutic vaccines currently exist for other diseases, this goal makes therapeutic HPV vaccine development a challenging task.

Final considerations: Several aspects remain to be discovered in the field of penile cancer and HPV infection, and although research in the last decade has not been able to define a causal role for HPV infection, considerable progress has been made in this matter. The genomic detection of HPV DNA, primarily in some subtypes of PSCC, provides stronger support for a viral etiology in this disease and corroborates the idea that there are at least two main pathways in penile carcinogenesis, one of which is closely related to HPV.
Targeted therapy for PSCCs now demands more predictive biomarkers, such as the HPV infection status and mutation status of crucial genes, which could contribute to personalized treatment for each individual and decrease the inherent morbidities.However, for a better understanding of whether the HPV status of tumors has real therapeutic implications in affecting the clinical outcome, upcoming clinical trials should be significantly standardized in their design and performed on PSCC, which have been adequately selected and classified with respect to the different penile carcinoma subtypes.Moreover, we suggest that a more defined consensus in the histological classification in PSCCs should be utilized to improve HPV detection and provides means to compare studies in different populations.This is highly remarkable as is now fully accepted that penile carcinogenesis is quite dependent of local characteristics, and varies worldwide. Figure 1 . Figure 1.Paraurethral longitudinal section presenting anatomical levels of the Penis.CC: Corpus Cavernosum; CS: Corpus Spongiosum; LP: Lamina Propria; SF: Skim of the Foreskim and TA: Tunica Albuginea.(Adapted from Chaux & Cubilla 2012).Recently, Hernandez et al. (2008) performed an epidemiological study with 4967 United States men with the diagnosis of penile squamous cell carcinoma.Thirty four percent of patients (1712) presented neoplasms arising in gland, 13.2% in prepuce, 5.3% in penis shaft, 4.5%, in overlapping of penis, and 42,5% in unspecified site.Lesions generally initiate on the glans and slowly extend to involve completely the glans and shaft of the penis.During the neoplasm progress Buck's fascia act as a natural barrier to local tumor invasion defending the corporal bodies from tumoral expansion (Pow-Sang et al. 2010).This assessment is schematically illustrated in figure 2. lateral expansion, working to spread infection cells throughout epithelial tissue.While the basal cells and viral DNA divide, some daughter cells may be maintained in the basal layers, whereas other daughter cells move toward the upper layers of the epithelium and begin to differentiate.During this process in which the infected cells enter into the suprabasal layers, the viral genome replicates to a higher copy number; 'late' viral gene (L1 and L2) expression is initiated; and structural proteins, as such capsid proteins, are formed.Subsequently, virions are assembled and released as the upper layer of epithelium is shed(Fehrmann & Laimins, 2003;Scheurer et al., 2005;Vidal & Gillison, 2008). Figure 3 . Figure 3. Representation of normal and HPV-infected epithelium according to the cellular differentiation and the differentiation-dependent viral functions (Adapted from Hebner & Laimins, 2006). Table 1 . Cubilla et al. (2009)aton et al. 2001nomas (SCCs) of the penis (Adapted fromChaux & Cubilla 2012)entiated cells; grade III (Poorly differentiated) presenting <75% undifferentiated cells and grade IV (Anaplastic/Pleomorphic) >75% undifferentiated cells.Cell anaplasia degrees are also pointed as common approach to determine Penile Squamous Cell Carcinomas grading(Mikuz et al. 2004;Slaton et al. 2001), absence of anaplasia (well differentiated cells), grade 1; grade 2, moderately differentiated (<50% anaplastic cells); and grade 3, poorly differentiated (>50% anaplastic cells).Cubilla et al. 
Cubilla et al. (2009) reported a method to grade penile squamous cell carcinomas. Carcinomas with minimal deviation from the normal/hyperplastic morphology of the squamous epithelium are considered Grade 1 (extremely well differentiated). Grade 3 tumors show any proportion of anaplastic cells, identified as solid sheets or irregular small aggregates, cords or nests of cells with little or no keratinization, a high nuclear-to-cytoplasmic ratio, thick nuclear membranes, nuclear pleomorphism, clumped chromatin, prominent nucleoli and numerous mitoses. Grade 2 comprises the remaining tumors. Grading both extremes of the spectrum is simple and reproducible.

Recent reviews have highlighted the prevalence of HPV infection in penile cancer, with an average prevalence of 47% to 48% across more than 60 studies (Backes et al., 2009; Miralles-Guri et al., 2009). Differently from cervical cancer, in penile cancer the prevalence of HPV infection varies according to the histological subtype, being strongly prevalent in basaloid and warty carcinomas and less prevalent in keratinizing variants, such as verrucous, papillary and usual carcinomas (Guimarães et al., 2011). Before understanding the relationship between HPV and specific histological subtypes, a basic knowledge of penile cancer histology is required.
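As a minimal illustration of the anaplasia-based grading convention summarised above, the sketch below encodes the three-tier rule. The function name and the handling of a value of exactly 50% (which the text does not specify) are assumptions for illustration only, not part of any cited grading standard.

```python
def anaplasia_grade(pct_anaplastic_cells: float) -> int:
    """Three-tier grading of penile SCC by the proportion of anaplastic cells,
    as summarised in the text (Mikuz et al. 2004; Slaton et al. 2001):
    grade 1 = absence of anaplasia, grade 2 = <50% anaplastic cells,
    grade 3 = >50% anaplastic cells. Exactly 50% is not covered by the source
    and is assigned to grade 3 here by assumption."""
    if pct_anaplastic_cells == 0:
        return 1
    return 2 if pct_anaplastic_cells < 50 else 3

# Hypothetical examples only, not data from the chapter.
assert anaplasia_grade(0) == 1
assert anaplasia_grade(30) == 2
assert anaplasia_grade(80) == 3
```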
Association between vitamin D level and hematuria from a dipstick test in a large scale population based study: Korean National Health and nutrition examination survey Background Vitamin D deficiency is an important health concern because it is related to several comorbidities and mortality. However, its relationship with the risk of hematuria remains undetermined in the general population. In this study, we analyzed the association between vitamin D deficiency and hematuria. Methods We conducted cross-sectional analysis using data of participants from the Korean National Health and Nutrition Examination Survey (KNHANES) 2010–2014. A total of 20,240 participants, aged ≥18 years old, were analyzed. Serum 25-hydroxyvitamin D (25(OH)D) levels were measured in a central laboratory and hematuria was defined as ≥1+ on a dipstick test. Multivariate logistic regression was conducted to calculate the odds ratio (OR) of hematuria risk according to serum 25(OH)D quartiles, after adjusting several covariates. Results A total 3144 (15.5%) participants had hematuria. The mean 25(OH)D level was 17.4 ± 6.2 ng/mL (median, 16.6 ng/mL (interquartile range, 13.1–20.8 ng/mL)). The 3rd and 4th quartiles had a higher risk of hematuria than the 1st quartile, with adjusted ORs 1.26 (1.114–1.415) and 1.40 (1.240–1.572) in the 3rd and 4th quartiles, respectively. However, this relationship was only significant in women, not in men. When stratified analyses were conducted according to menopausal status, there was a significant increase of hematuria risk according to quartiles in postmenopausal but not in premenopausal women. Conclusion We found that vitamin D deficiency is correlated with hematuria in women, particularly after menopause. Further interventional studies are warranted to address whether correcting vitamin D deficiency can lower the risk of hematuria. Background Vitamin D has receptors that are expressed in many nucleated cells and controls the expression of various human genes [1]. Vitamin D deficiency aggravates bone diseases, leading to osteoporosis, and increases the risk of falls and fractures [2,3]. In addition to its relationship with skeletal health, the association of vitamin D deficiency and various other diseases such as hypertension [4], cardiovascular disease [5][6][7], cancer [8][9][10][11], infectious disease, and metabolic disease [12] have also received attention. Vitamin D deficiency is a global health problem related to poor nutrition [2], and the prevalence of vitamin D deficiency is relatively high worldwide. According to data from the National Health and Nutrition Examination Survey of the United States, 10-40% of the population is deficient in vitamin D [13]. The prevalence of vitamin D deficiency is even higher among Asians than in the United States [2]. According to a Korean report, 47.3% of males and 64.5% of females are deficient in vitamin D [14]. Correcting vitamin D deficiency is essential to preventing several related diseases and improving global human health. Hematuria is the presence of red blood cells in the urine. The prevalence of hematuria ranges from 0.2 to 16.1% in the general population [15,16]. In one study, 6.2% of Korean participants who underwent health screening had asymptomatic hematuria [17]. Hematuria is frequently the result of nonglomerular causes, such as an infection or stone in the urinary tract. 
Additionally, hematuria can be a manifestation of glomerular kidney disease or polycystic kidney disease and is known to be a risk factor for progressive kidney dysfunction and end-stage renal disease [18]. Various urinary tract neoplasms originating in the bladder, prostate, ureter, and kidney may manifest as microscopic or gross hematuria [19]. Therefore, hematuria is an important sign of disease and its cause should be evaluated to prevent further disease progression. Despite the clinical importance of vitamin D deficiency and hematuria, no studies have been conducted to investigate their correlation in the general population. The correlation between proteinuria and vitamin D deficiency has been evaluated in various studies [20,21]. However, the association between vitamin D status and hematuria, another important parameter of kidney disease other than proteinuria, has not yet been evaluated. Furthermore, there is accumulating evidence that vitamin D deficiency contributes to pathologic conditions that can present as hematuria, such as urinary stones [22], infection [23] and malignancy [24]. The present study is the first to examine this correlation using data from a nationwide population-based survey, stratified by sex and menopausal status, as these parameters are known to be important in analyzing the effects of vitamin D deficiency [25,26].

Study population This was a nationwide population-based cross-sectional study using data of the Korean National Health and Nutrition Examination Survey (KNHANES), conducted by the Korean Centers for Disease Control and Prevention in South Korea. We used data of both the KNHANES V (2010-2012) and KNHANES VI (2013-2015) surveys conducted in South Korea. Of a total of 41,102 participants, we included 20,295 participants, aged ≥18 years, for whom results of both urinalysis and serum 25-hydroxyvitamin D (25(OH)D) levels were available. After excluding 55 women who were menstruating at the time of examination, a total of 20,240 participants (49.2% of the total population surveyed) were analyzed in the present study.

Study variables Demographic variables were collected during health interviews, including age, sex, menopause status, alcohol consumption, and smoking status. Alcohol consumption was defined as drinking once or more per month. Smoking status was classified as nonsmoker, former smoker, or current smoker. Information was also obtained about underlying comorbidities including hypertension, diabetes, and cardiovascular disease. Weight (kg) and height (cm) were measured with participants wearing a gown and no shoes. Body mass index was calculated as weight (kg) divided by the square of height (m²). Body mass index < 18.5 kg/m², 18.5-22.9 kg/m², 23.0-24.9 kg/m², and ≥ 25.0 kg/m² were defined as underweight, normal weight, overweight, and pre-obese/obese, respectively [27]. Blood pressure was measured with participants at rest. Participants were defined as having hypertension with systolic blood pressure ≥ 140 mmHg, diastolic blood pressure ≥ 90 mmHg, or a history of taking blood pressure lowering agents. Fasting blood samples were collected during the health examination surveys. The samples were refrigerated and transported to the designated central laboratory (NeoDin Medical Institute, Seoul, Korea). Fasting glucose levels were measured using the enzymatic UV (hexokinase) method with a Hitachi 7600 automated analyzer (Hitachi, Tokyo, Japan).
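Before the remaining laboratory definitions, the anthropometric and blood pressure rules above can be made concrete. The sketch below is a minimal illustration in Python; the function names and example values are hypothetical and are not the KNHANES variable names or the authors' code.

```python
def bmi(weight_kg: float, height_cm: float) -> float:
    """Body mass index = weight (kg) divided by the square of height (m)."""
    height_m = height_cm / 100.0
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Categories used in the study: <18.5 underweight, 18.5-22.9 normal,
    23.0-24.9 overweight, >=25.0 pre-obese/obese."""
    if value < 18.5:
        return "underweight"
    if value < 23.0:
        return "normal"
    if value < 25.0:
        return "overweight"
    return "pre-obese/obese"

def has_hypertension(sbp: float, dbp: float, on_bp_medication: bool) -> bool:
    """Hypertension as defined in the text: SBP >= 140 mmHg, DBP >= 90 mmHg,
    or a history of taking blood pressure lowering agents."""
    return sbp >= 140 or dbp >= 90 or on_bp_medication

# Hypothetical participant, for illustration only.
print(bmi_category(bmi(70, 165)))        # ~25.7 kg/m2 -> "pre-obese/obese"
print(has_hypertension(128, 82, False))  # False
```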
Participants with diabetes were defined as those with a fasting glucose level of ≥126 mg/dL or taking diabetes medication or insulin. A fasting glucose level between 100 mg/dL and 125 mg/dL was defined as impaired fasting glucose status. Hypercholesterolemia was defined in participants with a total fasting cholesterol level ≥ 240 mg/dL or taking cholesterol lowering agents. Total cholesterol was measured using an enzymatic method and a Hitachi 7600-210 analyzer (Hitachi). Serum hemoglobin levels were measured using the SLS hemoglobin detection method with an XE-2100D analyzer (Sysmex, Tokyo, Japan), and anemia was defined as a hemoglobin level < 13 g/dL for men and < 12 g/dL for women. Serum creatinine levels were measured by the Jaffe rate-blanked and compensated method using the Hitachi 7600-210 analyzer. The estimated glomerular filtration rate was calculated using the Chronic Kidney Disease Epidemiology Collaboration equation [28]. Serum 25(OH)D levels were measured using radioimmunoassay with a 1470 WIZARD gamma counter (PerkinElmer Finland Oy, Finland) with a 25-hydroxyvitamin D 125I RIA kit (DiaSorin Corp., Stillwater, Minnesota, USA). We defined serum 25(OH)D inadequacy as a serum 25(OH)D level < 30 ng/mL and deficiency as < 20 ng/mL [29]. Random early morning urine samples were collected whenever possible. All urine samples were refrigerated and transported to the central laboratory (NeoDin Medical Institute). The results of dipstick tests were scored from negative to +4. Hematuria, proteinuria, and glycosuria were defined as ≥1+ on a dipstick test.

Statistical analysis IBM SPSS version 20.0 (IBM Corp., Armonk, NY, USA) was used for all analyses. Continuous variables including age, height, body mass index, blood pressure, fasting blood glucose, serum hemoglobin, and estimated glomerular filtration rate showed normal distributions and were presented as mean value and standard deviation. However, serum 25(OH)D levels showed a non-normal distribution and were therefore expressed as median value and interquartile range. A logistic regression analysis was used to calculate odds ratios (ORs) and 95% confidence intervals for the risk of hematuria. Multivariate logistic regression was conducted after adjusting for all covariates, such as comorbidities and laboratory findings. A nonlinear relationship between 25(OH)D and the risk of hematuria was examined using a cubic spline regression model. A P value < 0.05 was considered significant. In this study, subsequent analyses according to sex and menopausal status were conducted to examine differences in hematuria risk. A predicted probability plot of hematuria was drawn according to sex using the cubic spline regression model, and multivariate logistic regression was conducted according to sex and menopausal status in women.

Factors associated with hematuria A univariate logistic regression analysis was conducted to examine the association between the covariates and hematuria (Table 2). Age > 30 years, female sex and especially postmenopausal status, hypertension, hypercholesterolemia, anemia, an estimated glomerular filtration rate of 30-60 mL/min/1.73 m², and proteinuria were associated with the risk of hematuria. Drinking alcohol, former or current smoking, diabetes mellitus, and glycosuria showed a negative relationship with hematuria. These covariates were adjusted for in subsequent multivariate regression analyses.
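The study's regression itself was run in SPSS. Purely as an illustration of the quartile-based multivariate model described above, the sketch below sets up an analogous analysis in Python. The column names are hypothetical placeholders, and the choice of the lowest quartile as the reference level is an assumption, since the text does not state the reference group explicitly.

```python
# A minimal sketch, not the authors' analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def hematuria_or_by_vitd_quartile(df: pd.DataFrame) -> pd.DataFrame:
    """Fit a multivariate logistic regression of dipstick hematuria (0/1) on
    serum 25(OH)D quartiles plus a few of the covariates named in the text,
    returning odds ratios with 95% confidence intervals."""
    data = df.copy()
    # Quartiles of serum 25(OH)D; level 1 is the lowest quartile here (assumption).
    data["vitd_q"] = pd.qcut(data["vitd_25ohd"], 4, labels=[1, 2, 3, 4])

    model = smf.logit(
        "hematuria ~ C(vitd_q) + age + C(sex) + C(hypertension) + C(anemia) + egfr",
        data=data,
    ).fit(disp=False)

    # Exponentiate coefficients to obtain ORs and their confidence intervals.
    out = pd.DataFrame({"OR": np.exp(model.params)})
    ci = np.exp(model.conf_int().to_numpy())
    out["CI_low"], out["CI_high"] = ci[:, 0], ci[:, 1]
    return out
```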
Association between serum vitamin D level and hematuria As shown in Fig. 1, the prevalence of hematuria increased in proportion to lower 25(OH)D levels. A total of 13.8% of participants in the 4th quartile of serum 25(OH)D (≥ 20.8 ng/mL) showed hematuria, whereas the prevalence of hematuria increased from the 3rd to the 1st quartiles, as follows: 14.6% in the 3rd quartile (16.4-20.7 ng/mL), 16.3% in the 2nd quartile (13.0-16.3 ng/mL), and 17.7% in the 1st quartile (< 13.0 ng/mL) (P trend < 0.001). In univariate analysis, the 3rd and 4th quartiles showed a higher risk of hematuria than the 1st quartile: OR 1.20 (1.072-1.336) and OR 1.35 (1.210-1.501) in the 3rd and the 4th quartiles, respectively. When comparing the groups with 25(OH)D levels < 30 ng/mL and ≥ 30 ng/mL, the lower group showed a higher OR of hematuria (1.33 (1.071-1.639)) than the higher group (P = 0.010). When vitamin D deficiency was defined as < 20 ng/mL, the deficient group showed a higher OR of hematuria (1.20 (1.102-1.309)) than the higher group (P < 0.001). These differences were also significant despite adjusting for multiple covariates which were significant in the univariate analysis (Table 3).

Subgroup analysis according to sex and menopausal status Because the risk of several diseases differs according to sex and menopausal status [10,25,30], subsequent analyses were conducted after stratification by these factors. Figure 2 shows the predicted probability plot of hematuria according to sex. The linear relationship seemed to be more dominant in women than in men. When multiple covariates were adjusted, the low 25(OH)D groups (inadequate or deficient) showed higher ORs of hematuria than the high 25(OH)D groups for both sexes. According to menopausal status, no relationship was found among premenopausal women; however, the relationship was significant in postmenopausal women (Table 4).

Discussion Vitamin D deficiency and hematuria are important public health problems with high incidence in the general population, and both may be related to more severe diseases. However, there have been no studies conducted to investigate the relationship between vitamin D deficiency and hematuria. We addressed this question in the present study, using data of the KNHANES nationwide population-based survey.

The risk of hematuria Previous studies have reported correlations between vitamin D deficiency and various diseases in which hematuria is one of the disease signs [22,24,31,32]. Because vitamin D enhances the absorption of calcium from the intestine and stimulates bone absorption to physiologically increase serum calcium levels [2], it is plausible that vitamin D might increase the risk of urinary stones, thereby leading to hematuria. However, the evidence is insufficient owing to the observational nature of the conducted studies [33,34], and there are contradictory reports in which the risk of calcium-based urinary stones is higher with vitamin D deficiency [22,35]. Vitamin D has an important role in the immune system via controlling the expression of many immunologic factors. As a result, an association between vitamin D deficiency and the risk of urinary tract infection has been reported [23,36,37]. One study showed that premenopausal women had a 4-fold increased risk of recurrent urinary tract infection with serum 25(OH)D levels < 15 ng/mL [23], and the correlation between vitamin D deficiency and urinary tract infection has been documented in children and kidney transplant recipients [36,37].
Because hematuria is one sign of urinary tract infection, the present results may be attributable to the above mechanism. Vitamin D deficiency is related to the progression of kidney disease via both direct and indirect effects. End-stage renal disease and proteinuria are more prevalent in individuals who are deficient in vitamin D [20,21,38]. In a cross-sectional analysis of patients with polycystic kidney disease, kidney volumes were larger in individuals with vitamin D deficiency [31]. Animal studies have shown that low vitamin D levels are correlated with podocyte loss and the development of glomerulosclerosis [39]. An acute kidney injury model demonstrated that vitamin D deficiency induces tubulointerstitial damage and fibrosis and diminishes renal vascularity, which finally leads to chronic change [40]. Vitamin D deficiency is additionally linked to activation of the renin-angiotensin system, promoting endothelial damage and the progression of diabetes [41]. Vitamin D deficiency is related to a high incidence and aggressiveness of various malignancies [8,10,42] that have been documented in the urological system, such as renal cell carcinoma [24] and bladder cancer [32]. Various possible antineoplastic mechanisms of active vitamin D have been suggested. Active vitamin D can regulate the transcription of anticancer target genes that induce apoptosis and differentiation and inhibit proliferation, inflammation, angiogenesis, invasion, and metastasis of cancer cells [43]. Vitamin D regulates signaling pathways such as the Wnt/β-catenin, estrogen receptor, and androgen receptor pathways in the colon, breast, and prostate, respectively, which subsequently affect the growth of cancer in each tissue [43]. Additionally, microRNA can mediate the antineoplastic functions of vitamin D [43]. Collectively, the above mechanisms support the present study results. The subsequent analysis showed that the correlation between hematuria and vitamin D deficiency was predominant in postmenopausal women but not in premenopausal women. The different effects of vitamin D deficiency according to menopausal status have been previously reported [10,26,30], but the mechanisms have not been clearly determined. Vitamin D is one of the steroid hormones and is closely related to the sex hormones.

This study has several limitations. We used one-time spot urine samples and defined the presence of hematuria as ≥1+ on a dipstick test. Owing to the possibility of a false positive or false negative on a single test, this approach might have resulted in incorrectly grouped participants. Furthermore, a positive dipstick test does not always mean hematuria but may reflect the presence of heme pigment, which can be positive in conditions of red blood cell lysis or myositis. Accordingly, using the dipstick test alone may result in false positivity. Another major limitation is that we could not obtain information on the cause of hematuria or on other laboratory results (e.g., calcium, phosphorus, parathyroid hormone and 1,25(OH) vitamin D levels) which may interact with the observed relationship. Because the study design was cross-sectional, there is a lack of information about whether the effects of vitamin D deficiency on hematuria eventually lead to the occurrence of disease and alter patient prognosis. Our study is the first to address the correlation between vitamin D deficiency and hematuria risk using a large nationwide cohort.
Despite adjusting for several covariates that might affect the presence of hematuria, participants who had inadequate or deficient vitamin D levels had a higher risk of hematuria than participants with normal levels. Further physiological and epidemiological studies are required to determine the underlying mechanisms and whether supplemental vitamin D would be beneficial in the various diseases related to hematuria.

Conclusions Vitamin D deficiency and hematuria are both common health problems in the general population. An association between vitamin D deficiency and hematuria was observed in this study, particularly in postmenopausal women. Patients with vitamin D deficiency should be aware of the risk of hematuria and related diseases.
Survival and Malignant Transformation of Pineal Parenchymal Tumors: A 30-Year Retrospective Analysis in a Single-Institution

Background This study aims to elucidate the clinical features, therapeutic strategies, and prognosis of pineal parenchymal tumors (PPT) by analyzing a 30-year dataset of a single institution. Methods We reviewed data from 43 patients diagnosed with PPT at Seoul National University Hospital between 1990 and 2020. We performed survival analyses and assessed prognostic factors. Results The cohort included 10 patients with pineocytoma (PC), 13 with pineal parenchymal tumor of intermediate differentiation (PPTID), and 20 with pineoblastoma (PB). Most patients presented with hydrocephalus at diagnosis. Most patients underwent an endoscopic third ventriculostomy and biopsy, with some undergoing additional resection after diagnosis confirmation. Radiotherapy was administered with a high prevalence of gamma knife radiosurgery for PC and PPTID, and craniospinal irradiation for PB. Chemotherapy was essential in the treatment of grade 3 PPTID and PB. The 5-year progression-free survival rates for PC, grade 2 PPTID, grade 3 PPTID, and PB were 100%, 83.3%, 0%, and 40%, respectively, and the 5-year overall survival rates were 100%, 100%, 40%, and 55%, respectively. High-grade tumor histology was associated with lower survival rates. Significant prognostic factors varied among tumor types, with World Health Organization (WHO) grade and leptomeningeal seeding (LMS) for PPTID, and the extent of resection and LMS for PB. Three patients experienced malignant transformations. Conclusion This study underscores the prognostic significance of WHO grades in PPT. It is necessary to provide specific treatment according to tumor grade. Grade 3 PPTID showed a poor prognosis. Potential LMS and malignant transformations necessitate aggressive multimodal treatment and close-interval screening.

INTRODUCTION Various histological neoplasms originate in the pineal region. Pineal parenchymal tumors (PPT) are a rare group of tumors originating from pineal parenchymal cells. They account for less than 0.5% of central nervous system neoplasms and 20%-30% of tumors in the pineal region [1,2]. PPT are classified into three types: pineocytoma, pineal parenchymal tumor of intermediate differentiation, and pineoblastoma. Pineocytoma (PC) is a low-grade tumor that exhibits benign cytologic features [3]. Pineoblastoma (PB) is a malignant embryonal tumor that is prevalent in children and young adolescents [4]. Pineal parenchymal tumor of intermediate differentiation (PPTID) was first defined by Schild et al. [5] and classified by the World Health Organization (WHO) in 2000 [6]. It is characterized by intermediate morphological features and clinical behavior between PC and PB [7]. PC and PPTID are more common in adults than in children.
PPT can present with various symptoms associated with increased intracranial pressure.Common symptoms and signs are caused by hydrocephalus, brainstem compression, and cerebellar dysfunction [2,8].The diagnosis and treatment of PPT pose challenges because of their location within the brain and proximity to deep cerebral veins, which can make surgical removal difficult [9].The difficulty in choosing optimal treatment and predicting prognosis is due to the lack of clear-cut histological criteria and rarity of these tumors.We investigated the clinical data from patients of all ages with PPTs from a single institution to identify differences in clinical features, biological behavior, therapeutic strategies, and prognosis based on the histopathological grade.This study aimed to provide a comprehensive perspective on the spectrum of these rare tumors. Patient selection and clinical data This study was approved by the Institutional Review Board (IRB No. 2102-122-1198) of Seoul National University Hospital (SNUH).The requirement for informed consent was waived due to the retrospective nature of the study.We reviewed medical records of 48 patients diagnosed with PPT at SNUH between 1990 and 2020.We excluded 3 patients who underwent initial surgery for diagnosis in other hospitals.The 2021 WHO classification shows five types of PPT [10].We excluded papillary tumors of the pineal region (PTPR) and desmoplastic myxoid tumors of the pineal region (DMTPR) because they were extremely rare at our institution.Two patients had PTPR and no patient had DMTPR during the study period.Therefore, we included 43 patients in this study. The following information was collected: age at diagnosis, sex, symptoms at presentation, tumor size, extent of resection, surgical approach, histopathological report, presence of hydrocephalus, leptomeningeal seeding (LMS), adjuvant therapy, disease progression/recurrence, and survival.The extent of resection was defined as follows: 1) gross total resection (GTR) as no residual lesion on postoperative MRI; 2) near total resection (NTR) as the removal of more than 90% of the tumor; 3) subtotal resection (STR) as the removal of 50%-90% of the tumor; 4) partial resection (PR) as the removal of less than 50% of the tumor; and 5) biopsy. Histopathological diagnosis The diagnoses were made according to distinct histopathological features of each grade of PPT. PC is a well-differentiated pineal parenchymal neoplasm composed of 1) uniform cells forming large pineocytomatous rosettes, and/or 2) pleomorphic cells showing gangliocytic differentiation.Mitotic activity of PC is rare or absent, and Ki-67 proliferation index (PI) is less than 1% [11].PB has histopathological features similar to those of embryonal tumors.It is a highly cellular, diffuse, and dense tumor composed of small blue cells.The shape of the hyperchromatic nuclei is irregular, and the nuclear-to-cytoplasmic ratio is high.Pineocytomatous rosettes are absent; however, Homer-Wright and Flexner-Wintersteiner rosettes may be seen.Necrosis is common and mitotic activity is high.The Ki-67 PI ranges from 20%-50% [2]. 
PPTID consists of diffuse sheets or large lobules of monomorphic round cells that appear more differentiated than those observed in PB.Mitotic activity is low to moderate, and Ki-67 PI ranges from 3%-20%.No criteria to satisfy the diagnosis of PB should be present.Although most PPTIDs fall under WHO grade 2, more aggressive cases may occur under WHO grade 3.There are no definite grading criteria to distinguish grades 2 and 3 [2].Jouvet et al. [3] proposed grading criteria, grade 2 for PPT with less than 6 mitoses per 10 high-power fields (HPF) and positive immunostaining for neurofilament protein (NFP), and grade 3 for PPT with 6 or more mitoses or less than 6 mitoses with negative NFP.Our institution also followed to the criteria suggested by the study of Jouvet et al. [3]. Statistical analysis Statistical analyses were performed using the R software version 4.2.1 (R Core Team, R Foundation for Statistical Computing, Vienna, Austria).Quantitative variables were presented as using median and interquartile range, whereas categorical variables were presented as frequencies and percentages.Due to the small sample size, we used the Kruskal-Wallis H test for quantitative variables and Fisher's exact test for categorical variables to examine the differences in clinical variables between the three histological subgroups.Overall survival (OS) was defined as the time from the date of diagnosis till death.Progression-free survival (PFS) was defined as the time from the date of diagnosis to the date of the first recurrence.We used the Kaplan-Meier method for survival analysis.The log-rank test was used in the univariate analysis to determine the effects of prognostic factors on OS and PFS.Statistical significance was set at p<0.05. Baseline characteristics We identified 10 patients with PC, 13 with PPTID, and 20 with PB in our database ( The median age of patients with PB was 6 years.Most of the patients with PB were children and adolescents (75%).There was a female preponderance in the PC and PPTID groups.In contrast, approximately 70% of patients with PB were males.Most of the patients had hydrocephalus.Patients presented with various symptoms including headache, nausea/vomiting, diplopia, and gait ataxia, which are mainly related to hydrocephalus.The median diameter of the masses increased with increasing tumor grade.Mitotic count and Ki-67 PI also showed clear differences according to tumor histology.One patient with PPTID and 8 patients with PB had LMS at the time of diagnosis.One patient with PC developed LMS after malignant transformation of the disease to PPTID.During the follow-up, 53.8% of patients with PPTID and 65.0% of patients with PB experienced LMS. Treatment Table 2 describes the surgical and adjuvant treatment of PPT. The most preferred surgical procedure was the occipital transtentorial approach, followed by the interhemispheric transcallosal approach.Half of patients (50%) with PC and 76.9% of patients with PPTID underwent surgical biopsy simultaneously with an endoscopic third ventriculostomy (ETV).Four patients with PB (three of whom were adults) had endoscopic biopsy only and received adjuvant treatment.Five patients with PC, 6 with PPTID, and 16 with PB underwent surgical resection.Two patients with PC, 3 with PPTID, and 9 with PB initially underwent biopsy to confirm the diagnosis, and subsequently underwent craniotomy for the removal of the residual mass.GTR and NTR were achieved in 4 patients with PC, 3 with PPTID, and 4 with PB. 
Six patients with PC, 11 with PPTID, and all patients with PB received radiotherapy (RT).Gamma knife radiosurgery (GKRS) was commonly used for PC (40%) and PPTID (46.2%).Craniospinal irradiation (CSI) was predominantly used (90%) in patients with PB.Among 18 patients who received CSI, 15 received CSI as an upfront adjuvant treatment.Two infants underwent delayed CSI after 36 months of age.A 74-year man received GKRS without biopsy at another hospital with an im- aging diagnosis of PC.The size of the primary tumor initially decreased.After 2 years, he visited our institution with recurrent tumor accompanied by LMS, which was surgically diagnosed with PB.Subsequently, the patient underwent conventional chemotherapy (CTx) and CSI.Two patients with PB did not undergo CSI.CSI was delayed for a 1-year boy; however, his disease progressed rapidly.He had only palliative spinal RT and died.A 41-year woman was initially diagnosed with grade 3 PPTID.The tumor almost shrank completely after local RT.Ten years later, she presented with LMS of the cervical spine, diagnosed as PB.The initial diagnosis was reviewed by our pathologists and revised to PB.She has not begun adjuvant RT or CTx for PB. Only one patient with PC received CTx.The patient experienced a malignant transformation of the disease to PPTID.She underwent CTx as adjuvant treatment after diagnosis with PPTID.Five patients with PPTID received CTx.A patient with grade 3 PPTID underwent conventional CTx, and intrathecal chemotherapy (IT) after LMS.Another patient with grade 3 PPTID underwent induction CTx and high dose chemotherapy (HDCT) with autologous blood stem cell transplantation (aPBSCT) after malignant transformation of the disease to PB.Of 18 patients with PB received CTx, 8 pediatric patients underwent HDCT with aPBSCT and one adult patient underwent IT.All patients who underwent HDCT or IT received upfront conventional CTx.Two patients with PB did not undergo CTx.A 2-year girl could not receive CTx as first-line adjuvant therapy due to shunt-related problem and her poor general condition.CSI was attempted, but it was discontinued due to leukopenia.Her guardian voluntarily rejected all treatments. Of the 43 patients with PPT, 90.7% (39/43) had obstructive hydrocephalus.Among the 39 patients with hydrocephalus, 89.7% (35/39) underwent cerebrospinal fluid (CSF) diversion surgery, with 74.3% (26/35) of them receiving ETV with simultaneous tumor biopsy first.Nine patients who initially underwent stereotactic biopsy or craniotomy underwent shunt procedures.The effect of ETV was maintained in all patients with PC (5/5) and 80% of patients with PPTID (8/10).When there was LMS at diagnosis or when craniotomy was performed consecutively after endoscopic biopsy, the effect of ETV did not persist, and shunt surgery was inevitably required.Compared to the other groups, the transition rate from ETV to shunt placement was higher in PB group (54.5%, 6/11).Among all 8 PB patients with LMS at diagnosis, 4 underwent shunt placement initially, 4 received ETV first, which were later replaced with shunt. Survival outcome and prognostic predictors The median follow-up periods for PC, PPTID, and PB were 207, 90, and 75 months, respectively.When PPTID was analyzed at once without division by WHO grade, the prognosis was favorable in the order of PC, PPTID, and PB.The log-rank test indicated that a lower survival rate was associated with high-grade tumor histology (PFS, p=0.085;OS, p=0.035) (Fig. 
1A and B). The 5-year PFS rates for PC, PPTID, and PB were 100%, 38.5%, and 40%, respectively. The 5-year OS rates for PC, PPTID, and PB were 100%, 61.5%, and 55%, respectively. Nonetheless, upon stratification of PPTID according to WHO grade, the prognostic outlook for grades 2 and 3 showed a clear disparity. Furthermore, grade 3 PPTID exhibited a worse prognosis than PB (PFS, p=0.0064; OS, p=0.0059) (Fig. 1C and D). The 5-year PFS rates for grade 2 and grade 3 PPTID were 83.3% and 0%, respectively. The 5-year OS rates for grade 2 and grade 3 PPTID were 100% and 40%, respectively. Since all patients with PC survived (except for one whose disease transformed to PPTID), we analyzed only the factors associated with recurrence in PC. Although the trend was not statistically significant, tumor diameter ≥30 mm (p=0.069) and a lesser extent of resection (NTR/GTR vs. biopsy/PR/STR, p=0.073) were associated with worse PFS in PC (Fig. 2). Four patients who had GTR did not experience recurrence; their tumor diameters ranged from 10 mm to 30 mm. A female patient with a 36-mm tumor underwent STR without adjuvant RT. Six years later, she received GKRS for the enlarged residual tumor. Four patients underwent biopsy with adjuvant RT, including focal RT or GKRS. The tumor sizes of the two patients who experienced recurrence were 26 mm and 34 mm, and those of the other two patients without recurrence were 20 mm and 26 mm. A male infant who received biopsy without adjuvant treatment survived; the tumor measured 10 mm at diagnosis and stabilized without increasing in size. PFS (p=0.0017) and OS (p=0.016) were worse in patients with high WHO-grade PPTID (Fig. 3A and B). Two of 8 patients (25%) with grade 2 tumors experienced recurrence. They initially did not have LMS and received endoscopic biopsy with adjuvant GKRS. Two patients (25%) with grade 2 tumors died. Four of 5 patients (80%) with grade 3 tumors experienced recurrence as LMS. All five patients with grade 3 tumors received upfront RT (local RT for two, GKRS for two, and CSI for one) regardless of the extent of resection (including two GTR). A patient who had LMS at presentation died within 13 months of follow-up due to rapid deterioration of the general condition, despite adjuvant CSI. Four patients (80%) with grade 3 tumors died. There were no prognostic factors for PB recurrence among any of the analyzable factors, such as age, sex, extent of resection, or LMS at diagnosis. A lesser extent of resection (biopsy/PR vs. STR/NTR/GTR, p=0.033) was associated with worse OS in patients with PB (Fig. 4A). All five patients who had biopsy or PR died, while 6 of 15 patients (40%) who underwent more than PR died of their disease. A patient with PB died of secondary acute myeloid leukemia, which developed 5 years after the diagnosis of PB, although the primary lesion was cured after GTR without recurrence. LMS at diagnosis (p=0.0027) and LMS during the entire period of disease (p=0.0047) were poor prognostic factors for OS in patients with PB (Fig. 4B and C). Age <3 years at diagnosis showed a trend toward worse OS in all patients with PB (p=0.052) (Fig. 4D) and in the pediatric (0-18 years) subgroup (p=0.12). Among the patients receiving CTx, there was a trend toward better OS with HDCT (p=0.17), although it did not reach statistical significance.
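To make the survival comparisons above concrete (Kaplan-Meier estimation per histological subgroup with a log-rank test, as described in the statistical analysis section), a minimal sketch is given below. The original analysis was performed in R with GraphPad-style outputs; this illustration instead uses Python's lifelines package, and the toy data frame and column names are assumptions rather than the study data.

```python
# Minimal sketch: Kaplan-Meier estimates per subgroup and a log-rank comparison.
# The data frame below is hypothetical toy data, not the study cohort.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months":   [24, 60, 90, 12, 36, 48, 75, 7, 18, 55],   # follow-up time
    "event":    [0,  0,  1,  1,  0,  1,  0,  1, 1,  0],    # 1 = progression/death observed
    "subgroup": ["PC", "PC", "PPTID", "PPTID", "PPTID",
                 "PB", "PB", "PB", "PB", "PC"],
})

# Kaplan-Meier estimate for each histological subgroup
kmf = KaplanMeierFitter()
for name, grp in df.groupby("subgroup"):
    kmf.fit(grp["months"], event_observed=grp["event"], label=name)
    print(name, "median survival (months):", kmf.median_survival_time_)

# Log-rank test between two subgroups, e.g. PPTID vs. PB
a = df[df["subgroup"] == "PPTID"]
b = df[df["subgroup"] == "PB"]
result = logrank_test(a["months"], b["months"], a["event"], b["event"])
print("log-rank p =", result.p_value)
```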
Patient 1 was a 41-year-old woman diagnosed with PC by endoscopic biopsy.She received 54 Gy of local RT 2 years after diagnosis.The tumor resolved on follow-up MRI.Three years later, LMS lesions at the lumbar spinal cord were confirmed.Grade 3 PPTID was diagnosed using open biopsy of the lumbar spine.She underwent whole-spinal RT and CTx with procarbazine, lomustine (CCNU), and vincristine (PCV).However, LMS aggravated, and she died 2 years after LMS identification. Patient 2 was a 47-year-old woman diagnosed with PPTID via endoscopic biopsy.The mitotic count was 0 per 10 HPF, and the Ki-67 PI was 3%.Immunostaining of NFP was negative.Therefore, the tumor was classified as WHO grade 3 despite its low mitotic count and Ki-67 PI.This patient was reported by our institution in 2009 as a rare case of malignant transformation [12].She underwent adjuvant GKRS (volume After the treatment, the residual mass increased.Our oncologists changed CTx regimen to "8 in 1" including vincristine, CCNU, procarbazine, hydroxyurea, cisplatin, Ara-C, cyclophosphamide, and solumedrol.After 10 cycles of this salvage CTx, he underwent HDCT and aPBSCT including busulfan, melphalan, and thiopeta.Four years after HDCT, he is alive without recurrence.Brain and spinal MRI have been closely followed since the diagnosis of PB.He is the only survivor with grade 3 PPTID, even his disease transformed to PB. Diagnosis assisted by molecular classification Since 2018, our institution has used a next-generation sequencing (NGS) panel to diagnose brain tumors.Two patients were assisted by a novel molecular classification to revise or confirm their diagnosis.Aforementioned female patient ini- tially diagnosed with grade 3 PPTID underwent surgery for cervical LMS.The diagnosis of LMS was PB because DICER1 mutation (c.1045_1054delGACACTTTCC) and partial deletion of chromosome 14q were confirmed in the NGS panel.The original diagnosis was reviewed and corrected to PB. Considering DICER1 mutation, loss of chromosome 14, and older age, this patient may belong to the PB-miRNA2 group according to recent consensus study [13].Another male patient with grade 2 PPTID underwent surgical resection of lumbar LMS.KBTBD4 insertion (c.882_887dupCCCACG) was detected in the tumor NGS panel.The LMS lesion was also a grade 2 tumor.PPTID is characterized by recurrent KBTBD4 small in-frame insertions and the absence of DICER1 mutation or DROSHA homozygous deletion, which are typical molecular characteristics of PB [14]. DISCUSSION The present study indicates that the prognosis of PPT obviously varies depending on the histopathological grade.In particular, grade 3 PPTID revealed a distinct clinical course compared to grade 2 tumors.Cases of rare malignant transformations were also observed. 
PPTs commonly exhibit characteristic "exploded" calcification from the pineal gland toward the periphery and display low to moderate signal intensity on T1-weighted images and intermediate to high signal intensity on T2-weighted images, accompanied by notable contrast enhancement on MRI.PC appears to be an enlarged pineal gland and is well-circumscribed.These lesions have rare hemorrhage, less cellularity, and greater diffusivity than PB.Considering malignant nature of PB, indicators such as internal hemorrhage, necrosis, infiltration into adjacent structures, the presence of LMS, and diffusion restriction could be suggestive of PB.There are no distinct imaging features that can differentiated PPTID from PC or PB [15].In this study, given the challenges of quantifying these imaging findings, only the tumor size and presence of LMS were addressed.Although PC and PPTID mainly affect adults and PB is more common in children, age and imaging findings are insufficient for a reliable diagnosis prediction of PPT. PC is known to be controlled with appropriate surgical resection and adjuvant RT, with a long-life expectancy.Literature reported nearly 100% of 5-year PFS rate and above 85% of 5-year OS rate [16][17][18].According to Clark et al. [19,20], the group that underwent resection demonstrated a better PFS than the group that received adjuvant therapy after biopsy.The GTR group benefited in both PFS and OS compared to the group that underwent STR with adjuvant RT.There was no significant difference in the effect of adding RT to STR on PFS and OS.We could not stratify patients with PC according to treatment modality because of the limited number of subjects.In the present study, the 5-year PFS rate was 100%.The GTR group showed better PFS than the group who had less than GTR with or without adjuvant RT.PFS ranged from 66 to 139 months in 3 patients who had tumor recurrence.If complete resection of PC is not achievable owing to surgical challenges, adjuvant RT is necessary, and a long-term follow-up of more than 10 years is required considering recurrence.Our analysis revealed that tumor diameter (based on 30 mm) was another predictive factor for recurrence.Tumor size may influence the choice of treatment and outcomes.The smaller the tumor, the easier the surgical resection and the better the result of adjuvant RT. PPTID has an intermediate level of neoplastic behavior and treatment response compared to PC and PB [21].Although PPTID is assigned to WHO grades 2 and 3, the definite histological criteria for WHO grading remain undefined.According to Jouvet et al. [3], grade 3 tumors are associated with aggressive behavior and poor outcomes compared with grade 2 tumors.In the present study, there was a marked difference in PFS and OS between grade 2 and 3, as previously reported [1,22,23].Patients with grade 3 showed worse prognosis than those with grade 4 PB.Our results suggest that certain subgroup of PPTID exhibit aggressive behaviors.The inclusion of 2 patients with malignant transformations in the grade 3 PPTID group in our analysis might make the prognosis of this group look worse.Several studies questioned the criteria of Jouvet et al. 
[3], and they set the grading with their own criteria using mitotic count and Ki-67 PI [7,24,25].Various prognostic factors including the extent of resection, age, and sex have been suggested in previous studies [1,16,23,26].We could not confirm statistical differences in the outcomes according to factors other than tumor grade and LMS.There is no universally accepted treatment protocol for PPTID.STR is typically followed by adjuvant RT [24,27,28].Maximal resection or STR with adjuvant local RT may be suitable for grade 2 PPTID.Various platinum-based CTx regimens including PCV, have been used for grade 3 PPTID with LMS; however, specific indications and protocols are yet to be standardized [22,29].Yi et al. [30] reported on a case of grade 3 PPTID in which remission was achieved through PCV CTx following partial tumor removal and local RT.In this study, local RT was the primary adjuvant treatment for grade 3 PPTID.CSI or CTx was performed in some patients after LMS was identified; however, poor outcomes could not be avoided.Preemptive treatment, including CSI or CTx for LMS, should be considered for grade 3 PPTID. PB is a malignant embryonal tumor with poor prognosis despite the implementation of radical surgical resection and multimodal adjuvant therapy.Age is an important factor af-fecting PB outcomes.A recently proposed molecular classification of PB supports differences in the biological behavior of this tumor with age [13,31].Hansford et al. [32] reported that the 5-year OS rate was 67.3% in children aged >3 years, and the 5-year OS rate was 16.2% in those aged <3 years.Despite intensive treatment, the survival outcomes of younger patients with PB were poor [33][34][35].Our data showed a trend similar to that reported in the literature, with a 5-year OS rate of 60% in children aged >3 years and 20% in children aged <3 years.LMS at diagnosis was a poor prognostic factor for OS regardless of age and treatment modality in the present study, as in previous studies [35][36][37].Several studies have reported that GTR is correlated with a better prognosis [4,36,38].However, there were studies that the extent of resection was not associated with survival outcome [35,39].Our data revealed that less than STR was associated with worse OS.Although controversial, it remains an important starting point for treatment for achieving maximal safe resection.Most patients with PB underwent adjuvant RT and CTx.Therefore, it was difficult to determine whether adjuvant treatment administered or not affected the prognosis.A differentiated treatment strategy is selected for children, especially those younger than 3 years, because of the neurotoxicity of CSI and aggressive features of the tumor at this age [35].Despite the rare occurrence of PB, formulated treatment protocols are used based on the results of multiple clinical trials conducted on pediatric malignant brain tumors, such as medulloblastoma [32,34,35].Although there are specific differences in the protocols between countries and institutions, the common fundamental concept is to delay CSI in infants younger than 3 years of age and administer intensive chemotherapy during this period.Although there have been changes over time, the recent protocol applied to treat patients with PB younger than 3 years in our institution consists of maximal surgical resection, induction CTx with IT, HDCT/aPBSCT, and CSI at 3 years of age if LMS was identified at diagnosis [40].In the protocol for patients with PB aged >3 years, CCRT with a reduced dose of CSI and 
boost are administered after surgery, and induction CTx considering HDCT/aPBSCT is performed.Gururangan et al. [41] reported that HDCT in addition to RT is an effective treatment for patients with newly diagnosed PB.A trend that HDCT/ aPBSCT being associated with better OS was observed in the present study.However, transplantation-related complications are devastating; therefore, their occurrence must be carefully monitored and the adverse events have to be managed appropriately.Adult patients with PB have considerably less aggressive clinical course than in the pediatric patients [42].In the present study, the 5-year OS rate in adult patients with PB was 80%, although the final survival rate was 40%.Adult patients in our data were managed differently compared to pediatric patients.Surgery followed by CSI and conventional CTx such as PCV regimen were the main treatment modalities, and HDCT/aPBSCT was not considered generally. Pineal tumors frequently exhibit obstructive hydrocephalus early in the course of the disease because of the proximity of the mass to the cerebral aqueduct [43].ETV is preferred to ventricular shunt because it provides the opportunity to conduct a biopsy of a bulging tumor in the posterior portion of the third ventricle, in addition to relieving hydrocephalus.Furthermore, a ventricular shunt carries the risk of tumor dissemination [16].Approximately 15% of patients who underwent ETV may require ventricular shunt during follow-up.There were possible causes of ETV failure.The stoma can be occluded by the intraventricular tumor debris.Craniotomy following endoscopic biopsy may transform non-communicating hydrocephalus into absorptive type because of the release of proteins and blood into the CSF.LMS is another potential cause of ETV failure [44].The relatively high rate of transition to shunt placement in patients with PB in the present study may be attributed to these reasons. Endoscopic biopsy obtains a limited sample of tumor.Small samples may lead to misdiagnosis because they are not representative of the entire tumor, particularly in cases with mixed histology [44,45].In the present study, an 11-year-old boy was diagnosed with PC using endoscopic biopsy; however, the diagnosis was changed to PB after consecutive craniotomy.Our pathologists reviewed the specimen of the previous biopsy, and they concluded that the diagnosis of PC was appropriate for the specimen.Perhaps the tumor had some admixture of mainly PB and some tissues with characteristics of PC. 
Malignant transformation of PPT has been reported in several studies [12,29,46,47]. The progression-free period ranged from 3 to 10 years. Most of the reported cases transformed from PC to PPTID; we report two cases of transformation from PPTID to PB. Whether such a diagnosis represents a true malignant transformation or a mixed pathology is debatable when the initial diagnosis was obtained by biopsy rather than resection. In the case of patient 1, who experienced malignant transformation in the present study, no detailed information other than the diagnosis was found in the pathological report when PC was diagnosed by endoscopic biopsy. In the case report of patient 2, specific justifications for establishing PPTID as the diagnosis at the first biopsy were provided [12]. When patient 3 was initially diagnosed and when the disease relapsed, each diagnosis was confirmed by open craniotomy. PPT can present as a mixture or a continuous spectrum of low-grade and high-grade tumor. Over the past few years, the biology and inter-tumor heterogeneity underlying PPT have been clarified through multi-omic research [13,14,31]. Recent molecular classifications of PPTID and PB provide accurate diagnosis through biopsy, help confirm the diagnosis of recurrent lesions, and enable comprehensive prognostic prediction. The present study had some limitations. It was constrained by a small sample size and the inherent biases of a retrospective design. Recently, several studies have pooled multicenter data or used nationwide registries such as the Surveillance, Epidemiology and End Results (SEER) program or the National Cancer Database (NCDB) [4,17,23,26,32,35,39,42,48-50]. Despite having less statistical power than larger-scale research, our study is meaningful for evaluating our past clinical practice, comparing it with other studies to identify the strengths and weaknesses of our institution, and finding areas for future improvement in PPT patient care. A prospective multicenter trial is needed in the future. Because of the rarity of PPTs, we pooled data from the adult and pediatric departments, whose treatment strategies were not uniform; establishing a standard treatment protocol is essential. Because the tumor gene panel has been in clinical use only recently, few cases reflected the novel molecular classifications at diagnosis. If the molecular classification of PB and PPTID is employed, more precise diagnosis and prognostication will be feasible. The four WHO grades had different impacts on prognosis. In particular, grade 3 PPTID shows aggressive behavior and a dismal prognosis. Maximal surgical resection is critical; if it is not achievable, appropriate adjuvant treatment should be administered in a timely manner. Performing imaging evaluation, especially spinal surveillance, at close intervals and preemptively applying adjuvant treatment in anticipation of LMS are important in the management of grade 3 PPTID. Considering the possibility of malignant transformation, long-term attentive follow-up is required. Table 1. Baseline characteristics of patients diagnosed with pineal parenchymal tumors. All descriptions and analyses were based on the initial diagnosis. The median age of patients with PC was 36 years; children and adolescents accounted for 30% of PC cases. The median age of patients with PPTID was 45 years; none of the patients with PPTID were younger than 19 years. Table 2. Surgical and adjuvant treatment.
Developmental loss of MeCP2 from VIP interneurons impairs cortical function and behavior Rett Syndrome is a devastating neurodevelopmental disorder resulting from mutations in the gene MECP2. Mutations of Mecp2 that are restricted to GABAergic cell types largely replicate the behavioral phenotypes associated with mouse models of Rett Syndrome, suggesting a pathophysiological role for inhibitory interneurons. Recent work has suggested that vasoactive intestinal peptide-expressing (VIP) interneurons may play a critical role in the proper development and function of cortical circuits, making them a potential key point of vulnerability in neurodevelopmental disorders. However, little is known about the role of VIP interneurons in Rett Syndrome. Here we find that loss of MeCP2 specifically from VIP interneurons replicates key neural and behavioral phenotypes observed following global Mecp2 loss of function. mature function. However, nothing is known about the contribution of VIP interneurons to neurodevelopmental dysregulation in Rett Syndrome. Using a mouse model, we generated conditional mutations of Mecp2 in VIP interneurons and compared these (i) with a conditional pan-interneuron mutation using the Dlx5/6 promoter to drive embryonic deletion in three major interneuron classes (VIP, parvalbumin-expressing interneurons [PV], and somatostatin-expressing interneurons [SST]) and (ii) with two conditional mutations in discrete interneuron populations (PV, SST). To identify the distinct contributions of each interneuron class, we assayed mortality, cortical activity, locomotor and anxiety phenotypes, and social behavior. Loss of MeCP2 selectively from VIP interneurons replicated key physiological and behavioral phenotypes observed in the pan-interneuron Dlx5/6 mutants, including altered firing rates, disruption of high-frequency cortical local field potential (LFP) patterns, and loss of state-dependent modulation of cortical activity. VIP interneuron-specific mutants further phenocopied impairments in marble burying and social behavior observed in the Dlx5/6 mutants. Overall, our findings suggest an unanticipated role for VIP interneuron dysfunction in the Mecp2 loss-of-function model of Rett Syndrome. MeCP2 expression in PV, SST, and VIP interneurons To confirm that MeCP2 is expressed in three major populations of GABAergic interneurons, we costained sections of cortex from adult mice with antibodies for interneuron markers and MeCP2 (Figure 1, Figure 1-figure supplement 1). As reported previously (Ito-Ishida et al., 2015), nearly all PV and SST interneurons expressed MeCP2. In addition,~80% of VIP interneurons expressed MeCP2, suggesting a previously unappreciated potential role for this signaling pathway in VIP interneuron development and function. Seizure incidence following Mecp2 mutation Previous work identified a characteristic seizure phenotype resulting from Mecp2 deletion in the brain (Chao et al., 2010) and found that loss of MeCP2 from SST-expressing cells may confer a lateonset tendency towards seizure (Ito-Ishida et al., 2015). We therefore evaluated the incidence of seizure in each of the three interneuron-specific Mecp2 deletion lines from weaning through to late adulthood. We compared the impact of MeCP2 depletion from VIP, PV, or SST interneuron populations with simultaneous depletion from all three interneuron classes using the Dlx5/6 Cre line. We further compared the interneuron-specific mutation mice with Mecp2 f/y littermate controls. 
To identify the relative impact of Mecp2 loss of function in GABAergic cells compared to loss of function of Mecp2 in all cells, we also compared seizure incidence in the interneuron-specific deletion mice with that in mice carrying a complete knockout of the Mecp2 gene (Mecp2 -/y ). MeCP2 depletion from specific interneuron populations had markedly different effects on seizure incidence (Figure 1-figure supplement 2A-B). We found that 100% of Mecp2 -/y (n = 10) and Dlx5/6 mutant (n = 13) mice exhibited at least one seizure, compared to only 52.9% of SST mutants (n = 36), 35.0% of PV mutants (n = 22), and 37.5% of VIP mutants (n = 8). In comparison, the seizure rates in Mecp2 f/y controls and in wild-types were 17.1% (n = 38) and 0% (n = 20), respectively, suggesting a small contribution of the floxed allele to the seizure phenotype (Ito-Ishida et al., 2015). The mean age at initial seizure was significantly earlier in Mecp2 -/y mutants, but not in the Dlx5/6, PV, SST, or VIP mutants, compared to Mecp2 f/y controls (Figure 1-figure supplement 2B). [Figure 1 legend: cumulative distribution plots of survival for controls (CON; black; n = 38), wild-types (WT; magenta; n = 10), and Mecp2 -/y (KO; gray; n = 10), Dlx5/6 (DLX; red; n = 13), VIP (orange; n = 8), PV (green; n = 22), and SST (cyan; n = 36) mutants; *, p<0.05.] Altered patterns of cortical and hippocampal activity following Mecp2 mutation To examine the cellular and local network consequences of MeCP2 loss, we performed electrophysiological recordings in the cortex and hippocampus of awake animals. MeCP2 loss from interneurons caused alterations in the cortical LFP, a measure of local network activity (Figure 2A-B). MeCP2 depletion caused a modest change in LFP power, measured during periods of quiescence, in the 3-6 Hz range in the Dlx5/6 mutants (p=0.03; Kruskal-Wallis test with Dunn's post-test), but not in other groups (Figure 2C). However, we observed a robust broadband decrease in high-frequency LFP activity in the Dlx5/6 mutants that was replicated in the VIP mutants, but not the PV or SST mutants (Figure 2A). Quantification of high-frequency activity around the gamma (40-55 Hz) band revealed a significant decrease in gamma-range activity in both the Dlx5/6 (p=0.005) and VIP mutants (p=0.04; Kruskal-Wallis test with Dunn's post-test; Figure 2D). We further found a loss of spike-field coherence in the gamma band in Dlx5/6 and VIP mutants (Figure 2-figure supplement 1). By contrast, hippocampal recordings revealed a loss of gamma-range LFP power in the Dlx5/6 mutants that was replicated by the SST mutants (Figure 2-figure supplement 2), suggesting potentially distinct cell-type-specific roles in different brain areas. Perturbation of interneurons during development can result in elevated firing rates due to loss of synaptic inhibition and the reorganization of neural circuits (Close et al., 2012; Rossignol et al., 2013; Batista-Brito et al., 2017). We therefore recorded cortical firing activity in awake mice with Mecp2 mutations (Figure 2B). Single-unit recordings revealed that loss of MeCP2 in the Dlx5/6 mutants led to a three-fold increase in the firing rates of regular-spiking, putative excitatory pyramidal neurons compared to control animals (p=0.004), and this finding was replicated in the VIP (p=0.03) and SST (p=0.002; Kruskal-Wallis test with Dunn's post-test) mutants (Figure 2E).
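The gamma-range comparison above rests on computing relative LFP power in a 40-55 Hz band as a fraction of total power during quiescent periods. A minimal sketch of that computation using SciPy's Welch estimator is shown below; the sampling rate, the synthetic signal, and the normalization band are illustrative assumptions, not the authors' exact pipeline (their analysis code is referenced in the Materials and methods).

```python
# Minimal sketch of a relative band-power computation for a quiescent-period LFP trace:
# power in the gamma band (40-55 Hz) divided by total power, from a Welch PSD.
import numpy as np
from scipy.signal import welch

fs = 1000.0                                                     # assumed LFP sampling rate (Hz)
lfp = np.random.default_rng(0).standard_normal(60 * int(fs))    # stand-in for a 60-s LFP trace

def relative_band_power(signal, fs, band, total_band=(1.0, 100.0)):
    """Fraction of PSD power falling in `band`, relative to `total_band`."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    in_total = (freqs >= total_band[0]) & (freqs <= total_band[1])
    return psd[in_band].sum() / psd[in_total].sum()

print("relative gamma (40-55 Hz) power:", relative_band_power(lfp, fs, (40.0, 55.0)))
```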
In previous work, we found that cortical firing rates are robustly modulated by changes in behavioral state, and are typically increased at the onset of locomotion (L) as compared to quiescence (Q) . Loss of normal VIP interneuron activity reduces this state-dependent cortical modulation (Fu et al., 2014;Batista-Brito et al., 2017). To determine whether MeCP2 loss from VIP interneurons impairs this function, we examined state-dependent modulation in the Mecp2 mutants. Both pan-interneuron MeCP2 loss in the Dlx5/6 mutants (p=0.004) and MeCP2 loss specific to VIP interneurons (p=0.02; Kruskal-Wallis test with Dunn's post-test), but not other interneuron populations, led to decreased state-dependent modulation of single-unit cortical firing rates, measured as a modulation index [(FR L -FR Q )/(FR L +FR Q )] ( Figure 2F). We did not find any differences in neural activity between Mecp2 f/y controls, wild-types, and Vip Cre controls, suggesting that these results arise from cell type-specific Mecp2 loss of function ( Figure 2-figure supplement 3). Behavioral phenotypes associated with interneuron-specific Mecp2 mutation Previous work has linked Mecp2 mutations in GABAergic interneurons to key impairments in motor, repetitive, and social behaviors (Moretti et al., 2005;Chahrour and Zoghbi, 2007;Samaco et al., 2008;Chao et al., 2010;Kaufmann et al., 2012;He et al., 2014;Ito-Ishida et al., 2015). We therefore examined the behavioral consequences of VIP-specific Mecp2 mutations. In agreement with previous work (Chao et al., 2010;Ito-Ishida et al., 2015), we found no significant impairments in percentage of time spent in the open arms of the elevated plus maze task, a measure of anxiety (Carobrez and Bertoglio, 2005), for the Dlx5/6, VIP, PV, or SST mutants compared to controls ( Figure 3A). Likewise, we found no impairment in locomotor behavior in the open field assay in any of the mutants (Figure 3-figure supplement 1A). By contrast, we found a significant impairment in marble burying, a measure of motor and repetitive behavior (Thomas et al., 2009;Silverman et al., 2010), in the pan-interneuron Dlx5/6 mutants compared to the controls (p=0.007). The deficit observed in the Dlx5/6 mutants was fully replicated in the VIP mutants (p=0.001; Kruskal-Wallis test with Dunn's post-test), but not in the PV or SST mutants ( Figure 3B). We tested social interaction behaviors in each Mecp2 deletion line using the three-chamber sociability task (Nadler et al., 2004). Control animals exhibited a significant preference for a chamber containing a conspecific over an empty chamber ( Figure 3C, Figure 3-figure supplement 1B). By contrast, the pan-interneuron Dlx5/6 mutants exhibited a reverse preference for the empty chamber (p=0.002). The VIP mutants (p=0.002; Kruskal-Wallis test with Dunn's post-test), but not the PV or SST mutants, fully replicated the effects of the pan-interneuron deletion, showing a significant preference for the empty chamber over the conspecific. In addition to altered overall social preferences, Mecp2 deletion affected the number of approaches mice made towards conspecifics. Control animals made more approaches to the conspecific than to the empty holding cage ( Figure 3C). By contrast, the Dlx5/6 mutants made more approaches to the empty holding cage than to the conspecific animal (p=0.01). 
The VIP mutants (p=0.01; Kruskal-Wallis test with Dunn's post-test), but not the PV or SST mutants, fully replicated these effects of pan-interneuron deletion, approaching the empty holding cage more than the conspecific. Together, these data suggest that VIP interneurons may contribute to the deficits in social behavior caused by global Mecp2 dysfunction. We did not find any differences in behavior between Mecp2 f/y controls, wild-types, and Vip Cre controls (Figure 3-figure supplement 2), suggesting that these behavioral impairments arise as a consequence of cell-type-specific MeCP2 deletion. Discussion Our results reveal an unanticipated role for VIP interneurons in the Mecp2 loss-of-function model of Rett Syndrome. On the basis of previous characterizations of the Mecp2 model (Chahrour and Zoghbi, 2007;Samaco et al., 2008;Chao et al., 2010;He et al., 2014;Ito-Ishida et al., 2015), we examined interneuron contributions to several major categories of neural dysregulation: mortality and seizure, cortical activity patterns, anxiety and repetitive behaviors, and social behavior. We found that Mecp2 deletion from VIP interneurons recapitulates major phenotypes observed following pan-interneuron Mecp2 deletion at the levels of both neural activity and behavior (Supplementary file 1). Patients who have Rett Syndrome exhibit respiratory impairments and seizure (Chahrour and Zoghbi, 2007), and these phenotypes are replicated in mouse models following deletion of Mecp2 from all GABAergic cells (Chao et al., 2010;Ito-Ishida et al., 2015). In agreement with previous work (Ito-Ishida et al., 2015), we found that Mecp2 loss of function in SST interneurons conferred a late seizure phenotype. However, conditional mutations of Mecp2 in VIP interneurons did not increase seizure or mortality rates. Mecp2 knockout animals had significantly earlier seizure onset and mortality than the Dlx5/6 mutants, supporting previous findings that global loss of MeCP2 in excitatory neurons also contributes to seizure (Goffin et al., 2014;Meng et al., 2016) and that respiratory impairments in the Mecp2 knockouts may increase early mortality (Chen et al., 2001;Guy et al., 2001). In comparison, a previous study did not observe a seizure phenotype in Dlx5/6 mutants, possibly as a result of methodological differences (Chao et al., 2010). We observed a low rate of late-onset seizure in the MeCP2 f/y control animals, indicating that the decrease in MeCP2 levels associated with the conditional allele (Samaco et al., 2008) may be associated with epileptogenic consequences in addition to some mild behavioral phenotypes. Recordings in the cortex of awake mice revealed a robust impact of Mecp2 deletion on the pattern of cortical activity. We found a decrease in high-frequency LFP activity in the Dlx5/6 mutants that was replicated in the VIP mutants. In particular, VIP mutants exhibited a decrease in cortical gamma-range activity, which is associated with cognition and sensory processing (Cardin, 2016;Cardin, 2018a) and which may be impaired in Rett Syndrome (Peters et al., 2015). Although hippocampal gamma-range activity was also affected in the Dlx5/6 mutants, these effects were associated with SST rather than with VIP interneuron mutations, suggesting potential heterogeneity of the circuit-level impact of Mecp2 deletion across brain areas. Loss of MeCP2 in the Dlx5/6 mutants was associated with a three-fold increase in cortical firing rates, and this increase was replicated in the VIP and SST mutants. 
In the healthy cortex, transitions between behavioral states, such as quiescence and arousal or locomotion, are associated with robust modulation of cortical firing rates. (Niell and Stryker, 2010;McGinley et al., 2015;Vinck et al., 2015;Tang and Higley, 2020). However, loss of MeCP2 in the VIP and Dlx5/6 mutants caused a profound dysregulation of this state-dependent modulation. This loss of modulation is unlikely to result from a 'ceiling effect' caused by increased overall firing rates, as putative excitatory neurons in the SST and VIP mutants exhibited equally enhanced firing but only the VIP mutants showed a loss of state-dependent modulation. VIP interneurons are thought to play a key role in regulating the state-dependent modulation of cortical circuits, partly via strong inhibition of SST interneurons and consequent disinhibition of pyramidal neurons (Pi et al., 2013;Fu et al., 2014;Prö nneke et al., 2015;Karnani et al., 2016;Muñoz et al., 2017). We had previously found that developmental perturbation of VIP interneurons by conditional deletion of the schizophrenia-associated gene ErbB4 caused a similar loss of state-dependent cortical regulation (Batista-Brito et al., 2017). Together, these findings suggest that disruption of state-dependent cortical dynamics may be a common outcome of disease mechanisms affecting VIP interneurons. Previous work has highlighted alterations in repetitive and motor behaviors, but not anxiety, following GABAergic deletion of Mecp2 (Chao et al., 2010;Ito-Ishida et al., 2015;Ure et al., 2016). We therefore examined the impact of Mecp2 deletion from VIP interneurons on locomotor and anxiety phenotypes. None of the Mecp2 loss-of-function mutations resulted in anxiety-related or locomotor phenotypes in the elevated plus maze or the open field, respectively, in agreement with previous work (Chao et al., 2010;Ito-Ishida et al., 2015). However, the pan-interneuron Dlx5/6 mutants exhibited deficits in marble burying, a task that is susceptible to altered anxiety and OCDlike behaviors as well as changes in fine motor function (Thomas et al., 2009;Silverman et al., 2010). Cell-type-specific mutations of Mecp2 in VIP interneurons, but not in the PV or SST populations, phenocopied this behavioral impairment. Abnormal or reduced social behavior is a hallmark of many autism spectrum disorder models, and has previously been shown in mice lacking MeCP2 in all GABAergic populations (Chao et al., 2010;Ito-Ishida et al., 2015). We found that the pan-interneuron Dlx5/6 Mecp2 mutants exhibited a reversal of normal social preferences in the three-chamber sociability assay, preferring an empty chamber to one containing a conspecific. Notably, Mecp2 deletion from VIP, but not from PV or SST, interneurons fully replicated this phenotype. SST-specific deletion led to loss of any social preference, suggesting a potential contribution of both VIP-and SST-expressing cells to deficits in social behavior following global Mecp2 loss of function. Together, these results suggest a previously unknown and potentially important role for VIP interneuron dysregulation in social behavior deficits in the Mecp2-deletion model. Overall, our behavioral findings in the Dlx5/6, PV and SST mutants are in general agreement with previous work (Supplementary file 1). The Pvalb Cre line used here largely expresses Cre recombinase in PV interneurons, along with some thalamocortical projection neurons (Hippenmeyer et al., 2005;Cardin et al., 2009). 
In comparison, the Pvalb-2A-Cre line used in some previous work (Goffin et al., 2014;Ito-Ishida et al., 2015) also expresses Cre in a subset of cortical pyramidal neurons and additional thalamic nuclei (Madisen et al., 2010). These differences may contribute to the more severe behavioral phenotypes and early mortality previously observed in the Pvalb-2A-Cre line (Ito-Ishida et al., 2015). Other work examining the consequences of Mecp2 deletion in the Pvalb Cre line likewise found only mild behavioral phenotypes (He et al., 2014). However, as the PV promoter only becomes active postnatally, our findings do not preclude a substantial contribution of embryonic Mecp2 deletion from PV interneurons to Rett Syndrome phenotypes. We find a unique impact of Mecp2 deletion from VIP interneurons. Despite being few in number (Rudy et al., 2011), VIP interneurons are targets of multiple neuromodulatory systems and play critical roles in state-dependent regulation of local neural circuits (Pi et al., 2013;Fu et al., 2014;Garcia-Junco-Clemente et al., 2017;Muñoz et al., 2017), making them a potential key point of vulnerability in neurodevelopmental disease. Although our electrophysiology results are specific to cortex and hippocampus, the three interneuron classes examined here exhibit distinct cellular-and circuit-level properties and play key roles across many brain areas that contribute to behavior. In addition, dysregulation of one GABAergic population is probably amplified by extensive synaptic connectivity with other inhibitory interneuron classes (Pfeffer et al., 2013;Cardin, 2018a). Indeed, our previous work suggests that developmental disruption of VIP interneuron activity may have multiple circuit-level consequences, including loss of synaptic inhibition of other interneurons, altered experience-dependent plasticity, and dysregulated cortical circuit maturation, in addition to ongoing perturbation of normal adult function. Mecp2 loss of function across multiple GABAergic interneuron classes may thus exert diverse influences on neural and behavioral deficits in Rett Syndrome. Materials and methods Animals All experiments were approved by the Institutional Animal Care and Use Committee of Yale University. We used the Dlx5/6 Cre (JAX#008199; Monory et al., 2006), Pvalb Cre (JAX#008069; Hippenmeyer et al., 2005), Sst Cre (JAX#013044; Taniguchi et al., 2011), and Vip Cre (JAX#010908; Taniguchi et al., 2011) mouse lines to target all forebrain GABAergic interneurons, parvalbuminexpressing interneurons (PV), somatostatin-expressing interneurons (SST), and vasoactive intestinal peptide-expressing interneurons (VIP), respectively. We crossed each Cre line to the conditional Mecp2 line (Mecp2 f/f ; JAX# 007177; Guy et al., 2001). In each case, we assayed male mice that were hemizygous for the floxed Mecp2 allele and heterozygous for Cre. All crosses were made on a C57BL/6J background (JAX#000664). Control animals were Cre-negative male mice that were hemizygous for the floxed Mecp2 allele (Mecp2 f/y ). We further compared Mecp2 f/y mice with age-matched wild-type C57Bl/6 mice (JAX#000664) and Vip Cre mice (JAX#010908). In a subset of experiments, we compared the interneuron-specific crosses with male mice from the Mecp2 knockout line (Mecp2 -/y ; JAX#003890; Guy et al., 2001). All behavioral assays were performed at P120 except for those involving the Dlx5/6 Cre+/-Mecp2 f/y animals, which were assayed at P90 due to their early morbidity. 
All behavioral and electrophysiological assays were carried out in animals with no prior seizure incidence. Immunohistochemistry For immunofluorescent staining of brain tissue, mice were perfused with 4% paraformaldehyde and post-fixed for an hour before transfer into successive sucrose solutions at 15% and 30%. Cryosections 20 μm thick were prepared for immunohistochemistry (IHC). Tissue was incubated with 1.5% normal goat serum (NGS) (Life Technologies) and 0.1% Triton X-100 (Sigma) in PBS for 60 min at room temperature. Sections were incubated with primary antibodies (Rat Anti-Somatostatin 1:250 [Millipore MAB354]; Anti-parvalbumin 1:1000 [Sigma P3088]; Anti-VIP 1:250 [ImmunoStar 20077]; Anti-MeCP2 1:250 [Millipore 07-013]) in the blocking buffer overnight at 4˚C. After washing three times with buffer, sections were incubated with secondary antibodies for 1 hr at room temperature (secondary antibodies: Alexa Fluor 488, 594, or 647 [Life Technologies, 1:1000]). Finally, coverslips were mounted using ProLong Gold Mounting Medium with DAPI (Life Technologies) and imaged at 10x. Quantifications were performed in Adobe Photoshop. Pictures were divided into a grid measuring 1 × 1 mm in total and cells were counted in each grid square. The number of cells positive for antibody staining against MeCP2 was counted to assay the proportion of co-expressing cells. Headpost surgery and wheel training For recordings performed in awake animals, mice were initially handled for 5-10 min/day for 5 days prior to headpost surgery. On the day of the surgery, the mouse was anesthetized with isoflurane and the scalp was shaved and cleaned three times with betadine solution. An incision was made at the midline and the scalp resected to each side to leave an open area of skull. Two skull screws (McMaster-Carr) were placed at the anterior and posterior poles. Two nuts (McMaster-Carr) were glued in place over the bregma point with cyanoacrylate and secured with C&B-Metabond (Butler Schein). The Metabond was extended along the sides and back of the skull to cover each screw, leaving a bilateral window of skull uncovered over primary visual cortex. The exposed skull was covered with a layer of cyanoacrylate. The skin was then glued to the edge of the Metabond with cyanoacrylate. Analgesics were given immediately after the surgery and on the two following days to aid recovery. Mice were given a course of antibiotics (Sulfatrim, Butler Schein) to prevent infection and were allowed to recover for 3-5 days following implant surgery before beginning wheel training. Once recovered from the surgery, mice were trained with a headpost on the wheel apparatus. The mouse wheel apparatus was 3D-printed (Shapeways Inc) in plastic with a 15 cm diameter and an integrated axle and was spring-mounted on a fixed base. A programmable magnetic angle sensor (Digikey) was attached for continuous monitoring of wheel motion. Headposts were custom-designed to mimic the natural head angle of the running mouse, and mice were mounted with the center of the body at the apex of the wheel. On each training day, a headpost was attached to the implanted nuts with two screws (McMaster-Carr). The headpost was then secured with thumb screws at two points on the wheel. Mice were headposted in place for increasing intervals on each successive day. If signs of anxiety or distress were noted, the mouse was removed from the headpost and the training interval was not lengthened on the next day.
Mice were trained on the wheel for up to 7 days or until they exhibited robust bouts of running activity during each session. Mice that continued to exhibit signs of distress were not used for awake electrophysiology sessions. Locomotion detection Wheel position was extracted from the output of a linear angle detector. We used a change-point detection algorithm that detected statistical differences in the distribution of locomotion velocities across time (see Vinck et al., 2015;Batista-Brito et al., 2017). Quiescent periods that lasted longer than 20 s were selected for analysis. For analysis of modulation with changes in behavioral state, we selected trials for which the preceding quiescent period lasted longer than 20 s, average speed until the next locomotion offset point exceeded 1 cm/s, and running lasted longer than 2 s. Extracellular recordings LFP recordings were made with tetrodes (Thomas Recording GMBH, Germany) targeted to layers 2/ 3 and 5 of visual cortex and to the CA1 field of the dorsal hippocampus (AP: +1.5-2 mm; ML: 1.2-1.75, Paxinos and Franklin, 2001). Signals were digitized and recorded with a DigitalLynx 4SX system (Neuralynx, Bozeman MT). All data were sampled at 40 kHz and recordings were referenced to the cortical surface. LFP data were recorded with a bandpass 0.1-9000 Hz filter. Spike sorting Spikes were clustered using previously published methods Batista-Brito et al., 2017). We first used the KlustaKwik 3.0 software (Kadir et al., 2013) to identify a maximum of 30 clusters using the waveform energy and the energy of the waveform's first derivative as clustering features. We then used a modified version of the M-Clust environment to separate units manually. Units were accepted if a clear separation of the cell relative to all the other noise clusters was observed, which generally was the case when isolation distance (ID) (Schmitzer-Torbert et al., 2005) exceeded 20 . We further ensured that maximum contamination of the ISI (inter-spike-interval) histogram did not exceed 0.1% at 1.5 ms. Electrophysiology analysis The firing rate was computed by dividing the total number of spikes a cell fired in a given period by the total duration of that period. To examine whether firing rates were significantly changed around locomotion onset, we computed the firing rate in the [À0.5, 0.5] s window around locomotion onset (L; as in Vinck et al., 2015) and compared this to the firing rate in the [À5,-2] s quiescence (Q) period before locomotion onset by computing a modulation index ([FR L -FR Q ]/[FR L +FR Q ]). All LFP power analyses were made using data from quiescent periods after animals had been stationary for a minimum of 20 s and excluding data from within 10 s of the next locomotion bout. Relative power in the specified frequency bands was measured as a ratio between power in those bands and the total power. Spike-field coherence measures were performed as previously described (Miri et al., 2018), analysis code available at https://github.com/jesscardin/Miri-Vinck-et-al (Cardin, 2018b; copy archived at https://github.com/elifesciences-publications/ Miri-Vinck-et-al). Power spectra were normalized to total power for visualization purposes. Seizure detection All mice were handled for at least 10 min each day throughout the study. During the daily handling regime, the mice were assessed for seizures. If a seizure did occur, the mouse was immediately returned to its home cage. 
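Returning to the electrophysiology analysis described above, the sketch below computes the locomotion-onset modulation index ([FR_L - FR_Q]/[FR_L + FR_Q]) from spike and locomotion-onset timestamps, using the [-0.5, 0.5] s onset window and the [-5, -2] s quiescence window. The toy timestamps and the handling of onsets without spikes are assumptions; the authors' actual analysis code is linked above.

```python
# Minimal sketch of the locomotion-onset modulation index: FR_L is the firing rate in
# the [-0.5, 0.5] s window around onset, FR_Q the rate in the [-5, -2] s quiescent window.
import numpy as np

def rate_in_window(spike_times, start, stop):
    """Firing rate (Hz) of one unit within [start, stop) seconds."""
    n = np.count_nonzero((spike_times >= start) & (spike_times < stop))
    return n / (stop - start)

def locomotion_modulation_index(spike_times, onset_times):
    """Mean (FR_L - FR_Q) / (FR_L + FR_Q) across locomotion onsets."""
    indices = []
    for t in onset_times:
        fr_l = rate_in_window(spike_times, t - 0.5, t + 0.5)
        fr_q = rate_in_window(spike_times, t - 5.0, t - 2.0)
        if fr_l + fr_q > 0:          # skip onsets with no spikes in either window
            indices.append((fr_l - fr_q) / (fr_l + fr_q))
    return float(np.mean(indices)) if indices else float("nan")

# Toy data: spike times (s) for one unit and two locomotion-onset times (s)
spikes = np.sort(np.random.default_rng(1).uniform(0, 120, size=400))
onsets = np.array([40.0, 95.0])
print("modulation index:", locomotion_modulation_index(spikes, onsets))
```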
Seizure severity was scored using the Racine scale (1: mouth and facial movement; 2: head nodding; 3: forelimb clonus; 4: forelimb clonus and rearing; 5: forelimb clonus, rearing, and falling). Seizures were defined as events reaching Racine scale levels 4 or 5, with animals exhibiting rearing and forelimb clonus, or rearing, forelimb clonus, and falling.

Morbidity analysis
After weaning at P21, all mice were monitored every day throughout the study. All deaths were noted, and animals were tracked daily until P500.

Behavioral analysis
The elevated plus maze, marble-burying, and sociability assays were performed under low-level (20-25 lux) illumination. In each assay, mice were given 15 min to acclimate to the behavioral assay room. In all cases, the researcher was blind to the genotypes of the mice until after all behavioral data were scored.

Elevated plus maze
Custom Labview software was used to control a camera recording the mouse's locomotion in the maze. At the beginning of the session, the mouse was placed at the center of the maze and allowed to move freely on either arm for five minutes. At the end of the session, the mouse was returned to the home cage and the maze was cleaned for the next mouse. Video recordings of mouse behavior were hand-scored to determine the amount of time spent in the open and closed arms of the maze.

Open field
The open field assay was performed in a 30 cm square box divided into nine quadrants. Custom software (Labview) was used to control a camera recording the mouse's path in the box. At the start of the session, the mouse was placed in the center quadrant of the box and allowed to move freely for 20 min. After the time elapsed, the mouse was returned to the home cage and the box was cleaned for the next mouse. ImageJ software was used to analyze the total distance traversed by the mouse.

Marble burying
Twelve marbles were evenly placed in a cage with 1 inch of clean bedding. Custom Labview software was used to control a camera recording the mouse's activity in the cage. The mouse was placed in the center of the cage with the marbles and allowed to explore the cage for 20 min. At the end of the session, the mouse was returned to the home cage and the marbles were cleaned with a 10% bleach solution. The proportion of marbles buried was analyzed in ImageJ using the 'analyze particles' function to compare the initial and final exposed surface area of the marbles.

Sociability
The sociability apparatus was divided into three equal areas with Plexiglas dividers, each with an opening allowing access to the neighboring chambers. Custom Labview software was used to control a camera that recorded the mouse's activity in the chamber. An unfamiliar age- and sex-matched conspecific mouse (reared in separate cages, C57BL/6 genotype) was placed into a small cylindrical holding cage on one side of the chamber and an identical empty holding cage was placed on the other side. The location of the conspecific was randomly varied across trials. At the beginning of the session, the test mouse was placed in the central chamber and habituated to the central chamber for ten minutes. The dividers were then removed to allow the mouse to move freely among all the chambers for ten additional minutes. At the end of the session, the mice were returned to their home cages and the apparatus was cleaned.
Video recordings of the mouse's behavior were scored to determine the amount of time spent in each of the three partitions and the number of approaches that the test mouse made to the conspecific and the empty holding cage. An approach was defined as the test mouse coming within a 5-cm radius of a cage or making contact with a holding cage. Social preferences were calculated both as comparisons of raw values and as index values for time spent ([Time_C - Time_E]/[Time_C + Time_E]) and approaches ([App_C - App_E]/[App_C + App_E]), where C denotes the conspecific and E denotes the empty container.

Statistical analysis
Paired and unpaired non-parametric tests generated in GraphPad Prism (version 8 for Mac; San Diego, CA) were used throughout the study to accommodate non-normal data distributions. Animals were used as the 'n' in all analyses. Exact p values and estimation statistics are reported in the source data files for all tests. All group data are shown as box-and-whisker plots in which the bars denote the minimum and maximum of the distribution and the box denotes the first and third quartiles and the median. All experiments were approved by the Institutional Animal Care and Use Committee of Yale University (#11317).

Additional files
Supplementary file 1. Consequences of cell-type-specific Mecp2 deletion. Summary table of phenotypes observed in conditional deletion mice with loss of Mecp2 function in GABAergic interneurons (upper) and glutamatergic excitatory neurons (lower). Observed phenotypes are noted as Y/N; assays that were not performed in a given study are left blank. The two distinct Pvalb-Cre mouse lines used by different studies are identified by their JAX line numbers. Transparent reporting form.

Data availability
Source data files are included for each figure and supplementary figure. Analysis code is available at https://github.com/jesscardin/Miri-Vinck-et-al (copy archived at https://github.com/elifesciences-publications/Miri-Vinck-et-al). All data included in this study will be freely available upon request, as the data files and associated intermediate analysis files are very large (400 GB) and depositing the full data is not feasible.
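The locomotion modulation index and the two sociability preference indices defined above all take the same normalized-difference form, (A - B)/(A + B), bounded between -1 and 1. The snippet below is a minimal illustrative sketch of that computation in Python; the function name and the example values are assumptions made for illustration and are not taken from the study's analysis code.

```python
def normalized_difference(a, b):
    """Return (a - b) / (a + b), the index form used for firing rate, time, and approaches."""
    total = a + b
    if total == 0:
        return float("nan")  # undefined when both measurements are zero
    return (a - b) / total

# Locomotion modulation index: FR_L around onset vs FR_Q during quiescence (Hz).
print(normalized_difference(a=12.0, b=4.0))    # 0.5 -> firing rate increases with running

# Sociability indices: conspecific (C) vs empty container (E).
print(normalized_difference(a=210.0, b=95.0))  # time spent (s)
print(normalized_difference(a=18, b=7))        # number of approaches
```

A positive value indicates a bias toward the first measurement (locomotion onset, or the conspecific), a negative value the opposite, and zero indicates no preference.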
Temporomandibular Joint Ankylosis as a Sequel of an Overlooked Condylar Fracture in a Child

Temporomandibular joint ankylosis is an important entity that dentists and maxillofacial surgeons should know about. It manifests clinically as a permanent limitation of mandibular movements, with mouth opening of less than 3 cm. This pathology can have serious functional repercussions, such as mastication problems, speech troubles, eating disorders, and jaw growth hindrance, in addition to the psychological difficulties of coping with such a condition in daily life. Herein, we present a radiological and chronological illustration of the evolution of temporomandibular joint ankylosis following an overlooked traumatic fracture of the mandibular condyle. The present case report involves an 8-year-old patient referred for a gradually evolving mouth opening limitation following a car accident. Computed tomography was helpful as it revealed an osseous block between the left temporomandibular joint surfaces, showing an ankylosis. A posttraumatic cerebral computed tomography scan had been performed at the time of the accident; on review, it revealed an undetected fracture of the left condyle. The aim of this paper is to show how a traumatic ankylosis could have been avoided if enough attention had been paid to the interpretation of immediate posttraumatic computed tomography scans. A thorough dental examination must be carried out once the vital emergency is over. Early diagnosis of temporomandibular joint trauma is a crucial factor in preventing complications, such as ankylosis and its consequent oral dysfunctions. The dentist must systematically suspect a condylar fracture when a child presents with a history of head trauma, especially mandibular trauma. This case should be a reminder that, although the temporomandibular joints are often left out of the initial examination of patients in a vital emergency, they remain highly important structures whose omission, and the resulting undetected dysfunction when lesions are present, can lead to non-negligible medico-legal consequences.

Introduction
Mandibular fractures are the most commonly encountered type of maxillofacial fracture in children. Condylar fractures account for 39% to 52% of all mandibular fractures according to some papers, and reach 70% according to some authors [1]. A large head, a thin cortical bone, and a fragile narrow neck are all features of condylar anatomy in children. They increase the risk of fracture, especially in the context of mandibular trauma [2]. Condylar fractures are usually caused by an indirect blow to the chin or the mandibular angle. In the absence of pain, or when the child is unable to express suffering, lesions can be overlooked and not diagnosed until a complication appears [1,3]. Delayed and inadequate treatment can have serious repercussions, including malocclusion, jaw growth disorders, and facial asymmetry. In some cases, ankylosis can be noted [4]. Herein, we present the role of imaging modalities in the diagnosis and early detection of possible complications of traumatic fractures involving the temporomandibular joints (TMJ), and in patients' follow-up. This paper also highlights the importance of careful interpretation of radiological examinations in preventing condylar fracture complications, especially in pediatric patients.
Case Presentation
An 8-year-old boy was referred to the oral radiology department for further exploration of a severe mouth opening limitation (10 mm), evolving gradually over the previous 3 years with no history of pain episodes reported by the patient or his parents. The patient's caregivers mentioned that he had had a car accident at the age of 5 and that he had been getting progressively thinner since then. Radiological explorations were performed. Panoramic radiography did not allow good visualization of the left TMJ, as this area was blurred (Figure 1). A computed tomography (CT) scan of the TMJs was helpful in showing details of the TMJ osseous structures (Figures 2 and 3). Scans revealed morphological deformities of the temporal and condylar articular surfaces and irregularities of the joint space in the left TMJ. These findings were in favor of an ankylosis of the left TMJ, probably occurring after the trauma caused by the car accident. A cerebral CT scan performed on the day of the accident had revealed a fracture of the left condyle that was overlooked, as the patient reported no pain in that area at that time (Figure 4).

After obtaining written permission from the patient's parents to use the medical records for scientific publication and consent to the treatment plan, surgery involving the left TMJ was carried out under general anesthesia. It consisted of eliminating the bony mass and smoothing both the temporal and condylar articular surfaces, thus recreating the lost joint space, which immediately allowed a degree of freedom for the condyle within the articulation. Anti-inflammatory medication and muscle relaxers were prescribed and taken over 1 month to avoid postsurgical trismus. Physiotherapy was indicated right after the surgery in order to regain normal mouth opening. The patient was asked to perform tasks several times a day, including movements of protrusion, retrusion, lateral deviation, and mouth opening using tongue depressors. After 3 months of rehabilitation, the patient regained a maximum interincisal opening of 30 mm. He was able to communicate via speech and to eat solid food without any discomfort. His weight improved, and his school results improved remarkably. Long-term follow-up could not be carried out, as the patient did not show up for check-up sessions.

Discussion
In this paper, a chronological CT illustration of TMJ ankylosis, which is not commonly found in the literature, is presented. The pictures emphasize the importance of radiological archives as a fundamental pillar of medico-legal justification in cases of complications. Ankylosis is usually painless, and clinical findings do not reveal any joint sound. Ankylosis can develop secondary to trauma, as trauma causes extravasation of blood into the involved joint, leading to disruption of the fibrocartilage integrity and therefore to an increase in fibrous connective tissue [1,3,5]. In the present case, the etiology of the ankylosis was a previous head trauma in which a condylar fracture was overlooked. The ankylotic mass can be mistaken for a benign fibro-osseous tumor (osteochondroma or osteoma) [6,7]. In the present case, the diagnosis of ankylosis was evident given the clinical and radiological context. It is important to know that these tumors do not invade the joint space, which remains visible. The absence of trauma history or other joint diseases (infectious or autoimmune conditions) can help to differentiate this condition from others.
Fibrous ankylosis can also resemble bony ankylosis because of the presence of hypomobility. However, it is the presence or absence of the articular space that differentiates them. Several classifications have been proposed in order to properly assess the extent of ankylosis and therefore to establish an adequate treatment plan, including the surgical method and the nature and quantity of the TMJ reconstruction materials [8-10]. The best-known classification of TMJ ankylosis was established by Sawhney, proposing 4 types of pathological alterations of the joint elements. Although it gives an objective insight into the remodeling of the bony surfaces, this classification is still insufficient with regard to a precise evaluation of the evolution of ankylosis, which may strongly impact the treatment outcomes. It is worth noting that other recognized classifications of TMJ ankylosis are available. The categories of El-Hakim et al. are based on CT scans, dealing with the morphological changes of the TMJ anatomical elements as well as the proximity of the ankylosed mass to the adjacent vital structures, especially the maxillary artery, thus allowing the surgeon to elaborate an adequate surgical treatment plan and to achieve better operative results with the fewest complications possible [8]. Another classification, aiming to assess the medial displacement of the condyles, was proposed by He et al. [10]. For our patient, the ankylosis was fibrous, as a thin joint space was still visible on CT scans. It was classified as Type II according to both classifications.

In general, panoramic radiographs reveal TMJ deformity and complete absence of the joint space, obliterated by a bone formation bridging the ramus and the zygomatic arch [1]. The projection of the superior airways frequently crosses over the condylar neck, producing a thin radiolucent line that may be misleading, especially in the context of trauma. In the present case, on the day of the accident, the patient had an unreadable panoramic radiograph due to superimpositions. The panoramic radiograph performed 3 years later to further explore the patient's mouth opening limitation was compared with the clinical findings, and the diagnosis of ankylosis was then made. With high resolution and multiplanar reconstructions, the CT scan provides further data about the anatomical elements and the surrounding environment of the TMJ. It also provides details about the morphological changes, as it assesses both the medial and lateral poles as well as the region in between without overlap [1,11]. In cases of fibrous ankylosis, CT and CBCT (cone beam computed tomography) usually reveal limited or absent condylar translation as the joint space is narrowed. The cortical bone may show irregularities. In cases of osseous ankylosis, CT findings include partial or complete obliteration of the joint space by a small or large osseous mass that may fuse the condyle and the temporal fossa [12]. For our patient, CT was performed because a cerebral lesion was suspected. This tool allowed the visualization of the fractured condyle, which had previously been overlooked. CBCT is less expensive and less irradiating than CT, which matters especially for pediatric patients, but it is not used for explorations in emergency cases because children are generally not cooperative, leading to movement artifacts [13-15].
In the present case, the condylar fracture took place in the context of a head trauma. Priority was therefore given to the exploration of possible cerebral lesions, and CT was the tool of choice. Besides, CBCT was not performed, as it was not available in our hospitals in the 1990s. To the best of the authors' knowledge, only a few publications have presented a radiological illustration of the evolution of condylar fractures to ankylosis.

The distance between the ankylotic mass and some important structures, such as the internal maxillary artery, mandibular foramen, lateral pterygoid plate, and external auditory canal, should be considered before any intervention. CT studies have also revealed the involvement of the glenoid fossa, foramen ovale, jugular foramen, and mastoid bone in the ankylotic mass [1,16,17]. TMJ ankylosis in children can be a deterrent to normal mandibular growth, especially in the presence of a bilateral problem, giving the young patient a "bird face" appearance. The sequelae become more visible as the child grows [16,18,19]. The complications related to TMJ ankylosis include several oral dysfunctions. Moreover, this pathology deeply affects the child's facial skeletal development. Facial dysmorphosis caused by an early traumatic ankylosis can cause psychological stress and therefore negatively impacts the patient's quality of life [5,20]. Surgical treatment has been shown to offer an overall improvement in pediatric patients' welfare, which is confirmed by their caregivers [21,22]. This therapeutic choice entails risks of injury to the facial nerve, middle meningeal artery, and maxillary artery [17,23,24]. In this case, the patient's maximal mouth opening was severely affected by the ankylosis, as it was restricted to 10 mm. This hugely hindered the patient's ability to make mandibular movements and thus to adequately eat various foods (he was only consuming liquid food), resulting in progressive weight loss. He was incapable of properly communicating with peers at school. As the patient was a growing child, mandibular growth was slowed down, leading to both gnathic and dental malocclusion. Indeed, the patient was suffering from mandibular retrognathism and a unilateral cross-bite of the left posterior teeth. Other complications may include obstructive sleep apnea if the ankylosis persists in the long term [25].

Multidimensional radiological examination of traumatized patients should focus on the search for condylar and subcondylar fractures to avoid the risk of ankylosis. CT allows a thorough study of the fracture lines and their orientation. Matching the reconstruction plane to the fracture line helps to visualize the fracture along its entire length. A narrow window setting allows visualization of the inflammatory and/or infectious complications of a TMJ fracture in the surrounding soft tissues (thickening or abscess of the lateral pterygoid, medial pterygoid, masseter, and temporal muscles, or septation of the subcutaneous fat). Once a TMJ fracture is diagnosed, and particularly in pediatric patients, a multidisciplinary decision to treat should be taken immediately and discussed with the patient's family [26], thus avoiding any possible alterations in the child's facial and overall growth [5,18,19,27,28]. In the present case, if a jaw mobilizer had been indicated right after the trauma, the ankylosis would have been minimized, if not avoided. Indeed, in the cerebral CT performed right after the trauma, a fracture of the head of the condyle was visible, but it was overlooked.
Conclusion
TMJ ankylosis can have a negative impact on facial growth in young patients and can therefore be a huge impairment to a person's physical and psychological development, as many functions can be affected by this disorder, mainly mastication, speech, and swallowing [29]. The aim of this paper was to show the crucial need for suspecting and establishing the diagnosis of condylar fractures within an adequate timeframe, especially for children with a history of trauma, even in the absence of external signs of head injury. This implies the medical responsibility of radiologists, dentists, and other healthcare practitioners involved. Any pediatric patient presenting to the dental office or to the emergency department with a head trauma context involving mainly the mandible must be referred for further exploration of the TMJ by CT or CBCT scans. The latter are becoming more available and accurate and are considered more economical in terms of both radiation dose and cost. It is important to note that early physiotherapy should be indicated immediately to restore normal masticatory activity, allow mandibular growth to continue, and thus prevent ankylosis [30]. We recommend that traumatized patients engage in physiotherapy as a prophylactic measure even if TMJ radiological examinations are not conclusive. A long follow-up period, extending until the end of mandibular growth, must accompany surgical treatment in order to detect early signs of recurrence, which remains possible due to the high regenerative and remodeling capacity in children [29,31].

Figure 1: Panoramic radiograph showing the ankylotic mass on the left condyle blurring the joint space (black arrow).
Figure 2: Axial computed tomography slice in bone rendering showing the ankylotic mass (black circle) located laterally to the left condyle.
Figure 3: Frontal computed tomography reconstruction in bone rendering showing alteration of the articular surfaces of the lateral aspect of the left temporomandibular joint. Note the persistence of the joint space in the medial aspect (hatched black arrow).
Figure 4: Immediate posttraumatic computed tomography scan. Axial slice in bone rendering showing a fracture of the medial pole of the left condyle (white circle).
Buffered Qualitative Stability explains the robustness and evolvability of transcriptional networks The gene regulatory network (GRN) is the central decision‐making module of the cell. We have developed a theory called Buffered Qualitative Stability (BQS) based on the hypothesis that GRNs are organised so that they remain robust in the face of unpredictable environmental and evolutionary changes. BQS makes strong and diverse predictions about the network features that allow stable responses under arbitrary perturbations, including the random addition of new connections. We show that the GRNs of E. coli, M. tuberculosis, P. aeruginosa, yeast, mouse, and human all verify the predictions of BQS. BQS explains many of the small- and large‐scale properties of GRNs, provides conditions for evolvable robustness, and highlights general features of transcriptional response. BQS is severely compromised in a human cancer cell line, suggesting that loss of BQS might underlie the phenotypic plasticity of cancer cells, and highlighting a possible sequence of GRN alterations concomitant with cancer initiation. DOI: http://dx.doi.org/10.7554/eLife.02863.001 Introduction At every level of organisation, biological entities, such as genes, proteins and cells, function as ensembles. Interaction networks are therefore a fundamental feature of biological systems, and a vast amount of analysis exploring the organisation of biological networks has been performed Barabasi and Oltvai, 2004;Alon, 2006;Buchanan et al., 2010). This analysis has provided interesting insights into the features of these networks (Barabasi and Oltvai, 2004;Brock et al., 2009;Tyson and Novák, 2010;Ferrell et al., 2011;Liu et al., 2011;Cowan et al., 2012), and has led to new methodologies for characterizing their topologies. However, one might argue that this work has had less impact on our understanding of the reasons underlying the network topologies observed and on the possible selective pressures leading to the emergence of common network features. Here we present a simple theory, Buffered Qualitative Stability (BQS), motivated by biological robustness, which has strong explanatory power and provides a number of hard, readily verifiable predictions for the topological structure of interaction networks, at both global and local scales. Besides leading to new predictionsthat are consistently verified-BQS provides a theoretical justification for the ubiquitousness of network features already observed. BQS is therefore an important step in providing a general mechanistic explanation for the overall structure of GRNs at different scales and in shedding new light on previous observations. Robustness is a remarkable feature of living organisms allowing them to tolerate a wide variety of contingencies, such as DNA damage, limitations in nutrient availability, or exposure to toxins (Lopez-Maury et al., 2008;MacNeil and Walhout, 2011). Although much is now known about how cells respond to particular stresses or environmental cues, little is known about how cells remain stable and respond appropriately whatever the contingency. Over evolutionary time it is also advantageous for organisms to be robust to genetic changes, including those that occur as a consequence of the shuffling of genes during sexual reproduction. 
In order for cells to be fully robust, changes to any of the thousands of individual quantitative parameters-for example the concentration of a transcription factor or its affinity for its cognate DNA sequence-cannot be critical because contingencies may cause these to change. We propose that the robustness of a biological system should therefore depend on qualitative, not quantitative, features of its response to perturbation. Robustness is a complex and fundamental feature that can be formalised in many ways (Jen, 2003;Silva-Rocha and de Lorenzo, 2010). Features commonly associated with robustness include resistance to noise, redundancy and error-correction. Here we will focus on an important component of robustness: the ability of a system at equilibrium to respond to a perturbation by returning to its equilibrium state. Such a feature, generally called 'stability', is essential to allow a system to properly operate in noisy conditions and withstand unexpected environmental challenges. This type of robustness has been studied before (Quirk and Ruppert, 1965;Puccia and Levins, 1985) and has been applied to economics (Quirk and Ruppert, 1965;Hale et al., 1999), ecology (May, 1973a;Puccia and Levins, 1985) and chemistry (Tyson, 1975). However, this notion has never been used to predict network features beyond simple topological properties required by the 'rules' that allow such stringent robustness and has not previously been applied to molecular cell biology, or the evolutionary pressures shaping the behaviours of living organisms. Transcriptional regulation plays a central role in the behaviour of cells in response to environmental cues and is aberrant in many diseases (Lee and Young, 2013). Moreover, networks representing eLife digest The genomes of living organisms consist of thousands of genes, which produce proteins that perform many essential functions. Cells receive signals from both their internal and external environments, and respond by changing how they express their genes. This allows a cell to make the right amount of different proteins when needed. The proteins that a cell produces can then, in turn, influence how the cell's genes are expressed. This set of interactions between genes and proteins is called a gene regulatory network, and is akin to a computer program that the cell runs to define its behaviour. At present, we understand very little about why these networks take on the forms seen in living cells. A remarkable feature of living organisms is their ability to withstand an extremely wide variety of predicaments, such as DNA damage, physical trauma or exposure to toxins. This ability, generally called robustness, requires a cell to rapidly activate different gene sets and maintain their activity for as long as necessary. However, very little is known about how cells are programmed to respond appropriately, whatever happens, and keep themselves in a stable state. Albergante et al. propose that a fully robust gene regulatory network should be able to stabilize itself. This means that the robustness of a gene regulatory network should only depend on how it is wired up, and not on quantitative changes to any features that may change unpredictably-for example the concentration of a protein. By analysing data that is already available about gene regulatory networks in a wide selection of organisms ranging from bacteria to humans, Albergante et al. 
show that all known gene regulatory networks are wired up in a way that any quantitative change to the network will not cause the state of network to change. In addition, gene regulatory networks tend to remain stable even if new regulatory links are randomly added. Albergante et al. call this property Buffered Qualitative Stability (BQS): the network is qualitatively stable because its state does not change when the activity of particular regulatory links in the network changes, and it is buffered against its stability being compromised by the random addition of new links. Albergante et al. also found that the gene regulatory network of a cancer cell does not match up with the predictions of BQS, suggesting that the robustness of the network is compromised in these cells. This could explain why cancer cells are able to easily change their characteristics in response to changes in the environment. In addition, using BQS to analyse the gene regulatory network of bacteria such as E. coli reveals points in the network that, if disrupted, would make the network unstable, potentially harming the cell. Therefore, in the future, an understanding of BQS could help efforts to design new drugs to treat a range of infections and diseases. Research article transcriptional interactions have been derived for diverse organisms. These networks, termed 'gene regulatory networks' (GRNs), comprise directed links between pairs of genes. For a given pair of linked genes, one encodes a transcription factor (TF) that regulates the expression of the other (Buchanan et al., 2010). Figure 1 shows the GRN of Escherichia coli, with TFs coloured red and the arrows colourcoded according to the number of genes regulated by the source TF. Systematic network analysis of GRNs is possible because comprehensive and high quality GRN datasets are available for different organisms (Lee et al., 2002;Harbison et al., 2004;Luscombe et al., 2004;MacIsaac et al., 2006;Galan-Vasquez et al., 2011;Sanz et al., 2011;Garber et al., 2012;Gerstein et al., 2012;Salgado et al., 2012). Here we consider the hypothesis that to confer robustness and promote evolvability, GRNs must be stable to changes in interaction parameters and also stable to the addition of new regulatory links, that is to changes in the structure of the GRN itself. The type of robustness that we hypothesize ensures that the transcriptional state of a cell remains largely stable in response to random perturbations. We show that published GRNs, including those of E. coli, M. tuberculosis, P. aeruginosa, S. cerevisiae, mouse and humans are robust in this way, a property we term 'Buffered Qualitative Stability' (BQS). Remarkably, the only published GRN of a cancer cell line deviates strongly from BQS, suggesting that the loss of BQS may play an important role in cancer. Figure 1. The E. coli GRN. The E. coli GRN derived from Salgado et al. (2012) using two evidence codes. Genes that are reported to regulate transcriptionally at least one other gene, that is transcription factors (TFs), are represented as red circles; the other genes are represented by blue circles. Arrows indicate a transcriptional interaction from the TF to the target gene. The arrows are colour-coded according to the number of genes regulated by the source TF. Note the logarithmic scale in the colour coding. DOI: 10.7554/eLife.02863.003 GRNs are qualitatively stable Interaction networks are ubiquitous in biology, and robustness in their response to perturbation is a desirable property in many circumstances. 
A mathematical theory called 'Qualitative Stability' has determined how the topological structure of a network is related to robustness (Quirk and Ruppert, 1965). This theory, discussed in 'Materials and methods', shows that certain network topologies remain stable even if the strength of any of the network interactions is varied in an arbitrary way. Qualitatively Stable GRNs would be robust, for example, to changes in the concentration of a transcription factor or its affinity for its cognate DNA sequence. A primary requirement for Qualitative Stability is the absence of long feedback loops (meaning, in the case of GRNs, feedback loops involving three or more genes) regardless of whether the connections comprising the loops are stimulatory or inhibitory. In addition, 2-node feedback loops can be Qualitatively Stable depending on the precise nature of their interactions (see 'Materials and methods'). The danger inherent in feedback loops was first analysed by James Clerk Maxwell who showed that mechanical governors regulating the output of steam engines can fail if the input changes faster than the system response, causing 'an oscillating and jerking motion, increasing in violence till it reaches the limit of action of the governor' (Maxwell, 1868). Because there is an inevitable time lag between a transcription factor (TF) binding to the promoter of a gene and the production of the protein product of that gene, this form of instability can occur if GRNs contain feedback loops consisting of three or more TFs. This concept is supported by the behaviour of the repressilator, a well-known gene circuit consisting of a 3-gene feedback loop, which has been shown to produce oscillations of increasing intensity in vivo (Elowitz and Leibler, 2000). We therefore examined the structure of organisms for which system-wide GRNs have been published, including three bacteria-E. coli (Salgado et al., 2012), M. tuberculosis (Sanz et al., 2011), P. aeruginosa (Galan-Vasquez et al., 2011)-the yeast S. cerevisiae (Harbison et al., 2004), and human (represented by the GM12878 cell line) (Gerstein et al., 2012). The main Figures present data from E. coli, S. cerevisiae and human, whilst analysis of M. tuberculosis and P. aeruginosa plus additional confidence levels for E. coli and S. cerevisiae, and additional yeast datasets (Lee et al., 2002;Luscombe et al., 2004;MacIsaac et al., 2006) are reported in the Figure Supplements and confirm our main results. A review of available datasets and the rationale for our selection is given in 'Materials and methods'. On studying feedback loops in the GRNs of these organisms (Figure 2A-C, Figure 2-figure supplement 1A,B, lightly shaded bars), we find that P. aeurginosa, S. cerevisiae and human GRNs have no feedback loops comprising three or more genes. The E. coli GRN has no feedback loops comprising four or more genes, and only two 3-gene feedback loops. M. tuberculosis has two 3-gene feedback loops and one 4-gene feedback loop. Notably, all the 3-gene feedback loops observed in real GRNs share the same peculiar structure, with implications discussed below. In contrast, when networks of the size and connectivity of the biological GRNs are constructed with randomly placed links, they display an exponential increase in feedback loops consisting of three or more genes, which number in the thousands ( Each of 1000 randomly simulated E. coli networks had at least one long feedback loop. 
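As a concrete illustration of the loop census described above, the following Python sketch counts simple directed cycles by length in a toy GRN and in a size-matched random network. It uses networkx; the toy edge list and the simple rewiring scheme are illustrative assumptions, not the published datasets or the exact randomization procedure used in the paper.

```python
from collections import Counter
import random

import networkx as nx


def loop_census(edges):
    """Count simple directed cycles by the number of genes involved (2, 3, ...)."""
    g = nx.DiGraph(edges)
    return Counter(len(cycle) for cycle in nx.simple_cycles(g) if len(cycle) >= 2)


def random_network_like(edges, seed=0):
    """Random directed network over the same genes with the same number of links."""
    rng = random.Random(seed)
    nodes = sorted({n for edge in edges for n in edge})
    new_edges = set()
    while len(new_edges) < len(edges):
        u, v = rng.sample(nodes, 2)  # distinct nodes, so no self-regulation
        new_edges.add((u, v))
    return sorted(new_edges)


# Toy GRN built from feedforward-style regulation: it contains no feedback loops at all.
grn = [("tf1", "tf2"), ("tf1", "tf3"), ("tf2", "tf3"),
       ("tf2", "g1"), ("tf3", "g1"), ("tf3", "g2")]
print("real:  ", loop_census(grn))
print("random:", loop_census(random_network_like(grn)))  # may contain cycles of various lengths
```

For a network the size of a real GRN, enumerating every simple cycle can be expensive, so in practice one would bound the cycle length being searched for; the qualitative comparison between real and randomized networks is unchanged.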
The vastly different abundances of feedback loops clearly demonstrate the profound difference in topologies between real and random networks. Statistical analyses suggest that there is an extremely small probability (<10 −6 ) that the absence of long feedback loops with >3 genes in E. coli is a chance event ( Qualitative Stability of GRNs prevents the catastrophe described by Maxwell from occurring, even when any of the myriad quantitative system parameters (TF abundance, promoter availability, the rate of transcription, etc.) are altered. This stability provides a type of 'damping', which will tend to restore the state of the GRN when challenged with contingencies that might otherwise induce chaotic or unpredictable behaviour. Notably, the damped oscillatory response expected from a stable system has been observed in single cell experiments (Tay et al., 2010). BQS predicts large-scale properties of GRNs In principle the Qualitative Stability observed in GRNs might be easy to break by addition of another link to the network. For example, a long feedback loop can be created by the addition of a feedback connection from a TF lower down in a network path to a TF higher up in that path. This could occur, for example, through a mutation in the promoter of the target gene allowing it to be bound by a new TF. It is also possible that stress conditions could cause TFs to act inappropriately at promoters they do not normally regulate. In this way Qualitative Stability could be lost, and GRNs could become unstable. Thus, we predict that if long feedback loops are detrimental because of their instability, then GRNs would be configured to minimize destabilization via the addition of new connections. We call network paths that can be transformed into loops by the addition of a single new link 'incomplete feedback loops' (Figure 3D-E). The abundance of incomplete feedback loops in the GRNs of E. coli, S. cerevisiae and human is shown in Figure 3A-C (lightly shaded bars). Data for M. tuberculosis and P. aeruginosa are shown in Figure 3-figure supplement 1A,B. For each of these GRNs there are <2000 incomplete feedback loops and they tend to be of a relatively small size. A similar empirical observation has been made regarding transcriptional cascades (Rosenfeld and Alon, 2003;The modENCODE Consortium et al., 2010). This is in stark contrast to comparable random networks ( Figure 3A-C, (Figure 3-figure supplement 4). Note that the distribution of incomplete feedback loops is indicative of the different topological structures that can be observed in the network, and is not necessarily monotonically decreasing (Figure 3-figure supplement 5). The striking difference between real and random networks in the preponderance of both the number of long feedback loops and incomplete feedback loops strongly suggests a profound selective pressure on living organisms to adopt GRN topologies that are stable under all parameter regimes. In addition, these topologies efficiently prevent random mutations from introducing possible sources of destabilization. Note that feedback loops involving more than three genes are also highly susceptible to the creation of additional long feedback loops making them strongly disadvantageous to stability. We say that networks configured to minimize the number of real and incomplete long feedback loops possess Buffered Qualitative Stability (BQS). 
Networks having this property are stable to perturbations and are also buffered against the potentially destabilising effects that occur when new links are added. If BQS is really a fundamental design principle of GRNs, as our data seem to suggest, they should display a range of other properties, which we describe and examine henceforth. BQS predicts intermediate-scale properties of GRNs An important global network property constrained by BQS is the degree of cross-regulation between TFs. Since a TF must be both regulated and regulating to take part in a feedback loop, one way that GRNs could satisfy BQS and minimise the risk of unstable loops being formed, is by having a high proportion of TFs that are not regulated by other TFs. Consistent with this prediction, the percentage of unregulated TFs in E. coli, S. cerevisiae, M. tuberculosis, P. aeruginosa, human and other yeast datasets is very high ( Figure Some TFs, however, must be regulated by other TFs in order for the GRN to be able to combine information from multiple pathways and change state depending on different circumstances. In order to satisfy BQS, highly connected TFs should either be regulated by a large number of other TFs or should themselves regulate a large number of target TFs, but not both (since otherwise the TF in question is significantly more susceptible to becoming part of a 3-gene feedback loop after addition of a link); in other words, highly centralised control is disallowed. This prediction of BQS is indeed verified in E. coli ( Figure 4B . The E. coli TF with the largest number of 'outgoing regulatory connections' regulates 38 other TF genes, but is itself regulated by only one TF; the TF with the largest number of 'incoming regulatory connections' is regulated by nine other TFs but itself regulates only one TF gene. There are no E. coli TFs that are both highly regulated and highly regulating (in fact, there are no TFs regulating >2 other TFs that are themselves regulated by >2 TFs). BQS does not even allow central control to be split by connecting highly regulated TFs to highly regulating TFs, as this would create a large number of incomplete loops. Consistent with this idea, no E. coli or human TFs with in-degree >2 regulate TFs with out-degree >2 ( Figure 4C, Figure 4-figure supplement 3D). In S. cerevisiae there are several TFs that exceed these limits, but this represents only a small minority of TFs ( Figure 4-figure supplement 3C). These results show that BQS strongly favours distributed control over central control, and provide another example of BQS being a key determinant of the topology of GRNs. BQS predicts small-scale properties of GRNs Our discussion so far has focused on the effect of BQS on large-and intermediate-scale global properties of GRNs. BQS also makes strong predictions about the small-scale local structure of GRNs. To investigate this, we dissected each of the GRNs into a series of small motifs comprising three or four genes (Alon, 2006;Milo et al., 2002). A single motif can, in principle, break Qualitative Stability by forming a feedback loop composed of three or more genes. As we have shown above, such motifs are essentially absent from real GRNs. However, motifs may be susceptible to feedback loop formation through the Comparisons are made to randomly simulated networks containing the same number of genes, TFs, and connections. (B) Each of the 154 TFs in the E. 
coli GRN is plotted in the space of incoming regulatory connections (number of regulatory links from other TFs) and outgoing regulatory connections (number of regulatory links to other TFs). Solid lines indicate isoclines of relative probability (normalized to unity at reference point RP) for creating a 3-gene feedback loop under random addition of a link. Percentages and colours indicate the fraction of TFs within each band demarcated by isoclines. (C) Each of the 154 TFs in the E. coli GRN is plotted in the space of incoming regulatory connections (number of regulatory links from other TFs) and outgoing regulatory connections of regulated TFs (number of regulatory links originating from a TF that is regulated by the selected TF). In (B and C), schematic motifs are provided; black solid arrows indicate the motif, while red dashed arrows indicate potential arrows whose addition results in the formation of long feedback loops. For each motif, the number of actual arrows is indicated in black and the number of potential destabilising arrows is indicated in red. DOI: 10.7554/eLife.02863.015 Figure 4. Continued on next page addition of a link, and we can therefore speak of 'buffered motifs' as motifs that are resilient to this, and therefore enhance BQS locally. Note that, to prevent possible biases introduced by the large number of non-TF genes, only motifs completely formed by TFs were considered. Using symmetry arguments, we grouped 3-and 4-gene motifs into buffered and non-buffered categories, which are equi-probable in a random network (confirmed by These findings, besides confirming the role of BQS, provide additional support and potential explanations for the prevalence of well-studied motifs such as the feedforward loop (a stable motif) and the bi-fan (a buffered stable motif) as building blocks of networks (Alon, 2006;Milo et al., 2002). Indeed, the most buffered 3-and 4-gene motifs studied here are, respectively, latent feedforward loops and latent bi-fans. Measuring BQS When a network uses only a subset of the possible edges in a motif, new connections can be added to expand the network ( Figure 5G). In general, some of these connections will create long feedback loops (red dashed arrow), while others will not (green dashed arrow). As we have shown, robustness appears to exert a strong selective pressure on the topology of GRNs. To assess the extent of this pressure we used an extensive computational approach to estimate the probability that a random edge addition between two TFs creates a long feedback loop in the GRNs. All the possible edge insertions were tested in the real GRNs of E. coli, S. cerevisiae and human, and the values were compared with those estimated in the corresponding random networks. The extent of buffered stability is quite remarkable: 4363 new interactions can be added to the human GRN, but only 48 of them will create long feedback loops (Supplementary file 1). By contrast, addition of single extra links to random networks would lead to the creation of approximately 2000 different feedback loops on average: a hit rate of approximately 50%. The probability that real GRNs will gain long feedback loops by random edge additions is low ( Figure 5H, lightly shaded bars), and is significantly smaller than that found for comparable random networks ( Figure 5H, heavily shaded bars). Nevertheless, these probabilities are non-zero, indicating a trade-off between stability and the need for cells to regulate gene expression. 
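The buffering measure described in this section — the probability that inserting a single new TF-to-TF link creates a feedback loop of three or more genes — can be estimated by exhaustively testing every absent link. The sketch below is a minimal illustration of that definition in Python using networkx; the helper name and the toy cascade are assumptions made for illustration, not the authors' code.

```python
import networkx as nx


def long_loop_creation_probability(tf_edges):
    """Fraction of absent TF->TF links whose insertion creates a cycle of >= 3 genes.

    A new edge u -> v closes a long loop exactly when the existing network
    already contains a directed path from v back to u through at least one
    intermediate gene (i.e. a path of three or more nodes).
    """
    g = nx.DiGraph(tf_edges)
    tfs = list(g.nodes)
    candidates, destabilising = 0, 0
    for u in tfs:
        for v in tfs:
            if u == v or g.has_edge(u, v):
                continue  # only genuinely new links are tested
            candidates += 1
            if any(len(path) >= 3 for path in nx.all_simple_paths(g, v, u)):
                destabilising += 1
    return destabilising / candidates if candidates else 0.0


# Toy TF cascade a -> b -> c -> d: 3 of the 9 possible new links
# (c->a, d->a, d->b) close a long feedback loop.
cascade = [("a", "b"), ("b", "c"), ("c", "d")]
print(long_loop_creation_probability(cascade))  # 3/9 = 0.33...
```

The same exhaustive test applied to a GRN-sized network gives the kind of figure quoted above (48 destabilising insertions out of 4363 possible ones in the non-cancer human network), although for large networks one would memoise reachability rather than enumerate all simple paths.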
BQS highlights critical network modules Qualitative Stability is compromised to a very small degree in the GRN of E. coli by the two small, but 'illegal' feedback loops shown in Figure 6A,B and highlighted in Figure 6-figure supplement 1. Three aspects are noteworthy. Firstly, of the seven 2-node feedback loops in the E. coli GRN, four are embedded into the two potentially unstable motifs depicted in Figure 6A,B, whilst the other three are isolated from other feedback loops and so are either stable or likely to act as switches. Secondly, the genes comprising the two illegal motifs are involved in drug resistance (Ariza et al., 1995;Martin and Rosner, 2002;Nishino et al., 2008;Keseler et al., 2011; Figure 6C) and/or acid resistance (Sayed et al., 2007;Keseler et al., 2011; Figure 6C). This raises the possibility that limited instances of deviation from BQS may arise through evolution as a short-term expediency allowing survival in a changing environment. Thirdly, both loops share a remarkably similar sub-structure: two linked 2-gene feedback loops connected by one link into a 3-node feedback loop. It has been previously shown in chemical networks that such a configuration can display chaotic behaviour (Sensse and Eiswirth, 2005). It is tempting to speculate that these illegal motifs act as localized sources of chaos, allowing a cell population to quickly explore very diverse gene expression levels, thus accelerating the emergence of resistant phenotypes (Lopez-Maury et al., 2008). Moreover, compatible with the idea that chaotic behaviour should be tightly controlled, most of the genes depicted in Figure 6A,B are highly regulated ( Figure 6D). These ideas are also supported by the M. tuberculosis GRN: all the four genes involved in the formation of illegal motifs are implicated in stress responses (He et al., 2006;Rodriguez et al., 2002), and the two 3-gene feedback loops share the same topology observed in E. coli ( Figure 6-figure supplement 2A,B). In addition, of the six 2-node feedback loops observed in the M. tuberculosis GRN, three are isolated from other feedback loops and the other three are embedded into the two potentially chaotic motifs. Finally, it is noteworthy that the 4-gene feedback loop in the M. tuberculosis GRN is formed by joining the two 3-gene feedback loops ( Figure 6-figure supplement 2C), consistent with our earlier observation that long feedback loops are susceptible to the formation of additional embedded feedback loops. P. aeruginosa has no long (>3 genes) feedback loops, and of its seven 2-node feedback loops five are isolated, and so are either stable or likely to act as switches, whilst the other two are linked but are of a form (positive-negative) that makes them Qualitatively Stable. Both of the S. cerevisiae 2-node feedback loops are isolated, whilst the human GM12878 cell line has only a single 2-node feedback loop which is therefore isolated. Curiously, the only long feedback loop observed in the yeast GRN derived by Lee et al. (2002) presents the same potentially chaotic topology discussed above. However, this illegal motif is not present in the more recent GRN derived by Harbison et al. (2004) (Figure 6-figure supplement 3). BQS is lost in cancer Cancer cells have a dysregulated behaviour, breaking the 'social contract' necessary for maintenance of a healthy multicellular organism. Even within an individual tumour a wide range of cellular phenotypes is often observed (Marusyk et al., 2012). 
To some degree, this is likely to be a consequence of the genotypic heterogeneity of tumours. However, cancer cells also appear to be phenotypically less stable than normal cells (Brock et al., 2009;Gupta et al., 2011). Might this phenotypic instability result from a breakdown of BQS in cancer cells? There is currently only a single cancer cell line, the human leukaemia cell line K562, for which a high quality system-wide GRN has been derived (Gerstein et al., 2012). We therefore investigated the topological differences between the GRNs of K562 and the human non-cancer cell line GM12878. Figure 7 and Figure 7figure supplement 1 show that the feedback loop distribution in the GRN of the leukaemia cell line is strikingly different from that of the non-cancer cell line. In K562, significantly more moderately long feedback loops of 4-8 genes are present ( Figure 7A,B), and the number of loops formed by 3-5 genes is comparable to the number expected in a random network ( Figure 7A). The number of incomplete feedback loops is also significantly larger in K562 ( Figure 7C,D). In addition, poorly buffered motifs are abundant ( Figure 7E,F) and the TFs cross regulation is less extreme (Figure 7-figure supplement 2A,B). Interestingly, only a limited number of TFs contribute to the formation of the 59 long loops in the K562 GRN ( Figure 7G) and it can be substantially 'stabilized'-that is most of the long feedback loops can be removed-by the removal of single pairs of TFs: FOSL1 and JUNB, JUNB and EGR1, or EGR1 and CEBPB. JUNB and FOSL1 are proto-oncogenes that are components of the AP-1 transcription complex (Kouzarides and Ziff, 1989) and have been reported to be part of a long feedback loop in ovarian cancer (Stelniec-Klotz et al., 2012); EGR1 is a regulator of tumour suppressor genes and an oncogene itself (Baron et al., 2005); and the TFs of the C/EBP family have been described as both tumour promoters and suppressors (Nerlov, 2007). These genes are also active in the non-cancer cell line GM12878, yet their destabilizing role in cancer arises from a rewiring of the network. GM12878 satisfies BQS whilst K562 comparatively does not, and yet the two GRNs have similar numbers of genes and regulatory connections (in fact, the K562 TF network has a lower link density than that of GM12878). Finally, the probability of introducing long feedback loops by random edge additions is significantly larger in the cancer cell line than in non-cancer cells, but still lower than the value expected in a random network ( Figure 7H). These findings suggest that cancer cells break BQS and have a vastly less stable GRN than normal cells, which is however less unstable than expected in a random network. Certain features of GRNs, such as rare, long incomplete feedback loops, make them more susceptible to the formation of long feedback loops when new regulatory connections are added (Figure 7-figure supplement 3A-G). Therefore, we investigated how many of the long feedback loops observed in K562 cancer cells could have originated from these relatively unbuffered network structures in noncancerous GM12878 cells. We find that the three genes that contribute the most to the formation of long feedback loops in the cancer cell line (EGR1, FOSL1, and JUNB) are all involved in the formation of the longest incomplete feedback loops observed in the non-cancer cell line ( Figure 7G). 
As highlighted earlier (Supplementary file 1), single insertions of a new connection into the non-cancer human cell line can create 48 different long feedback loops. Of these 48 potentially destabilising interactions of the non-cancer cell line, three are actually observed in the cancer cell line, namely JUNB-BCLAF1, BATF-EBF1, and JUNB-EBF1. The likelihood of these potentially destabilising interactions occurring in the cancer cell line is 3/48 = 0.063. In comparison, there are a total of 4363 additional links that could be made between the TFs of the non-cancer cell line. Of these potential links, 56 are observed in the cancer cell line. Hence, the links observed represent 56/4363 (0.013) of all the possible ones. Therefore, the destabilizing interactions are five times more abundant than would be expected by chance (0.063 vs 0.013), consistent with the idea that destabilizing interactions were positively selected for during the microevolution process underlying cancer progression. This suggests that, despite the diverse biological histories of these two cell lines (the networks of GM12878 and K562 share only four common links between TFs), the progression of K562 into a cancerous state has involved changes to regions of the GRN in the progenitor normal cells that displayed weaker BQS. Such genetic factors are therefore likely to play a pivotal role in the process of cancer progression in other cell types. BQS and transcriptional response The eukaryotic GRNs analysed so far are static 'snapshots' of potential transcriptional interactions of a population of cells under rich media growth. As discussed in 'Materials and methods', such conditions are ideal to minimize cell heterogeneity and to obtain high quality equilibrium networks that are ideally suited for theoretical analysis. However, during the lifetime of an organism GRNs are dynamic and the set of actual transcriptional regulations can change (Luscombe et al., 2004). GRNs transitioning from one transcriptional program to another are unlikely to be at steady state and, under these circumstances, transcriptional robustness may be less important. We decided to test if BQS could provide new insights into the role of robustness during such structural changes. To this end, we used the data presented in Garber et al. (2012) to reconstruct the GRN of murine dendritic cells at four different time points after stimulation by pathogens: at the time of stimulus application (marked as '0 hr') and after the cells have been exposed to the stimulus for 30 min ('0.5hr'), 1 hr ('1 hr'), and 2 hr ('2 hr'). Since the number of TFs studied in these networks is much smaller than the number considered previously, a different simulation algorithm has been used (see 'Materials and methods'). The '0 hr' GRN was obtained under conditions comparable with the organisms discussed previously. Figure 8 shows that all the predictions of BQS are met at this time. As indicated by Figure 8A, the GRN has a very limited number of long feedback loops (only three with three or more genes), in striking contrast with the hundreds observed in randomly generated networks of the same link density. Interestingly, all of the long feedback loops depend on the transcriptional interaction between SFPI1 and E2F1 (Figure 8-figure supplement 1A-C). Notably E2F1 plays a crucial role in the cell cycle and is only transiently activated at commitment to cell division at the end of G1. Therefore, all of the long feedback loops detected are likely to be transient. 
Similarly, the number of incomplete feedback loops is very small, and much lower than would be expected in random networks (Figure 8B). Motif analysis is also consistent with BQS: there is a much higher proportion of unregulated transcription factors than would be expected by chance (Figure 8E), and the proportion of stable 3- and 4-node motifs is heavily biased towards the buffered stable forms that enhance BQS (Figure 8C,D). Additionally, the mode of cross-regulation, with transcription factors tending to be either highly regulating or highly regulated (Figure 8F), also follows the distribution predicted by BQS. Finally, the probability of creating additional long feedback loops in the network by randomly inserting a new regulatory connection is only 0.18, much lower than the value of 0.74 expected in a comparable random network. Since a full eukaryotic transcriptional response typically requires >1 hr, we expect the '0.5 hr' network to be similar to the '0 hr' one. Indeed, the '0.5 hr' network still satisfies all the predictions of BQS and strongly resembles the '0 hr' network (Figure 9A,B,G). In contrast, at 1 and 2 hr after the stimulus, a marked deviation from BQS is observed. A significant number of new long loops are created, peaking at 1 hr and declining slightly by 2 hr (Figure 9C,E); 22 long feedback loops remain at 2 hr, but interestingly all depend on the transcriptional interaction between RUNX1 and CEBPB. The probability of creating additional long feedback loops is noticeably larger at 1 and 2 hr than in the previous networks (Figure 9G). There is also a significant increase in the number of 4-node unbuffered motifs at 1 hr, though this does not persist in the '2 hr' GRN. However, some components of BQS remain unchanged in the stimulus response: incomplete feedback loops, for example, still remain preferentially short.
Discussion
Previous work on GRNs, in addition to providing important insights into specific functionalities (Kauffman et al., 2004; Albert, 2007; Karlebach and Shamir, 2008), has highlighted some important common features at both local and global scales, in particular the prevalence of certain motif patterns (Alon, 2006; Milo et al., 2002) and scale-free degree distributions (Strogatz, 2001; Barabasi and Oltvai, 2004; Albert, 2005; Buchanan et al., 2010). Moreover, evidence of evolutionary pressures acting primarily on the topology of biological networks has been observed (Tanay et al., 2005; Cross et al., 2011). The biological principles and selective pressures underpinning the emergence of these characteristics are an active area of research (Rosenfeld and Alon, 2003; Tyson and Novák, 2010; Liu et al., 2011) and the identification of general principles is of pivotal importance for the progression of our understanding. By hypothesizing that GRNs must retain stability under a wide variety of circumstances, and so display Qualitative Stability, we provide a novel, simple and powerful explanation for numerous new and previously observed features of GRNs at different scales. We show that the GRNs of six different organisms display a remarkable lack of long feedback loops, which makes them Qualitatively Stable. This means that perturbations in the strength of any individual interaction (the extent to which a TF activates one of its target genes) will not disturb the state of the network. Thus, GRNs are robust to a very wide range of perturbations.
Indeed, the selective pressure for Qualitative Stability appears to be so strong that GRNs are heavily buffered to retain this property under the random addition of new network connections, a property we term Buffered Qualitative Stability (BQS). BQS is revealed in numerous different features of GRNs: the lack of long incomplete feedback loops, the high proportion of unregulated TFs, the lack of TFs that are both highly regulated and highly regulating, and the preponderance of buffered over unbuffered motifs. As well as providing stability to immediate disturbances (such as insults that cause a TF to regulate a promoter that it does not normally interact with), BQS also provides robustness to genetic changes that occur as a consequence of sexual recombination of alleles and to mutations that occur over evolutionary time. BQS therefore enhances the evolvability of GRNs, allowing them to function reproducibly in different genetic and environmental contexts. It is worth noting that the scarcity of feedback loops in E. coli has been remarked upon before, but the selective pressures underpinning this fact were not explored. It is important to consider whether the potential under-sampling of GRNs (i.e., false negatives) and the inherent noise in data generation lead to an underestimation of the true link density in the networks, thus jeopardising the strength of our conclusions that networks satisfy BQS. However, defects in the GRN data are very unlikely to undermine our conclusions, for various reasons. First, as we have clearly shown, random networks with the same link density as the real GRNs do not satisfy BQS, whereas the real GRNs do. Second, our conclusions hold when the rates of false positives and negatives are varied in the E. coli and S. cerevisiae GRN datasets. Third, the non-cancer (GM12878) and cancer (K562) cell lines show very different BQS properties, despite having similar link densities. Since the data from both sets of cells were derived under similar conditions and with the same algorithmic tools, they would presumably have similar rates of false positives and negatives. A similar comparison can also be made between the stimulated and unstimulated dendritic cells. Robustness is a key feature of cellular behaviour, but an appropriate response time is also critical. Changes in the transcriptional profile of cells are generally slow, requiring from tens of minutes to hours (Alon, 2006). As a consequence, it may be unhelpful for a cell to propagate a signal transcriptionally through a long cascade, and this might also explain the limited number of long incomplete feedback loops observed. While it is likely that timing effects play a role in the organization of GRNs, there are several reasons for believing that robustness is still fundamental. First of all, certain transcriptional changes take place over very long time scales, indicating that a fast response time is not always necessary. Moreover, each TF in a transcriptional cascade is potentially post-transcriptionally controlled (e.g., from signalling pathways), and therefore the signal need not proceed linearly from the top to the bottom of the pathway, but instead genes at the bottom of a cascade could be activated independently.
Additionally, it should be remembered that BQS still allows a long pathway (for example A-B-C-D-E) to have a short link between the start and end (A-E in this example); when we enumerated incomplete feedback loops we considered all the possible transcriptional paths through the network, whilst a fast transcriptional propagation may only involve the shortest path. Finally, we note that several key features of BQS that we observe in GRNs, including the prevalence of buffered 3- and 4-node motifs, the lack of transcriptional hubs with similar numbers of incoming and outgoing links, and the high prevalence of unregulated TFs, are independent of signaling pathway length. While our analysis suggests multiple possible connections between the breaking of BQS and the cancerous state of the K562 cell line, it is worth pointing out that K562 was derived from a stem cell population. This raises the possibility that some amount of the breaking of BQS observed may be connected with the K562 cell line's original stemness. Further experimentation on the robustness of other cell lines will make it possible to assess the aetiology of the lack of robustness observed, and whether the loss of BQS is associated with stem-like properties of cells. Although BQS has been developed here in the context of GRNs, it provides general principles that can be used to analyse or manipulate robustness in any regulatory system, biological or otherwise. These principles are summarised by five simple rules (Figure 10):
1. Avoid long feedback loops to minimize instability arising from perturbations in network interactions (the basic principle of Qualitative Stability);
2. Favour constitutive (unregulated) nodes to reduce the potential number of loops;
3. Avoid long paths to minimize the number of 'incomplete feedback loops' and the emergence of instability due to addition of new network connections;
4. Favour buffered motifs over unbuffered motifs to reduce the potential number of loops;
5. Avoid centralized control hubs with a large number of both regulatory and regulating connections to reduce potential instability (necessitating the use of distributed control).
These rules can be used to devise highly stable networks that minimize the 'hyper-risk' inherent in global networks that are difficult to control (Helbing, 2013).
Figure 10. BQS Rules. BQS provides five general design principles applicable to any regulatory system. The rules are described in more detail in the 'Discussion' section. DOI: 10.7554/eLife.02863.037
Other biological networks
It is widely believed that feedback plays a major role in biological control (Harris and Levine, 2005; Tsang et al., 2007; Peter et al., 2012). As we have demonstrated here, BQS demands that GRNs are free of long feedback loops, even under the addition of new links. The question remains, to what extent feedback operates in post-transcriptional regulation rather than the purely transcriptional networks we have examined here. It was not possible for us to perform a similar analysis of post-transcriptional regulation because strongly validated system-wide data describing such interactions are not currently available. However, the biological literature provides some clues. Motif analysis of post-transcriptional networks reveals that BQS-compliant feedforward loops are overrepresented, while BQS-breaking feedback loops are not (Gerstein et al., 2010; The modENCODE Consortium et al., 2010; Cheng et al., 2011; Joshi et al., 2012).
This suggests that Qualitative Stability may still be an important principle governing the topology of these networks. Additionally, it has been noted that stable motifs are more common than unstable ones even in post-transcriptional signal transduction, suggesting again a role for Qualitative Stability in these networks. Nevertheless, the existence of feedback loops in post-transcriptional networks (Harris and Levine, 2005; Tsang et al., 2007) supports the idea that the severe constraints of BQS are 'loosened' to allow more responsive dynamic functionalities in post-transcriptional regulation. Consistent with these expectations, short feedback loops involving two or three genes appear to be present in some developmentally regulated gene networks (Peter et al., 2012). This raises the hypothesis that the different levels of gene regulation (transcriptional vs post-transcriptional) provide a way of segregating control modules with different robustness properties.
Implications for active transcriptional response
The robustness provided by BQS allows a transcriptional network to filter out internal or external disturbances. Such robustness is desirable under normal conditions, but can be detrimental during a transcriptional response that requires effective and fast changes in the set of transcribed genes. Consistent with this idea, our analysis indicates that a certain level of instability builds up during the response to a stimulus that triggers transcriptional changes. Quite remarkably, our results also suggest that two hours after such a stimulus the GRNs considered are 'on the verge of stability', as the deactivation of just one transcriptional interaction will make the network robust according to the BQS rules. It is worth pointing out, however, that each cell in a population is responding to a stimulus independently and potentially at a different pace. Therefore, the apparent post-stimulus instability observed may be the result of sampling transcriptional occupancy of cells at different stages of the transcriptional response. Future experimentation focused on the robustness properties of cells and single-cell GRN reconstruction will likely clarify which interpretation is correct.
Implications for evolvability and drug design
The molecular bases of evolutionary innovations are complex and poorly understood (Wagner, 2011). It is notable that the only instances where the E. coli and M. tuberculosis GRNs deviate from BQS are found in genes functionally related to stress responses. This might provide controlled instability allowing bacteria to explore new gene expression levels in response to environmental stresses, thus achieving a short-term evolutionary advantage. While other mechanisms are likely to be at play in achieving longer-term adaptations, therapeutic targeting of unstable motifs may provide a novel systems approach to drug discovery. The deviation of the human cancer cell line K562 from BQS is also very striking. This deviation allows a cancer cell, in principle, to readily change its phenotype in response to internal or external stresses, so it can explore different phenotypic states, which might help its proliferation in otherwise challenging tissue environments, or even to survive drug treatment. Nevertheless, the cancer cell line still possesses a degree of BQS far greater than that observed in a random network. This is consistent with the role of the unstable motifs in E. coli and M.
tuberculosis and supports the idea that a small breakdown of BQS might be a hallmark of recent or rapid selection pressure. We note that single-cell experiments report large changes in protein abundance occurring in individual cancer cells after drug treatment, consistent with our theory (Cohen et al., 2008). Taken together, our results indicate that BQS adds a powerful new weapon to the arsenal of network medicine (Noh et al., 2013; Pe'er and Hacohen, 2011). The discovery that the transcriptional interactions most critically involved in the formation of long feedback loops in the cancer cell line K562 are also present in the longest incomplete feedback loops in the non-cancer cell line GM12878 may provide clues to mechanisms of cancer initiation and progression. Our theory suggests that random perturbations to the GRN of normal cells will probably leave its stability unchanged. Destabilization of the GRN is likely to involve those genes that are on the edge of stability, for example genes participating in long incomplete loops. It is striking that the two gene families with the highest destabilization potential in the non-cancer cell line (JUN and FOSL) are also the two gene families with the greatest destabilizing role in the cancer cell line. These observations are consistent with the notion of cancer as a 'systems disease' that involves changes that lead to destabilisation of the GRN. If network instability is a general feature of cancer cell GRNs, analysis of this kind can potentially be used to design new anti-cancer strategies that exploit the unique weaknesses of cells lacking BQS.
Conditions for Qualitative Stability of networks and its applicability to GRN
The study of stability in qualitative networks of interacting entities was introduced in the seminal work of Quirk and Ruppert in economics (Quirk and Ruppert, 1965). Since then, the idea has been applied to different disciplines (May, 1973b; Tyson, 1975) and the theory has been slightly enhanced (Jeffries, 1974). It has also been studied within the broader field of qualitative matrix theory (Maybee and Quirk, 1969; Hale et al., 1999). While the theory developed by Quirk and Ruppert is probably the most well-known tool to study Qualitative Stability, a similar formalism is provided by the work of Puccia and Levins on loop analysis (Puccia and Levins, 1985). The theory deals with systems at equilibrium and considers the effect of small perturbations. Stable systems are characterized by the ability to react to these perturbations by returning to their original equilibrium state. In qualitative networks, each node is associated with a quantity or concentration and the presence of an arrow from node A to node B indicates that changing the concentration of A has an effect on B. The sign of the arrow indicates whether increasing A increases (positive arrow) or decreases (negative arrow) B. The absence of an arrow indicates that increasing the concentration of A does not directly affect the concentration of B. Note that self-regulation, represented by an arrow from a node to itself, is also possible. In its original form, the theory states four conditions for qualitative stability:
1. Absence of positive self-regulation;
2. Absence of double-positive or double-negative two-node feedback loops;
3. Absence of feedback loops longer than two;
4. Invertibility of the sign matrix.
Condition 1 prevents unlimited autocatalysis.
Condition 2 prevents unlimited 'collaborative autocatalysis' (double positive) and switches (double negative), but note that positive/negative two-node feedback loops are allowed. Condition 3 is less intuitive, and disallows feedback systems that may go 'out of sync', for example as a consequence of non-linearities in the interactions. Condition 4 forbids the presence of two or more nodes that affect, or are affected by, the same nodes in the same way. While conditions 1, 2 and 4 are very important from a theoretical point of view, they are of limited relevance to our analysis. For autocatalytic positive self-regulation, saturating effects on transcription (for example due to limiting numbers of RNA polymerases) along with protein degradation mean that autocatalytic TFs will reach a stable steady state rather than increasing to arbitrarily high values. Therefore condition 1 is of questionable relevance to GRNs. This argument that finite resources limit positive self-regulation has also been applied to the study of Qualitative Stability in ecosystems (May, 1973a). GRNs breaking condition 4 have no biologically plausible incarnation. In order for this condition to be violated, two different TFs must act on the network in the same way, that is, they need to regulate each other and promote and inhibit exactly the same set of genes, as this would make the columns of the sign matrix linearly dependent and therefore the matrix non-invertible. If two TFs act in the way described, they would be biochemically indistinguishable. Due to the methodology used to derive GRNs, two such genes would be collapsed into the same node. Condition 4 is therefore trivially verified. The case of double positive and double negative feedback loops is more complex. Under biological constraints relevant for GRNs, isolated double negative feedback loops (i.e., not connected with other feedback loops formed by two or more nodes) are unstable only to the extent that they form switches that can exist in one of two stable states. Double positive feedback loops are potentially capable of a more complex behaviour. However, when considered in isolation with negative self-regulation under constraints relevant to GRNs, they are likely to display a switch-like behaviour (Banerjee and Bose, 2008). Remarkably, in both GRNs for which sign information is available, these conditions are verified: in P. aeruginosa the only double positive feedback loop is isolated, and in E. coli all non-isolated two-gene feedback loops are part of the potentially chaotic motifs discussed above, and are therefore not relevant for these particular 2-node stability arguments. Hence, the only scenario in which condition 2 potentially threatens Qualitative Stability in GRNs is a 'daisy chain' of linked 2-node feedback loops; in the single case where such a daisy chain is observed (in P. aeruginosa), both 2-node feedback loops are of the positive/negative type and so result in a form compatible with Qualitative Stability. Taken together, the above considerations indicate that condition 3 is the most significant in a rigorous comparative study on the GRNs. The introduction of a time delay into a system generally leads to an increased dimensionality. It is therefore not surprising that a delay introduced into feedback loops can lead to oscillations and instability, and this is one additional reason why feedback loops containing more than two nodes are not Qualitatively Stable.
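For reference, the four conditions above can be stated compactly in terms of the sign matrix A = (a_ij) of the network, where a_ij records the sign of the direct effect of node j on node i. The formulation below is the standard Quirk-Ruppert/Jeffries presentation of sign stability, given here for convenience rather than taken from the original article:

\begin{align*}
&\text{(1)}\quad a_{ii} \le 0 \quad \text{for all } i && \text{(no positive self-regulation)}\\
&\text{(2)}\quad a_{ij}\,a_{ji} \le 0 \quad \text{for all } i \ne j && \text{(no double-positive or double-negative two-node loops)}\\
&\text{(3)}\quad a_{i_1 i_2}\, a_{i_2 i_3} \cdots a_{i_k i_1} = 0 \quad \text{for any } k \ge 3 \text{ distinct indices } i_1,\dots,i_k && \text{(no feedback loops longer than two)}\\
&\text{(4)}\quad \det A \ne 0 && \text{(invertibility of the sign matrix)}
\end{align*}

Conditions (1)-(3) constrain the signed interaction graph discussed above, while condition (4) rules out nodes that act on the network in exactly the same way.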
Gene expression is a complex multi-step process and the role of regulatory mechanisms as potential sources of delay is an active area of research (Gorgoni et al., 2014). The complexity of the problem, and the difficulty in obtaining quantitative information, limit our ability to assess the role of delays in our model. A delay that is fast compared to transcriptional regulation can be ignored due to the separation of the time scales. When this is not the case, delays are potential sources of instability, especially in the presence of feedbacks of any size. Delays may be stronger in eukaryotes than in prokaryotes, as transcription and translation are spatially separated. This may partially explain why the GRNs of yeast and GM12878 have a remarkably limited number of feedback loops of any size. A final aspect of the theory is worth noting. The theory of Qualitative Stability has been developed by means of differential equations and the conditions discussed above properly apply only to deterministic autonomous systems. This results in our model being a simplification when compared to biological networks such as GRNs. It has been observed that cells limit the noise in gene expression (Raj and van Oudenaarden, 2008), suggesting the existence of biological mechanisms that reduce the extent of stochasticity. Therefore, whilst our model is likely to be largely compatible with the biological behaviour, noise may play a role in triggering GRN reorganization in response to strong stimuli. It is also worth stressing that while stability is generally of pivotal importance, particular situations may require a fast, rather than stable, response. Therefore, when GRNs must be able to undergo changes of state, for example during development (Peter et al., 2012), or must be able to respond to external stimuli, for example to mount an immune response (Ciofani et al., 2012), the role of stability may be more limited.
A brief review of available datasets and a rationale for those chosen
The reconstruction of the GRN of an organism is a challenging task, and an active field of research (Kim and Park, 2011). Different methodologies have been developed and each of them presents advantages and disadvantages. Nevertheless, certain techniques, such as PCR and ChIP-Seq, are generally regarded as more reliable. Besides the technical problems, additional complications arise from the intrinsic workings of gene interactions. The GRN is dynamic and changes according to external and internal conditions (Harbison et al., 2004; Luscombe et al., 2004). The sources of this variability are probably diverse and additional work will be needed to assess the cellular mechanisms that shape the GRN. Moreover, the different molecular responses observed at a single-cell level (Cohen et al., 2008; Tay et al., 2010) suggest that the different stages of the cell cycle and different stress conditions may result in structurally different GRNs. Single-cell GRN reconstruction is currently beyond experimental reach, and stress is known to promote intra-population diversity (Cohen et al., 2008; Lopez-Maury et al., 2008). Therefore, where possible, we preferred to analyse GRNs obtained under rich media growth. These conditions result in low cellular stress and thus are more likely to promote homogeneity in transcriptional response. This homogeneity minimizes the errors in GRN reconstruction due to superpositions of possibly different GRNs.
No GRN currently available should be expected to be a completely faithful representation of the real interactions among the genes. However, it is reasonable to assume that networks obtained with direct biological methodologies under controlled conditions are not biased towards certain topological features, and therefore provide a good representation of the topology of the real GRN. Given these premises, it is not surprising that different GRNs are available in the literature for the same organism, and a choice had to be made to determine the datasets best suited to our analysis. We tried to select high quality datasets characterized by a statistical assessment of the interactions, a low rate of false positives, and public availability of the data. The Escherichia coli RegulonDB dataset (Salgado et al., 2012) is probably the most validated GRN available in the literature. This dataset is regularly updated to incorporate new data, and is consistently used as a basis for theoretical studies (Alm and Arkin, 2003; Milo et al., 2004). In its current state the dataset does not report the environmental conditions associated with each interaction. Therefore, it is likely that under any specific conditions only a subset of the interactions is active. Recently, the GRNs of two other prokaryotes have been published: Pseudomonas aeruginosa (Galan-Vasquez et al., 2011) and Mycobacterium tuberculosis (Sanz et al., 2011). Since these organisms are less well studied than E. coli, their GRNs should be expected to be less complete. Data on human GRNs are limited and the recent work by the ENCODE consortium (Gerstein et al., 2012) provided a unique opportunity to compare the GRNs of a non-cancerous and a cancerous cell line, studied under similar experimental conditions. Moreover, the methodology used to construct these networks (ChIP-Seq) and the carefully engineered protocol suggest a high degree of biological reality. The situation for yeast is more complex. After the work by Lee et al. (2002), other datasets have been made available. To the authors' knowledge, the work by Harbison et al. (2004) is the most exhaustive GRN derived from direct biological methods under stable conditions (rich media), and therefore the ideal workbench for a topological analysis. Other yeast GRNs have been published and used for different types of studies about the genetic bases of yeast behaviour; in this context, Luscombe et al. (2004) and MacIsaac et al. (2006) are interesting examples. Luscombe et al. (2004) extended Lee et al. (2002) by introducing additional interactions obtained under different environmental conditions, but the data available do not provide a statistical assessment for the interactions. Moreover, the data from Garber et al. (2012) raise the possibility that the GRN of yeast is different under different conditions, suggesting that the network derived by Luscombe et al. (2004) may not provide a faithful representation of an equilibrium yeast GRN. MacIsaac et al. (2006) used the data provided by Harbison et al. (2004) to construct a regulatory network encompassing different Saccharomyces species, and derived the more conserved interactions. To the authors' knowledge, the dataset discussed in Garber et al. (2012) is the only one available where the authors used ChIP-Seq to study the dynamics of transcriptional response.
Other authors either make use of gene expression data (thus removing the purely transcriptional nature of the network) or consider only a handful of transcription factors (thus making the statistical analysis discussed here inappropriate). As remarked above, the heterogeneity of the transcriptional response in a population of cells is likely to contribute to the experimental error in this type of data, and additional experimentation is important to verify our conclusions.
Derivation of the networks and statistical analysis
The E. coli GRN was constructed using version 8 of the RegulonDB (Salgado et al., 2012) available at http://regulondb.ccg.unam.mx/. The network was restricted to those interactions supported by at least two evidence codes. The validity of our approach with a different number of evidence codes is assessed in Figure 2. The S. cerevisiae GRN was constructed using the interactions reported by Harbison et al. (2004) under rich media growth (http://younglab.wi.mit.edu/regulatory_code). The network was restricted to those interactions with a p-value lower than 10^-3. The validity of our approach with different p-values is assessed in Figure 2. The human non-cancer and cancer cell GRNs were constructed using the filtered data constructed by Gerstein et al. (2012) for the GM12878 and K562 cell lines, respectively. As previously observed, cells from different tissues generally display different GRNs (Pe'er and Hacohen, 2011; Bensimon et al., 2012). Therefore, we analysed the GRNs from the two cell lines separately. These networks are encoded by the files enets8.GM_proximal_filtered_network.txt and enets7.K562_proximal_filtered_network.txt, respectively. The files are available at http://encodenets.gersteinlab.org/. The murine dendritic cell GRNs were constructed using the interactions reported by Garber et al. (2012) available at http://www.weizmann.ac.il/immunology/AmitLab/data-and-method/iChIP/data. Only the interactions between transcription factors were considered. An edge is inserted from gene A to gene B at time T if:
1. the protein product of A binds to a promoter area of B;
2. the combined score for the binding is larger than 26.9; and
3. the score for the binding at time T is larger than 26.9.
Note that the threshold value of 26.9 was extracted from the experimental procedure of Garber et al. (2012). The P. aeruginosa GRN was constructed from the dataset provided by Galan-Vasquez et al. (2011). No filtering was applied and all the interactions were considered; therefore a perceivable level of false positives and negatives is to be expected. The M. tuberculosis GRN was constructed from the dataset provided by Sanz et al. (2011). The dataset includes a list of evidence codes for each interaction. However, most interactions are supported by only one evidence code. Therefore, no filtering was applied and all the interactions were considered. Similarly to P. aeruginosa, a perceivable level of false positives and negatives is to be expected. The yeast dataset provided by Luscombe et al. (2004) does not include any systematic information on the statistical validity of the interactions. Therefore no filtering was applied and a perceivable level of false positives and negatives is to be expected. The yeast dataset provided by Lee et al. (2002) was treated in the same way as Harbison et al. (2004) and only interactions supported by a p-value lower than 10^-3 were considered for the general analysis. To provide compatibility with the statistical conditions used in Harbison et al.
(2004) and Lee et al. (2002), the yeast dataset provided by MacIsaac et al. (2006) was constructed using the file orfs_by_factor_p0.001_cons2.txt. This file is available at http://fraenkel.mit.edu/improved_map/. Table 1 reports the number of genes, the number of transcriptional interactions and the network density for the full networks, identified by (F), and for the networks composed of the transcription factors and the interactions among them, identified by (T), for all genome-wide datasets used in the article. Except when stated otherwise, random networks were generated preserving the number of transcription factors, genes, and interactions for E. coli, P. aeruginosa, M. tuberculosis, yeast, and the human cell lines. Random networks generated by preserving additional topological properties were also considered (see the subsection 'Effect of different constraints on the generation of random networks') and confirm the results discussed. For murine dendritic cells, random networks were constructed by preserving the number of transcription factors and the interactions among them. Note that consequently the number of feedback loops and incomplete feedback loops is an underestimate with respect to the data considered in the other random networks. Self-regulating interactions were ignored. Since genes not encoding for TFs do not regulate other genes, a full-network analysis would artificially increase the stability property of the network. Therefore, to limit the bias introduced by the limited number of transcription factors, the number of incomplete loops, the motif abundance, the number of regulatory connections, and the probability of adding additional long feedback loops when a new regulatory connection is inserted were computed considering only the interactions among transcription factors. For each different type of random network, 1000 instances were generated for the data presented in the main figures, while 100 instances were generated for the data presented in the figure supplements. DOI: 10.7554/eLife.02863.038. Feedback loops and incomplete feedback loops were computed by counting the number of sub-isomorphisms from the feedback or incomplete feedback loops for each GRN under consideration. For feedback loops this value was divided by the length of the loop to account for automorphisms. Note that, due to the nature of the analysis used (sub-isomorphism count), all the possible ways in which a feedback loop or incomplete feedback loop can be created in the network are considered separately. Therefore, an edge or node is generally counted multiple times. The analysis is indicative of the general structure of the network, but can lead to counter-intuitive results. Graphical motifs were computed in the usual way, but due to the different theoretical approach, no direct comparison with random networks was performed. The probability of feedback creation by random addition of an edge in real GRNs was computed by trying all the possible edge insertions between the TFs and taking the ratio of insertions that form long feedback loops over the total number of insertions. This approach was computationally unfeasible for random networks and a sampling procedure was used: for each random network, 100,000 independent random insertions of one connection between two randomly selected transcription factors were tried and the probability of interest was estimated as p = N_S/100,000, where N_S is the number of insertions that resulted in the creation of long feedback loops.
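Purely as an illustration of the two procedures just described, the following Python/networkx sketch counts long feedback loops and estimates, by sampling, the probability that one random TF-to-TF insertion creates a new long loop. It is not the authors' code: the function names and the toy network are invented here, and direct enumeration of elementary cycles is used in place of the sub-isomorphism count (for feedback loops the two agree once the division by loop length is taken into account).

# Illustrative sketch only (not the authors' implementation).
import random
import networkx as nx

def count_long_feedback_loops(grn, min_len=3):
    # Each elementary directed cycle is returned once by networkx,
    # so no further correction for automorphisms is needed here.
    return sum(1 for cycle in nx.simple_cycles(grn) if len(cycle) >= min_len)

def prob_new_long_loop(grn, tfs, n_trials=100_000, min_len=3, seed=0):
    # Monte Carlo estimate p ~ N_S / n_trials, as in the text. Re-enumerating all
    # cycles at every trial is simple but slow; a reachability test would scale better.
    rng = random.Random(seed)
    baseline = count_long_feedback_loops(grn, min_len)
    n_s = 0
    for _ in range(n_trials):
        u, v = rng.sample(tfs, 2)            # candidate new regulation u -> v
        if grn.has_edge(u, v):
            continue                          # connection already present: nothing new
        grn.add_edge(u, v)
        if count_long_feedback_loops(grn, min_len) > baseline:
            n_s += 1
        grn.remove_edge(u, v)                 # restore the original network
    return n_s / n_trials

# Toy, well-buffered TF network: one feedforward motif plus a downstream target.
toy = nx.DiGraph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")])
print(count_long_feedback_loops(toy))                        # 0: no long loops
print(prob_new_long_loop(toy, list(toy.nodes), n_trials=1000))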
Additional details on the comparison of GM12878 and K562 are presented in Supplementary file 1.
Effect of different constraints on the generation of random networks
It is common practice in statistics to use random simulations as a null model to test whether a feature can emerge with a high probability by chance. However, selecting the right type of randomness can be problematic. Network theory is no exception in this regard. Different constraints can be built into a simulation to obtain different types of random networks. To assess the role of different types of randomness on the properties of Qualitative Stability, we generated different types of random networks with different constraints, using a variable number of characteristics of the E. coli network discussed in the main article. We focused our attention on four characteristics of the real networks:
1. The number of genes (the number of vertices of the random network);
2. The number of interactions among the genes (the number of edges of the random network);
3. The number of transcription factors (the number of vertices allowed an out-degree larger than zero);
4. The absence of isolated genes (the absence of non-connected vertices).
Enforcing the number of vertices and edges is the standard constraint for random graphs: such graphs are called Erdős-Rényi random graphs (Bollobás, 2001). However, it is less common in the literature to enforce additional characteristics. We stress that including additional constraints results in a network that is less 'random'. The types of random networks that we considered are detailed in Table 2. The algorithms used to generate the different types of random networks have been implemented in R and are available as Source code. The functions used are listed in Table 3. Note that TF Fixed IG Not Allowed V1 and TF Fixed IG Not Allowed V2 use different algorithms to generate the networks:
• TF Fixed IG Not Allowed V1 randomly adds edges between TFs and genes until a network with no isolated genes has been generated. At this point, if the number of edges used is less than the number required, edges are added at random until the expected number of edges is reached. Alternatively, if the number of edges is more than the number required, edges are removed at random, in such a way as not to create isolated genes, until the expected number of edges is reached.
• TF Fixed IG Not Allowed V2 generates an initial random network constructed by connecting each gene to one TF (selected at random). This ensures that no isolated genes are present. Then, if the number of edges used is less than the number required, edges are added at random until the expected number of edges is reached.
TF Fixed IG Not Allowed V1 should be expected to be less biased, but is computationally extremely intensive, as a large number of edges usually needs to be removed. TF Fixed IG Not Allowed V2 uses a more biased procedure, but is more computationally tractable. In our analysis we focused on a minimal number of constraints, so as to be able to assess the selective pressure of robustness at all scales. A detailed analysis of the adherence to BQS of different types of random networks is beyond the scope of this article and will be the subject of future investigations. However, given the existence of widely used more advanced models, it is important to justify our choice.
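As a concrete illustration of the simpler of the two procedures described above (the variant that first connects every gene to one randomly chosen TF and then tops up with random edges), the short Python sketch below builds such a network as an edge set. It is a re-implementation for illustration only, not the authors' R source code, and the function name and example numbers are invented here.

# Illustrative sketch (not the authors' R source code) of the generation procedure that
# seeds every gene with one incoming regulation from a random TF and then adds random
# TF -> gene edges until the required edge count is reached (no isolated genes).
import random

def random_grn_no_isolated_genes(n_genes, n_tfs, n_edges, seed=0):
    # Genes are 0..n_genes-1; the first n_tfs of them are transcription factors,
    # the only nodes allowed to have outgoing edges. Self-regulation is ignored.
    assert n_tfs >= 2 and n_genes >= n_tfs and n_edges >= n_genes
    rng = random.Random(seed)
    edges = set()
    # Step 1: connect each gene to one randomly selected TF, so no gene is isolated.
    for gene in range(n_genes):
        tf = rng.randrange(n_tfs)
        while tf == gene:
            tf = rng.randrange(n_tfs)
        edges.add((tf, gene))
    # Step 2: add further random TF -> gene edges until the expected number is reached.
    while len(edges) < n_edges:
        tf, gene = rng.randrange(n_tfs), rng.randrange(n_genes)
        if tf != gene:
            edges.add((tf, gene))
    return edges

# Example with loosely E. coli-like proportions (numbers are illustrative only).
net = random_grn_no_isolated_genes(n_genes=1500, n_tfs=150, n_edges=3000)
regulated = {gene for _, gene in net}
print(len(net), len(regulated))   # 3000 edges; all 1500 genes receive at least one edge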
BQS provides predictions on GRNs at many scales, including degree distribution and motif abundance, and the clear compliance of GRNs with such predictions indicates that robustness has left striking and detectable signatures. Therefore, using a method designed to preserve features such as degree distribution is likely to result in a network carrying the seed of robustness. To test this hypothesis, we assessed the effect of the degree-preserving model commonly used in motif analysis for the E. coli GRN. This model, implemented by the function 'rewire' of igraph, reshuffles the original network in such a way as to completely preserve the original in-degree and out-degree of each single node. Our results (not shown) indicate that feedback loops are relatively uncommon in degree-preserving random networks, regardless of the number of rounds of rewiring. However, incomplete feedback loops were more abundant and longer when compared to the real GRNs, even though to a lesser extent than in purely random networks. Finally, the distributions of 3- and 4-node motifs were perturbed in such a way as to decrease the number of buffered motifs and to increase the number of unbuffered ones. However, the degree-preserving algorithm was unable to generate networks displaying an equal number of the 3- and 4-node motifs for the classes highlighted in the article, in striking contrast to a purely random model, even after 70,000 rounds of rewiring. Taken together, these observations support the idea that GRNs carry a strong signature of BQS and that random models constrained to be similar to GRNs will inherit, at least in part, many features of BQS. Our results also support the idea that purely random models may be the ideal null models to explore and highlight pervasive features of networks.
Additional files
The likelihood of observing in K562 one of the 4362 potential regulations of GM12878 is 56/4362 ≈ 0.013. The likelihood of observing in K562 one of the potentially destabilising regulations of GM12878 is 3/48 = 0.062. Finally, two of the potentially destabilising interactions of GM12878 have a particularly strong destabilization potential: an additional transcriptional regulation from JUNB to FOSL2 will create five long feedback loops, and an additional transcriptional regulation from JUNB to E2F1 will create three. Once again, our analysis highlights genes that may play a key role in cancer progression (Kouzarides and Ziff, 1989; Engelmann and Pützer, 2012).
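As a quick numerical check of the comparison above (using only the figures quoted in the text, with nothing new computed):

# Enrichment of potentially destabilising GM12878 links among those realised in K562.
p_destabilising = 3 / 48       # destabilising candidate links observed in K562
p_any_link = 56 / 4362         # all candidate TF-TF links observed in K562
print(round(p_destabilising, 3), round(p_any_link, 3))   # 0.062 0.013
print(round(p_destabilising / p_any_link, 1))            # 4.9: roughly five-fold enrichment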
Anoxia-Induced Suspended Animation in Caenorhabditis elegans
Introduction
Some of the most complex biological processes were first elucidated in somewhat "simple" genetic model organisms. For example, where would we be in our molecular and cellular understanding of gene expression, cell division, embryo development and cell death if it were not for research using E. coli, yeast, Drosophila and C. elegans? Due to the pioneering work by Sydney Brenner and others, the soil nematode C. elegans is now a well-known genetic and developmental model system (Brenner, 1974). Genetic approaches have contributed significantly to our advanced understanding of the mechanisms regulating gene function, organ development, microRNA function and signaling pathways regulating aging and stress (WormBook). The molecular advances and development of genetic tools such as RNA interference (RNAi) and protein expression analysis with the Green Fluorescent Protein (GFP) were initially worked out in C. elegans and thus firmly established this organism as a cornerstone of genetic models for unraveling the molecular mechanisms of many biological processes (Chalfie and Kain, 2006; Fraser et al., 2000; Jorgensen and Mango, 2002; Timmons and Fire, 1998). Research from many labs has clearly demonstrated that this small soil nematode has contributed significantly to our understanding of biology and that multiple types of molecular tools exist to elucidate mechanistic details. Further examples of the significant impact C. elegans has had on our understanding of biology are the fairly recent Nobel Prize awards to six individuals (S. Brenner, M. Chalfie, A. Fire, R.
Horvitz, C. Mello, J. Sulston) who made paradigm-shifting discoveries using C. elegans. Through the effort of many within the C. elegans community, the molecular tools and genetic resources available in this model system have helped to address and elucidate the molecular mechanisms regulating multiple biological processes. C. elegans, a soil nematode, was originally isolated in Bristol, England (N2 Bristol strain) (Brenner, 1974). In its natural environment oxygen levels fluctuate; therefore C. elegans likely evolved mechanisms to survive changes in oxygen levels (Lee, 1965). Indeed, it is known that C. elegans survive oxygen deprivation (hypoxia and anoxia), which has an impact on behavior, growth and development (Anderson, 1978; Padilla et al., 2002; Paul et al., 2000; Van Voorhies and Ward, 2000). C. elegans are able to sense various oxygen tensions and in fact prefer a 5-12% O2 concentration (Gray et al., 2004). A lower level of oxygen in the environment is stressful to the worm, yet the animal has evolved mechanisms to survive the stress of hypoxia and anoxia. A review by Powell-Coffman nicely provides an overview of the signaling pathways and responses involved with oxygen deprivation in C. elegans (Powell-Coffman, 2010). In C. elegans and other metazoans a key to sensing low oxygen levels within the environment is the hypoxia-inducible factor HIF-1. The transcription factor HIF-1 is needed to induce the expression of a variety of genes so that the animal may survive low oxygen environments (Jiang et al., 2001). The mechanisms regulating HIF-1 activity are being worked out in a variety of systems including C. elegans and have been reviewed in many publications (Epstein et al., 2001; Semenza, 2007; Semenza, 2010). Interestingly, for C. elegans, the level of oxygen in the environment will dictate which genes are required for oxygen deprivation response and survival. For example, HIF-1 function is required for animals to survive and maintain normal developmental functions in 0.5% and 1% O2; however, HIF-1 is not required for anoxia survival (Padilla et al., 2002). This review will focus on the response to anoxia and known mechanisms required for C. elegans to survive anoxia.
Methodology for studying anoxia response and survival in C. elegans
Given that the specific oxygen concentration affects the response C. elegans has to the environment, it is important to discuss the methodologies used to produce an anoxic environment within the laboratory setting. Typically, several laboratory methods are used to expose C. elegans to anoxia. Figure 1 provides a general review of the methodologies commonly used and the pros and cons of each method. For example, a convenient and cost-effective approach is to place C. elegans, which are grown on agar nematode growth media (NGM), into anaerobic biobags (Becton Dickinson Company). These anaerobic biobags are typically used for growing anaerobic bacteria and use resazurin as an oxygen indicator. Another approach that allows one to study subcellular responses to brief periods of anoxia, in live animals, is to use an anoxia flow-through chamber in conjunction with a spinning disc confocal microscope. This method has been valuable in following chromosome structure in embryos and oocytes within adult hermaphrodites exposed to brief periods of anoxia. If one is planning to expose a large number of animals to anoxia, the use of a hypoxia chamber that can hold many C.
elegans plates is the best approach. The use of a hypoxia chamber has been valuable for large-scale forward genetic or RNA interference screens. There are different types of hypoxia chambers: those that can be commercially purchased and those that are tailor-designed by the researcher. The chambers that are researcher-designed can be made to allow a flow-through of nitrogen to replace the air. The commercially available glove box chambers (Ex: Ruskinn Inc. anaerobic and hypoxia workstations) are useful if one needs to expose animals to oxygen deprivation for very long periods of time or if the animals need to be manipulated while exposed to anoxia. The primary disadvantage of the glove box chambers is that for some models a temperature below ambient can only be reached if the entire chamber is located within a low temperature room. Temperature during anoxia exposure is an important consideration given that temperature does influence anoxia survival rate (LaRue and Padilla, 2011). It is of interest to use more than one of these methodologies to verify that observed anoxia responses and phenotypes are consistent.
Anaerobic biobag. Advantages: inexpensive and commercially available; portable and can be placed in various temperature incubators; fairly consistent transition time to anoxia; no need for gas tanks. Disadvantages: limited number of plates can be put into the environment; a brief increase in temperature due to the chemical reaction used to remove oxygen within the biobag.
Microscope chamber. Advantages: visualize subcellular and cellular changes; in vivo analysis of GFP fusion protein markers; hypoxia experiments also feasible. Disadvantages: exposure time is limited; potential issues such as sample dehydration; small number of animals analyzed.
Flow-through chamber. Advantages: researcher designed for specific experiments; hypoxia experiments also feasible; large number of animals can be simultaneously exposed; can be placed in a specific temperature-controlled room. Disadvantages: transition time from normoxia to anoxia can vary.
Glove box chamber. Advantages: hypoxia experiments also feasible; manipulation of animals while in the anoxic environment; can immediately place animals into the environment with no transition time; large number of animals can be simultaneously exposed; commercially available from various sources. Disadvantages: costly; chamber temperature higher than ambient unless placed in a room with reduced temperature.
C. elegans as a model to study anoxia-induced suspended animation
Some organisms, including metazoans, exposed to stresses or naturally occurring signals can arrest processes such as development, cell division or heartbeat (Clegg, 2001; Mendelsohn et al., 2008; Padilla and Roth, 2001; Podrabsky et al., 2007; Renfree and Shaw, 2000; Riddle, 1988). Suspended animation is the arrest of observable biological processes induced by either a cue or stress in the environment, or a signal from within the animal. In the case of C. elegans, animals exposed to anoxia will enter into a reversible state of suspended animation in which observable biological processes, such as cell division, development, eating, egg laying, fertilization and movement, arrest until air is reintroduced into the environment. Suspended animation can be maintained for a few days, depending on the developmental stage of exposure; extended periods of anoxia exposure will lead to lethality. Figure 1 demonstrates the developmental arrest that is observed in C.
elegans exposed to anoxia. In a hypometabolic state, such as suspended animation, homeostasis is maintained until the environment required to support energy-requiring processes is restored. However, hypometabolic states, including suspended animation, are not maintained indefinitely and at some point the animal may die if the environment is not shifted to a state more conducive to supporting biological processes. Embryonic diapause, another hypometabolic state that can be either obligate or environmentally induced, is a natural survival strategy to maintain populations and maximize offspring. Embryonic diapause can be thought of as a state of suspended animation in that development and cell divisions are arrested (Clegg, 2001). An example of a vertebrate that enters into an obligate diapause is the killifish embryo Austrofundulus limnaeus. A. limnaeus embryos in diapause II are remarkably tolerant to anoxia (Podrabsky et al., 2007). The mechanistic overlap between developmental arrest induced by anoxia and naturally occurring diapause remains to be determined. At every stage of development, C. elegans survive 24 hours of anoxia exposure at a rate of 90% or greater (Foll et al., 1999; Padilla et al., 2002; Van Voorhies and Ward, 2000). The most anoxia-tolerant developmental stages are dauer larvae and embryos (Anderson, 1978; Padilla et al., 2002). The ability to survive longer bouts of anoxia depends upon developmental stage, growth temperature, diet, genotype and fertility; these factors will be elaborated on further in this chapter. In general, C. elegans are sensitive to anoxia (and hypoxia) if the temperature during exposure is increased (Ex: 28°C instead of 20°C) or if the duration of anoxia exposure is increased from one to three days (Mendenhall et al., 2006; Scott et al., 2002). After non-lethal exposures to anoxia the animals will resume biological processes such as cell division, development, eating, movement, and offspring production. How quickly animation resumes is dependent upon the anoxia exposure time. For example, an embryo that was exposed to one day of anoxia will resume cell cycle progression faster than an embryo that was exposed to three days of anoxia (Hajeri et al., 2005). Many aspects of anoxia response and survival are not understood. Listed below are questions that will be of interest to address in terms of the molecular mechanisms regulating anoxia responses.
1. What genetic factors control entry into, maintenance of and exit from anoxia-induced suspended animation?
2. What cellular changes occur in animals exposed to anoxia, and are such changes necessary and sufficient for anoxia-induced suspended animation?
3. In the embryo, how does a reduction in oxygen levels signal cell cycle arrest, and via which cell cycle machinery?
4. How are developmental programs arrested and resumed in embryos and larvae exposed to anoxia?
5. How are complex tissues, such as muscles and neurons in adults, maintained during anoxia exposure?
6. What is the metabolic state of animals exposed to anoxia relative to duration and developmental state?
7. How do metabolic changes influence anoxia-induced suspended animation?
8. What molecular mechanisms balance offspring production and anoxia stress survival?
9. How can anoxia studies in C. elegans be used to better understand oxygen deprivation sensitivity in humans?
Anoxia-induced cell cycle arrest in the developing embryo
There are known environmental changes that influence cell cycle progression. For example, UV radiation will activate cell cycle checkpoint proteins and lead to a cell cycle arrest and repair of DNA damage so that the cell can progress through cell division (Hartwell and Weinert, 1989; Nurse et al., 1998). Also, exposing cells to drugs (Ex: Taxol, nocodazole) was instrumental in the identification of cell cycle checkpoint genes. Identifying the fundamental regulation of cell cycle progression is central to the development of cancer treatments; thus, understanding how oxygen levels affect cell division is of interest. Anoxia-exposed C. elegans embryos contain blastomeres that arrest at specific positions of the cell cycle: interphase, late prophase and metaphase. The lack of anaphase blastomeres indicates that the embryos are not progressing through the cell cycle, further supporting the conclusion that these embryos are indeed arrested. The phenomenon of anoxia-induced arrest is not unique to C. elegans, since zebrafish (Danio rerio) and Drosophila melanogaster embryos also arrest cell cycle progression in response to anoxia or hypoxia (DiGregorio et al., 2001; Douglas et al., 2001; Foe and Alberts, 1985; Padilla and Roth, 2001). The use of cell biological techniques, such as indirect immunofluorescence assays or in vivo GFP fusion protein assays, showed that anoxia-arrested blastomeres have specific characteristics or hallmarks (Hajeri et al., 2005). An overview of the anoxia-induced cellular changes observed in embryos is summarized in Table 2 and discussed in detail throughout this section.
Cellular changes associated with interphase arrest
Interphase blastomeres of anoxia-exposed embryos contain chromatin that is highly condensed, and the level of condensation appears to increase with longer periods of anoxia exposure (Foe and Alberts, 1985; Hajeri et al., 2005). Chromatin condensation is characteristic of inactive chromatin and thus it is likely that a global down-regulation of gene expression occurs in anoxic embryos. A reduction in gene expression is likely a means to conserve energy and maintain metabolic homeostasis (Hochachka et al., 1996). We know little about the mechanisms that regulate arrest of interphase blastomeres, or whether interphase blastomeres arrest at a specific position of interphase (G1, S or G2). A challenge with C. elegans embryos is that the onset of gap phases may be lineage-dependent. Thus, cell lineage would likely need to be considered when trying to determine the position of interphase arrest.
Cellular changes and genes associated with metaphase arrest
Metaphase arrest or delay has been studied in other systems such as yeast or vertebrate cells in culture exposed to microtubule inhibitors. Through these studies, the spindle checkpoint pathway was identified and a greater understanding of cell cycle progression was gained. The advantage of investigating anoxia-induced metaphase arrest in C.
elegans embryos is that it can be studied in a developing organism and that oxygen deprivation is a stress the organism must have adapted to in nature. The cellular changes observed in anoxia-arrested metaphase blastomeres are influenced by anoxia exposure time. For example, depolymerization of astral and spindle microtubules and a reduction of the spindle checkpoint protein SAN-1 at the kinetochore are more extensive in embryos exposed to three days of anoxia in comparison to one day (Hajeri et al., 2005). Likewise, there is a reduction in phosphorylation of certain proteins such as Histone H3 at Serine 10 and the mitotic proteins recognized by mAb MPM-2 (Padilla et al., 2002). These cellular changes could be due to a decrease in energy levels, in the form of ATP, with increased anoxia exposure time.
In many organisms including C. elegans, the mAb 414 recognizes FG repeats of specific nucleoporin proteins that are components of the nuclear pore complex (NPC) (Lee et al., 2000). In normoxic embryos, mAb 414 will recognize the NPC of interphase, prophase and prometaphase blastomeres; the mAb 414 signal is diminished in metaphase and anaphase blastomeres and will reform in telophase blastomeres. Therefore, mAb 414 is an excellent marker for the NPC and cell cycle position. In anoxic embryos, the metaphase blastomere contains NPC aggregates recognized by mAb 414. The significance of these NPC aggregates is not known, but they are a consistent characteristic of anoxia-arrested metaphase blastomeres. All of these anoxia-induced cellular changes are reversible when the embryos are re-exposed to normoxia. The arrested metaphase blastomere will transition to anaphase and chromosome segregation will take place.
An RNAi screen for genes on chromosome I that, when knocked down, lead to an anoxia sensitivity phenotype showed that the spindle checkpoint is required for anoxia-induced metaphase arrest (Nystul et al., 2003). RNAi or genetic knockdown of the spindle checkpoint genes san-1 (mad-3/BubR1 homologue) and mdf-1 (mad-1 homologue) leads to a decrease in the viability of embryos exposed to anoxia. The san-1(RNAi) embryos as well as the san-1(ok1580) deletion mutant are sensitive to anoxia exposure. These embryos contain a dramatic decrease in the number of arrested metaphase blastomeres and an increase in nuclei with abnormal nuclear phenotypes such as anaphase bridging. These studies were the first to demonstrate that the spindle checkpoint is active in metaphase blastomeres and that a reduction in oxygen signals spindle checkpoint function. The specific signal linking a reduction of oxygen to the activation of the spindle checkpoint apparatus has not been worked out. However, there is a reduction in microtubule polymerization in anoxic metaphase blastomeres, suggesting that a decrease in microtubule structure may be the signal to the spindle checkpoint proteins to initiate an arrest of metaphase blastomeres (Hajeri et al., 2005). This is in line with the findings by others that drugs that perturb microtubule structure lead to an induction of spindle checkpoint function. Since various spindle checkpoint alleles are associated with predisposition to some cancers, the importance of spindle checkpoint function and oxygen levels in regard to human health is further underscored (Hardwick et al., 1999; Hardwick and Murray, 1995; Hartwell, 2004).
Cellular changes and genes associated with prophase arrest
In comparison to a metaphase arrest, an arrest of a prophase blastomere is less characterized. To further analyze prophase arrest, the progression from prophase to metaphase must be understood. In C. elegans, the transition from prophase to prometaphase occurs when the chromosomes are fully condensed and nuclear envelope breakdown (NEBD) begins (Dernburg, 2001; Moore et al., 1999; Oegema et al., 2001). The progression of NEBD, which is a commitment to mitosis, can be followed using cellular analysis to detect nucleoporins, which are components of the nuclear pore complex. In an anoxia-induced prophase-arrested cell, the process of NEBD and the transition to prometaphase are arrested.
To further understand prophase arrest, two main approaches have been taken. First, cell biological analysis of nuclear structures was conducted to characterize the prophase arrest. Second, RNAi screens and analysis of genetic mutants were conducted to identify genes required for anoxia-induced prophase arrest. These approaches are of interest for identifying molecular changes in the arrested prophase blastomere and genes essential for anoxia survival.
A hallmark of an anoxia-arrested prophase blastomere is that the condensed chromosomes associate with the inner nuclear periphery; we refer to this phenotype as "chromosome docking" (Table 2) (Hajeri et al., 2005). Interestingly, anoxia-induced chromosome docking occurs both in the somatic cells of the developing embryo and in the oocyte of an adult hermaphrodite exposed to anoxia (Hajeri et al., 2010). In the embryo exposed to anoxia, the chromosomes will condense prior to movement to the inner nuclear periphery. The chromosomes will remain docked at the inner nuclear membrane until the embryo is returned to a normoxic environment. This is in contrast to the normoxic embryo, in which the chromosomes move throughout the nucleus until NEBD occurs. In anoxia-exposed adult hermaphrodites, the oocytes, which are in prophase I of meiosis, contain bivalent condensed chromosomes that localize to the inner nuclear periphery. In contrast, the oocytes of normoxic controls contain bivalent chromosomes that localize throughout the nucleus (Hajeri et al., 2010). Drosophila embryos exposed to anoxia also contain chromosomes that associate with the inner nuclear periphery, indicating that chromosome docking in response to anoxia is not just a C. elegans phenomenon (Foe and Alberts, 1985). The relevance of chromosome docking in blastomeres exposed to anoxia is unknown, but it is possible that chromosome docking is a means to maintain chromosome integrity or function during anoxia exposure. While much is known about chromosomal territories in the interphase nucleus, little is understood about chromosome location in prophase cells (Cremer et al., 2000; Geyer et al., 2011). It is not known if the mechanisms that regulate chromosome territories in interphase cells overlap with those regulating chromosome docking in arrested prophase blastomeres.
Given that anoxia induces chromosome docking in prophase blastomeres, indirect immunofluorescence has been used to characterize proteins associated with the nuclear membrane and chromosomes. Cell biological analysis shows that nuclear structures are altered in arrested prophase blastomeres relative to normoxic control embryos. Note that a relevant aspect of C.
elegans chromosomes is that they are holocentric rather than monocentric in nature; this allows detailed cell biological analysis of chromosomes and chromosome-associated proteins. Using an antibody that recognizes the kinetochore protein HCP-1 (CENP-F like), we determined that the kinetochore is altered in anoxia-arrested prophase blastomeres (Figure 2A). HCP-1 associates with chromosomes of normoxic prophase blastomeres but is not detected on the chromosomes of anoxia-arrested prophase blastomeres until embryos are returned to a normoxic environment (Figure 2A). In anoxia-arrested metaphase blastomeres HCP-1 is detectable, indicating that the kinetochore changes observed in anoxic blastomeres are dependent on the stage of mitosis (Hajeri, 2005). Lamin is an important inner nuclear protein that functions to maintain nuclear membrane structure and function. It is a target of post-translational modifications by CDK-1 during the complex process of cell cycle progression through mitosis (D'Angelo et al., 2006; De Souza et al., 2000; Gong et al., 2007; Heald and McKeon, 1990). In C. elegans, Ce-Lamin is localized to the inner nuclear membrane and nucleoplasm in normoxic prophase blastomeres. However, in the prophase blastomeres of anoxia-exposed embryos, Ce-Lamin is primarily localized to the inner nuclear membrane and there is a reduced level in the nucleoplasm (Figure 2B). The significance of reduced lamin in the nucleoplasm of anoxic blastomeres is not understood but does reflect alterations within the nucleus of anoxia-exposed embryos.
Antibodies that recognize nucleoporins associated with the nuclear pore complex can be used to monitor the nuclear envelope relative to cell cycle position (D'Angelo and Hetzer, 2008). We did not notice a substantial change in the NPC of prophase-arrested blastomeres when assayed using mAb 414 (Table 2). However, using a commercially available antibody raised against human NUP50, we found evidence that an anoxia-arrested prophase blastomere differs from a normoxic prophase blastomere. The late prophase blastomeres of embryos exposed to normoxia have a reduced level of NPC detected by anti-human NUP50, which is suggestive of NEBD occurring (Figure 3, arrow). Yet, in the anoxia-arrested prophase blastomere, anti-NUP50 signal was present, suggesting that NEBD is not occurring and may be arrested (Figure 3, arrowhead). Thus, a plausible mechanism to induce prophase arrest is to prevent NEBD and thus the transition to prometaphase. In both normoxic and anoxic embryos, anti-NUP50 recognizes interphase nuclei in a similar manner.
Genetic analysis has been instrumental for identifying processes that regulate cell cycle arrest and progression. Previously, we determined that knockdown of the nucleoporin protein NPP-16/NUP50 by RNAi or genetic mutation results in a decrease in embryos that survive anoxia exposure (Hajeri et al., 2010). Additionally, in the anoxia-exposed npp-16(RNAi) embryo, there is an increase in abnormal nuclei and a reduction in arrested prophase blastomeres (Figure 4). The number of arrested metaphase blastomeres is not significantly different from wild-type embryos exposed to anoxia, indicating that npp-16 function is required specifically for prophase arrest (Hajeri et al., 2010). What is the role of NPP-16 in anoxia-induced prophase arrest? A key to addressing this question was noting that the predicted NPP-16 human homologue NUP50 was shown to interact with p27Kip1, a CDK inhibitor, suggesting a role for NUP50 in cell cycle regulation (Smitherman et al.,
2000). In mammalian cells, CDK-1/cyclin B regulates the G2/M transition and NEBD by targeting a multitude of substrates (Lindqvist et al., 2009; Lindqvist et al., 2007). In C. elegans embryos, the NPC protein gp210, which is phosphorylated by CDK-1/cyclin B, is important for the depolymerization of lamin and required for NEBD (Galy et al., 2006). Data suggest that NPP-16 and CDK-1 have a role in anoxia-induced prophase arrest and that anoxia-induced arrest of NEBD is compromised in npp-16 mutants. First, the use of antibodies that recognize CDK-1 showed that in wild-type embryos exposed to anoxia the protein is localized near the chromosomes in prophase blastomeres; this localization is reduced in the npp-16 mutant exposed to anoxia. Second, an antibody that recognizes the inactive form of CDK-1 (anti-CDK-1 P-Tyr15) localized to prophase blastomeres of anoxic embryos but was absent from the prophase blastomeres of normoxic controls or of npp-16 embryos exposed to anoxia. This indicates not only that CDK-1 is regulated differently in anoxic embryos but also that this regulation differs in the npp-16 mutant, which does not arrest properly in response to anoxia. Although the specific signaling between NPP-16 and CDK-1 is not yet understood, this work does provide evidence that anoxia influences cell cycle machinery.
Chromatin modifications have major effects on chromatin structure and function (Geyer et al., 2011). Modifications of histones are highly conserved in eukaryotes and influence many cellular processes such as gene expression and chromosome condensation. For example, the phosphorylation of histone H3 at Serine 10 is known to correlate with mitotic and condensed chromosomes. Previously, we showed that the phosphorylation of histone H3 at Serine 10 is reduced in mitotic blastomeres of embryos exposed to long-term anoxia. Alteration in the phosphorylated state of proteins may reflect that energy-requiring processes are reduced in anoxia and that cellular signals change in anoxia-exposed embryos. Acetylation of histones is another example of how post-translational modifications regulate cellular functions. Histone acetylation and deacetylation by Histone Acetyl Transferases (HATs) and Histone Deacetylases (HDACs), respectively, modulate chromatin and influence gene expression via the addition or removal of acetyl groups on histones (Ferrai et al., 2011).
There are several C. elegans genes that are involved with histone modifications, and many of these genes are essential. We found that knockdown of the gene hda-2 by RNAi does not lead to embryo lethality or obvious defects in normoxic embryos. However, when these embryos are exposed to anoxia there is an increase in abnormal nuclei (Figure 5). The specific role hda-2 has in anoxia response and survival in the embryo needs to be further analyzed. There is evidence that modulation of electron transport chain (ETC) activity has a role in cellular arrest. Exposure of embryos to ETC inhibitors (Ex: sodium azide) leads to cell cycle arrest and docking of prophase chromosomes. However, the embryos do not remain arrested and die within an hour of exposure (Hajeri et al., 2010). Thus, ETC inhibitors do not
phenocopy anoxia exposure, suggesting that anoxia-induced suspended animation is only partially regulated by changes in the ETC. Unanswered questions regarding anoxia-induced cell cycle arrest include: What is the specific signal between reduced oxygen levels and docking of prophase chromosomes? Is chromosome docking essential for anoxia survival? How are cell cycle checkpoint proteins regulated in the anoxia-exposed embryo? Further genetic analysis of C. elegans embryos exposed to anoxia can lead to answers to these questions.
Metabolic and environmental changes that influence anoxia survival in the embryo
In the embryo exposed to anoxia, not only does the cell cycle machinery need to respond to the reduction in oxygen levels but metabolic pathways must do so as well. Embryos exposed to anoxia have a reduction in the ATP/AMP ratio; this reduction will affect metabolic pathways (Padilla et al., 2002). Given the central importance of carbohydrates to metabolism, carbohydrate homeostasis is likely to be important for anoxia survival. Indeed, it was found that sugar levels are altered in anoxia-exposed embryos. For example, glycogen levels decreased to approximately 20% of initial levels after a 24-hour exposure to anoxia (Frazier and Roth, 2009). A sufficient level of carbohydrates, perhaps in the form of glycogen, is likely important for maintaining metabolism during anoxia exposure. The gene gsy-1 encodes glycogen synthase, and when this gene is knocked down by RNAi the animals have reduced glycogen stores and are sensitive to anoxia (Frazier and Roth, 2009). Mutants in other genes that have reduced glycogen levels were also sensitive to anoxia, further supporting the idea that glycogen homeostasis is important for anoxia survival. The Frazier and Roth study also demonstrated that the environment to which the parent is exposed can influence the anoxia survival rate of its embryos. For example, when L4 larvae develop to gravid adulthood in a high-salt environment (300 mM sodium chloride), their embryos are sensitive to anoxia (Frazier and Roth, 2009). It is likely that alterations of other central metabolic macromolecules are important for anoxia survival, and the use of C. elegans genetics to alter metabolic pathways will shed light on the metabolic pathways required for anoxia survival.
Embryos are able to survive anoxia and hypoxia (0.5% O2), yet when exposed to severe hypoxia (100 to 1000 ppm of O2) embryos will die (Nystul and Roth, 2004; Padilla et al., 2002). Whereas anoxia induces suspended animation, a 0.5% O2 hypoxic environment is sufficient to support developmental activities. It is possible that the embryos exposed to 100 to 1000 ppm of O2 are exposed to oxygen levels that are too high to induce suspended animation but not high enough to support normal growth and development. In the initial experiments with embryos exposed to 100 to 1000 ppm of O2, the embryos were on media and not within the adult (Nystul and Roth, 2004). However, when gravid adults were exposed to 100 to 1000 ppm of O2, the embryos within the uterus survived by arresting (Miller and Roth, 2009). These results indicate that the O2 microenvironment differs between the uterus of an adult and the surface of NGM media and that embryos differentially respond to O2 levels.
Anoxia tolerance in adult animals
C.
elegans adult animals have been useful for understanding the genetic and physiological responses to oxygen deprivation, particularly because of the mechanistic overlap in oxygen deprivation responses between C. elegans and other metazoans, including humans (Powell-Coffman, 2010). Several unique characteristics of adult C. elegans make it a valuable model to study responses to oxygen deprivation. First, the adult animal has a relatively simple anatomy, easily observable somatic tissues and meiotic cells. These tissues, which are amenable to analysis, include muscle, neurons, intestinal cells and a very well studied germline that contains meiotic cells that give rise to oocytes and sperm in the hermaphrodite. Second, C. elegans has been used by many within the community to study genes involved with stress responses and lifespan; these studies allow investigators to identify overlapping and distinct mechanisms between stress responses and lifespan. Finally, C. elegans, as a soil nematode, has adapted to changing oxygen levels in the environment. Taken together, the anatomical, genetic and environmental niche characteristics of C. elegans provide a unique opportunity to identify the ways in which this simple model responds to and survives oxygen deprivation. Such information can aid in our understanding of why species do or do not have limitations in oxygen deprivation response and survival.
Metazoans, including C. elegans, possess complex biochemical mechanisms that operate at the cellular level to promote oxygen deprivation tolerance (O'Farrell, 2001). These adaptations allow anaerobiosis in severe hypoxia and anoxia through a range of physiological responses that operate via three general strategies: increasing the rate of flux through glycolytic pathways, decreasing overall energy demand by a rapid reduction in metabolic rate, or activating physiological mechanisms that increase the efficiency of oxygen removal from the environment (Hochachka, 2000; Hochachka et al., 1996). The execution of these strategies involves modulation of a wide range of cellular pathways. For example, during oxygen deprivation animals switch energy source molecules from fat, which is primarily utilized for aerobic energy metabolism, to glycogen/glucose stores. During a 24-hour anoxia exposure as much as two-thirds of the animal's carbohydrate reserve may be utilized as an energy source; this usage nearly accounts for the mass of glycosyl units of metabolites produced during the oxygen deprivation period (Foll et al., 1999). C.
elegans frequently encounters oxygen-deprived microenvironments in its natural habitat, and adult animals have adapted to tolerate these exposures. Wild-type hermaphrodites that are 1 day old (one day after the L4 larval to adult molt) survive 24 hours of anoxia at 90% (20°C) when assayed on solid NGM medium (Padilla et al., 2002; Van Voorhies and Ward, 2000). Interestingly, Foll et al. (1999) reported a higher mortality for adult worms exposed to 24 hours of anoxia (20°C) and a subsequent sharp rise in mortality for slightly longer exposures when assayed in liquid culture. The discrepancy between the reported survival rates of the two studies is likely due to differences in methodology. One possibility is that the process of crawling across agar medium requires less energy than swimming in liquid medium. If so, the additional energy expenditure while swimming in liquid media may compromise anoxia tolerance. While adult hermaphrodites are anoxia-tolerant, the survival rate plummets as the duration of anoxia is lengthened (Mendenhall et al., 2006; Mendenhall et al., 2009; Padilla et al., 2002). The 1-day-old adult has a markedly decreased survival rate (4.7%) in long-term anoxia, defined as a 72-hour or longer anoxia exposure at 20°C, demonstrating that there is an anoxia survival limitation (Mendenhall et al., 2006). The anoxia-survival limitation is taken advantage of to identify genetic mutations that lead to anoxia sensitivity (mutants that cannot survive 24 hours of anoxia) and anoxia tolerance (mutants that can survive long-term anoxia, > 3 days of anoxia).
The adult anoxia-tolerance strategies include the worm entering a reversible state of suspended animation. In this state adults do not feed, do not lay eggs and cease to be motile. The process of crawling has been reported to carry a relatively low metabolic cost to the worm compared to the high cost of reproduction and tissue maintenance, and this assessment is supported by the observation that animals whose metabolic rate has been reduced by greater than 90% do not show abnormal motility (Van Voorhies and Ward, 2000; Vanfleteren and De Vreese, 1996). The length of time animals remain active after the onset of anoxia varies among C. elegans strains. The majority of wild-type adults cease movement within 8 hours of the onset of anoxia (Mendenhall et al., 2006). However, the daf-2(e1370) animal, which is a long-term anoxia tolerant mutant strain and carries a reduction-of-function mutation in the insulin-like receptor (see section 6 below), will delay entering into suspended animation, as demonstrated by observable movement after 24 hours of anoxia. Although movement is observed in the daf-2(e1370) animal exposed to anoxia, it is slower than in normoxic controls (Mendenhall et al., 2006). To date no mutation has been isolated that prevents the worm from entering into a state of suspended animation.
The cylindrical body and simple gut design of the worm favor rapid diffusion of gases across both the gut lumen and cuticle into the metabolically active intestine. C. elegans is an oxygen regulator and seems to be insensitive to hyperoxia (Van Voorhies and Ward, 2000). However, when confronted with oxygen deprivation the worm must respond by either remaining animated or entering into suspended animation, the determining factor often being oxygen tension and perhaps metabolic state (Nystul and Roth, 2004). It has been observed that animals remain active in hypoxia but enter suspended animation in anoxia.
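To make the survival-based screening criterion described above concrete, the short sketch below (Python) shows one way a strain could be provisionally called anoxia sensitive or long-term anoxia tolerant from simple alive/dead counts scored after 24-hour and 72-hour exposures. The function names, cutoffs and example counts are our own illustrative assumptions, not values taken from the cited studies.

def survival_rate(alive, dead):
    """Fraction of scored animals that survived a given anoxia exposure."""
    total = alive + dead
    return alive / total if total else float("nan")

def classify_strain(alive_24h, dead_24h, alive_72h, dead_72h,
                    sensitive_cutoff=0.5, tolerant_cutoff=0.5):
    """Rough screening call: 'anoxia sensitive' if 24 h survival falls well below
    the ~90% expected for wild type; 'long-term anoxia tolerant' if 72 h survival
    greatly exceeds the ~5% seen for wild-type hermaphrodites.
    The cutoffs are arbitrary illustrations, not published criteria."""
    s24 = survival_rate(alive_24h, dead_24h)
    s72 = survival_rate(alive_72h, dead_72h)
    if s24 < sensitive_cutoff:
        return "anoxia sensitive", s24, s72
    if s72 > tolerant_cutoff:
        return "long-term anoxia tolerant", s24, s72
    return "wild-type-like", s24, s72

# Example: a hypothetical strain that survives 24 h but not 72 h of anoxia.
print(classify_strain(alive_24h=95, dead_24h=5, alive_72h=4, dead_72h=96))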
Which factors are critical in the molecular decision to suspend or continue processes such as movement, and how these factors are regulated at the cellular and tissue level, remains unclear. Nevertheless, valuable information regarding genes required for both hypoxia and anoxia survival has been gleaned (Jiang et al., 2001; Padilla et al., 2002; Scott et al., 2002). For example, among the adaptations adults possess is the ability to sustain a steady metabolic rate even when exposed to a range of decreasing oxygen tensions; not until ambient oxygen tension falls to 3.6 kPa will metabolic rates begin to drop for young adults (Anderson and Dusenbery, 1977; Suda et al., 2005; Van Voorhies and Ward, 2000). However, once the environment becomes anoxic, metabolic rate drops to as low as 5% of that in normoxic conditions and recovers in a slow, linear fashion only after removal from anoxia (Van Voorhies and Ward, 2000).
The anatomical and physiological impact of anoxia exposure
While in a state of suspended animation, the immobile C. elegans often adopt linearly extended bodies or a bent or curved-sickle shape (Figure 6, arrow). Upon re-oxygenation survivors will resume movement, and the overwhelming majority exposed to 24 hours of anoxia will move normally several hours post-recovery (Figure 6B). Initial movement begins with slight side-to-side movement of the anterior head region, then slowly spreads to include the entire head region. As recovery progresses the worm regains the ability to move the mid-body and tail regions in the classic sinusoidal motion and resumes foraging and egg-laying. Recovery from long-term anoxia takes longer, and not all physiological processes appear to resume at the same rate. For example, in the few wild-type animals that survive long-term anoxia, contraction of the somatic gonad sheath and ovulation have been observed within 12 hours post-anoxia, which is often before full body motility has resumed.
Recovery of anatomical and organ function at different rates may compromise the viability of the animal. For example, if ovulation precedes the ability to lay eggs, the accumulated eggs within the uterus may lead to embryos hatching within the uterus (bagging-out phenotype) and further compromise organs such as neurons, muscles or the intestine. It is possible that long-term anoxia survivors are better able to resume anatomical and physiological processes. The extent to which animals regain motility after anoxia exposure can vary within an experimental cohort and is influenced by duration of anoxia and genotype (see section 6).
As the duration of anoxia exposure is lengthened, the number of individuals that regain normal motility decreases (as visualized via a dissection microscope). In regard to wild-type adult animals, recovery within minutes can be observed after a 24-hour anoxia exposure. However, the few wild-type animals that survive long-term anoxia may not begin moving for several hours after re-oxygenation. This variability has been a useful tool in assessing the post-anoxia condition of survivors. Several approaches have been described for categorizing worm locomotion phenotypes (Gerstbrein et al., 2005; Herndon et al., 2002; LaRue and Padilla, 2011). In each system animals are categorized based on their movement and response to touch. For example, an animal is scored as dead if it is not moving and does not respond to gentle touch with a platinum wire. If the animal moves in response to touch it is scored as alive and then further classified based on level of motility; this classification provides an assessment of how well the animal moves after anoxia exposure. That is, animals capable of completing the typical sine wave motion similar to that of untreated adults are classified as having "unimpaired" movement, while animals that move abnormally or move only a portion of the body are classified as having "impaired" movement. Utilizing this method one can assess how well an animal tolerates anoxia by monitoring its ability to recover and execute the fundamental process of movement. Often the impaired worms do not move about and will consume the bacteria nearby, leaving a fan-shaped area emptied of food (Figure 6D). The underlying cause of anoxia-induced impairment (Ex: a compromise of muscle and/or neuronal function) is yet to be determined.
It is also possible that post-reoxygenation impairment is due, at least in part, to the organism's inability to execute the processes required for the maintenance of cellular integrity, thus leading to a loss of tissue structure (Mendenhall et al., 2006). Wild-type animals recovering from long-term anoxia have an overall loss of tissue structure in the head region, which contains both neurons and muscle. Along with distortions in the muscle isthmuses of the pharynx, relatively large vacuoles or cavities also appear throughout the soma (Figure 7). However, long-term anoxia tolerant strains do not show the same tissue disorganization and appear to be better able to maintain tissue structure. This is presumably accomplished by either sustaining a homeostatic physiology during the anoxic period or by activating tissue maintenance and repair pathways post-reoxygenation.
In addition to the loss of coordinated movement and incurring tissue damage, anoxia exposure affects multiple aspects of fertility. When eggs are exposed to 24 hours of anoxia and then allowed to mature to adulthood, the onset of first reproduction is significantly delayed compared to normoxic controls. The anoxia-treated animals also have a reduced reproduction rate and reduced fecundity compared to untreated controls (Van Voorhies and Ward, 2000). It is possible these changes are the result of a programmed response to the stress or may simply be the result of damaged meiotic cells.
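As a schematic illustration of the locomotion scoring described above (dead, impaired, unimpaired), the minimal Python sketch below encodes the decision logic. The field names and the example cohort are hypothetical and are not drawn from the cited scoring systems; it is a sketch of the published logic, not a reproduction of any one method.

def score_locomotion(moves_spontaneously, responds_to_touch, sine_wave_motion):
    """Categorize one post-anoxia animal from dissection-scope observations.
    dead:       no movement and no response to gentle touch with a platinum wire
    impaired:   alive but moving abnormally or moving only part of the body
    unimpaired: completes the typical sinusoidal movement of untreated adults"""
    if not moves_spontaneously and not responds_to_touch:
        return "dead"
    if sine_wave_motion:
        return "unimpaired"
    return "impaired"

# Tally a small hypothetical cohort recovered from long-term anoxia.
cohort = [
    {"moves_spontaneously": False, "responds_to_touch": False, "sine_wave_motion": False},
    {"moves_spontaneously": True,  "responds_to_touch": True,  "sine_wave_motion": False},
    {"moves_spontaneously": True,  "responds_to_touch": True,  "sine_wave_motion": True},
]
counts = {}
for animal in cohort:
    category = score_locomotion(**animal)
    counts[category] = counts.get(category, 0) + 1
print(counts)  # {'dead': 1, 'impaired': 1, 'unimpaired': 1}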
Environmental factors affect anoxia tolerance
The environment to which an organism is exposed can have profound effects on phenotype. Environmental changes that induce a stress response include changes in temperature, water availability, diet and nutrient content, oxygen levels and exogenous molecules including toxins or pharmaceutical agents. If an organism is exposed to these factors during development and/or adulthood, the responses to and ability to survive the stress can be affected. In some cases a preconditioning environment, a low level of stress, can improve stress tolerance.
Fig. 7. Long-term anoxia exposure results in tissue abnormalities. One-day-old adult wild-type hermaphrodites were exposed to 72 hours of either normoxia or anoxia followed by 24 hours of recovery in normoxia at 20°C. Anoxia-treated animals, in comparison to controls, have an overall more grainy appearance and cavities/vacuoles (white arrows) in the pseudocoelom and head region. In addition, the anoxia-exposed animal shows accumulated fluid (black arrows) around the gut, intestine and pharynx. The unexposed animal has a normally structured intestinal lumen, which forms a large atrium-like cavity (A) at the anterior end of the gut at the pharynx-lumen juncture. In contrast, the anoxia survivor has bends in the pharynx and an intestinal lumen that is constricted and distorted, forming irregular jagged kinks. The white line traces the lumen of the gut from the anterior bulb of the pharynx through the first intestinal cell. The intestinal cells of the anoxia-exposed worm are also packed with many very small intracellular globules (white arrowhead). The germline morphology is also affected by the anoxia stress. Asterisks mark the nuclei of some of the oocytes, which are abnormally stacked well beyond the gonad bend in the anoxia-treated animal, compared to the syncytial germ cells (labeled GCs) visible in the distal gonad of the control animal. Both images are composites of three individual frames reassembled using Adobe Photoshop CS. Scale bar = 20 µm.
In the laboratory C. elegans are typically grown at 15-20°C and provided the E. coli OP50 strain as a food source. There is evidence that the environment to which the animal is exposed will precondition for an enhanced anoxia survival phenotype by altering the physiology of the animal (LaRue and Padilla, 2011). Wild-type C. elegans grown at 20°C and fed the E. coli OP50 diet are very sensitive to long-term anoxia in that the majority of the animals die, and among those that survive most have an impaired phenotype. However, if the animal is grown at 25°C and fed the E. coli HT115 strain throughout larval development there is a significant increase in long-term anoxia survival and an unimpaired phenotype. The animals grown at 25°C and fed the E. coli OP50 strain also survive long-term anoxia, yet many have an impaired phenotype in that they display visible defects in motility and tissue morphology. These data suggest that growth at 25°C and a diet of E. coli HT115 during development may synergistically enhance anoxia survival for C. elegans. It is possible that the 25°C temperature induces stress response genes (Ex: heat shock proteins) that prepare the animal to survive long-term anoxia. Alternatively, the preconditioning environment could alter energy stores, leading to an increase in anoxia survival. There is evidence to support the idea that metabolic stores are altered in C. elegans raised at 25°C and fed the E. coli HT115 diet during development.
First, it is known that the E. coli HT115 strain has higher carbohydrate levels in comparison to the OP50 strain; this may influence the metabolism of the worm (Brooks et al., 2009). Second, staining with carminic acid, which is used to detect carbohydrate stores within the intestine, indicates that carbohydrate levels are increased in the intestine of animals grown at 25°C and fed the E. coli HT115 diet in comparison to those raised at 20°C (LaRue and Padilla, 2011). Further analysis is needed to determine mechanistically how preconditioned metabolic and physiological changes within the nematode contribute to the enhanced long-term anoxia phenotype.
While 25°C is not typically considered a stressful temperature for C. elegans, some physiological processes are likely different between animals grown at 25°C or greater and those grown at 15-20°C. An increased temperature such as >28°C is likely stressful to the animal. It is known that exposing animals to one day of anoxia or severe hypoxia at 28°C instead of 20°C leads to a markedly decreased survival rate (Mendenhall et al., 2006; Scott et al., 2002). Animals that are exposed to more than one stress at a time typically have a decrease in survival. However, such a condition has been useful in identifying genetic changes that increase viability in this environment. Alleles that affect the insulin signaling pathway increase viability when animals are exposed to anoxia and 28°C (Scott et al., 2002). The insulin signaling pathway is known to be important for many stress responses, and it will be of interest to tease apart the molecular factors that are specific to anoxia response and survival.
Genetic factors associated with anoxia survival in adult animals
A major strength of the C. elegans model system is the ability to dissect biological processes using a genetic approach. The use of forward genetic screens, RNAi genomic screens, suppressor or enhancer screens, and analysis of transgenic fusion proteins is fundamental for discovering mechanisms regulating biological processes. These approaches have been used by many within the C. elegans community to unravel the mechanisms regulating complex processes such as programmed cell death, aspects of embryo and larval development, chemosensing, dauer development, meiosis and many other processes. C. elegans is also a wonderful model to study stress responses, environmental and genetic factors that influence lifespan, and the overlap between mechanisms regulating stress responses and lifespan. In terms of the biology of the adult hermaphrodite, it contains meiotic cells that can differentiate into oocytes and sperm. The hermaphrodite, which produces approximately 300 offspring, typically lays the majority of its embryos within the first two days of adulthood, with offspring production tapering off after the third day of adulthood. Males exist within the population, thus genetic crosses can be conducted. In typical laboratory conditions the adult hermaphrodite has an average lifespan of approximately 15-20 days. The capability to manipulate genotypes, a rapid lifecycle and the capacity to produce a large number of offspring make C. elegans an excellent system to analyze stress responses in relationship to specific biological functions, such as germline and metabolic capacity. Here we expand upon the genetic approaches used to analyze anoxia responses in adult C.
elegans. Table 3 summarizes the genotypes discussed further in this chapter and their respective phenotypes relative to anoxia tolerance, germline function and lifespan.
Reduction in insulin-like signaling favors anoxia tolerance
Arguably the most well known pathway in the study of anoxia tolerance in C. elegans is the insulin/IGF-1-mediated signaling (IIS) pathway. Research led to the identification and characterization of the genes functioning in the insulin-like signaling pathway, revealing that the pathway regulates metabolism, lifespan, stress responses and the development of the dauer state, a stress-resistant larval stage in diapause (Gottlieb and Ruvkun, 1994; Kenyon et al., 1993; Kimura et al., 1997; Riddle et al., 1981; Tissenbaum and Ruvkun, 1998). Much is known about the genes that function in the dauer formation (daf) pathway. The dauer regulatory pathway involves the daf-2 and daf-16 genes, which encode the insulin/IGF-1 receptor-like protein and a forkhead transcription factor, respectively (Kimura et al., 1997; Larsen, 2001; Larsen et al., 1995; Lin et al., 1997; Riddle et al., 1981). It is thought that DAF-2 interacts with a variety of insulin-like ligands and sends a signal via the AGE-1/PI3K/AKT signaling pathway to repress the translocation of the transcription factor DAF-16 into the nucleus (Kenyon, 2010). The activation of functional DAF-2 results in phosphorylation of cytosolic DAF-16, an action that prevents its translocation into the nucleus, keeping it sequestered in the cytoplasm. However, when signaling through the IIS pathway is reduced, for example during periods of food deprivation or in the presence of null or severely reduced-function forms of the DAF-2 receptor, this inhibition does not occur and DAF-16 is translocated into the nucleus of the cell, where it is thought to link with other nuclear factors to induce expression of a variety of genes in a coordinated manner to promote dauer formation, longevity, fat metabolism, stress response, innate immunity and anoxia tolerance (Kenyon et al., 1993; McElwee et al., 2003; Mendenhall et al., 2006; Murphy et al., 2003; Oliveira et al., 2009).
The daf-2(e1370) mutant allele confers a greatly extended lifespan (from 18 to 42 days) when worms are grown early in development at a permissive temperature (functional DAF-2 is present) and then shifted to the non-permissive temperature (non-functional DAF-2) at the L4 stage of development, or when grown continually through development at 20°C (Kenyon et al., 1993). In addition to modulating lifespan, the daf-2(e1370) allele also confers various stress responses including long-term anoxia tolerance (Mendenhall et al., 2006). Both exceptional phenotypes are DAF-16 dependent, and animals carrying a null allele of daf-16 show no extension of lifespan and are long-term anoxia sensitive (survival = ~2%). It is noteworthy that factors influencing stress response and lifespan have both common and distinct genetic signals. Further investigation of the overlap in these pathways is of interest to the study of anoxia response and tolerance.
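The core logic of this pathway, as described above, can be summarized in a deliberately simplified toy model (Python). It is a sketch of the signaling logic only, with our own function names; it omits the many ligands, kinases and cofactors involved and should not be read as a quantitative model.

def daf16_localization(daf2_signaling_active):
    """Toy model of the IIS logic described above: active DAF-2 -> AGE-1/PI3K/AKT
    signaling phosphorylates DAF-16 and keeps it cytoplasmic; reduced signaling
    allows DAF-16 to enter the nucleus."""
    return "cytoplasmic" if daf2_signaling_active else "nuclear"

def predicted_outputs(daf2_signaling_active):
    """If DAF-16 is nuclear it is thought to promote dauer formation, longevity,
    stress responses and anoxia tolerance (greatly simplified)."""
    if daf16_localization(daf2_signaling_active) == "nuclear":
        return {"dauer_prone": True, "extended_lifespan": True, "anoxia_tolerant": True}
    return {"dauer_prone": False, "extended_lifespan": False, "anoxia_tolerant": False}

# daf-2(e1370) at the non-permissive temperature ~ reduced DAF-2 signaling.
print(predicted_outputs(daf2_signaling_active=False))
# Wild type in replete conditions ~ active DAF-2 signaling.
print(predicted_outputs(daf2_signaling_active=True))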
Metabolic regulation is linked to anoxia tolerance
Several daf-2 alleles induce a long-term anoxia or high-temperature anoxia/hypoxia survival phenotype; these phenotypes are suppressed by mutations in daf-16 (Mendenhall et al., 2006; Scott et al., 2002). An RNAi screen of genes known to be up-regulated by DAF-16 led to the identification of the gpd-2 and gpd-3 genes; these genes are nearly identical at the amino acid level and encode two of the four glycolytic enzyme isoforms of glyceraldehyde-3-phosphate dehydrogenase (GAPDH; GPD-2/3) (Mendenhall et al., 2006). The daf-2(e1370);gpd-2/3(RNAi) animal exposed to one day of anoxia (28°C) or long-term anoxia (20°C) has a significantly reduced viability in comparison to daf-2(e1370) animals. The gpd-2/3(RNAi) animals survive short-term anoxia exposure yet are often impaired. These observations demonstrate that the physiological state generated by the daf-2(e1370) mutation is capable of protecting somatic tissue during anoxic stress and that knockdown of gpd-2 and gpd-3 suppresses the daf-2(e1370) anoxia tolerance phenotype. Other genes involved with glycolysis were knocked down by RNAi but did not result in an anoxia sensitivity phenotype, suggesting that the anoxia-sensitive phenotype due to knockdown of gpd-2/3 may be due to something other than changes in glycolysis (Mendenhall et al., 2006).
The ability to survive periods of environmental stress such as anoxia involves integration of signals emanating from many sources. The extent to which adaptive response programs are activated should correspond to the level or intensity of encountered stresses. Transcription and translation are modulated to decrease production of unnecessary gene products while ensuring proper levels of immediately necessary ones. Execution of the appropriate pathways and processes requires adequate access to energy, specifically ATP. 5'-AMP-activated protein kinase (AMPK) is one of the energy sensors that monitors cellular AMP/ATP ratios and is conserved between humans and nematodes (Beale, 2008). Upon even small decreases in cellular energy status, AMPK will act on substrates such that catabolic pathways are stimulated and anabolic ones inhibited. Stress triggers of AMPK activation include glucose deprivation, ischemia, oxygen deprivation, exercise and skeletal muscle contraction. However, the key activating trigger for AMPK is probably starvation, making its primary role that of a whole-body energy balancer (Hardie et al., 2006). LaRue and Padilla (2011) evaluated the role of AMPK in anoxia tolerance. While the overall survival rates of wild-type hermaphrodites and daf-2(e1370) animals were not affected by knockdown of aak-2 compared to untreated controls, there was a significant decrease in the number of animals surviving in an unimpaired condition. However, after 4 days of anoxia, aak-2(RNAi) suppressed the survival rate and unimpaired phenotype in both wild-type animals grown at 25°C and daf-2(e1370) animals (LaRue and Padilla, 2011). These observations implicate AMPK as a player in anoxia tolerance that is necessary for preventing loss of coordination during anoxia treatment. Through work with other metazoan species it has been shown that during periods of anoxia a significant rise in the activities of enzymes responsible for glycogen degradation occurs in the liver (Mehrani and Storey, 1995). C.
elegans' simple body design localizes many of the functions accomplished by a variety of organs in higher eukaryotes almost exclusively to the intestine, including carbohydrate storage (McGhee, 2007). In the long-term anoxia tolerant mutant strain daf-2(e1370), metabolism favors production of fat and glycogen in the intestine and hypodermal cells (Kimura et al., 1997). LaRue and Padilla (2011) used carminic acid to investigate the effect of anoxia on levels of stored carbohydrates in wild-type and long-term anoxia tolerant strains including daf-2(e1370). Carminic acid is a fluorescent derivative of glucose that binds to glycogen and trehalose. As expected, animals exposed to long-term anoxia showed a decrease in carminic acid staining post-anoxia, supporting the assumption that carbohydrate stores are utilized as an energy fuel during anoxic stress. They determined that wild-type adults grown at 25°C had higher levels of carminic acid staining in the intestine than control animals grown at 20°C and significantly elevated survival rates when exposed to 3 or 4 days of anoxia relative to 20°C controls. The long-term anoxia tolerant strains daf-2(e1370) and glp-1(e2141) both had high levels of carminic acid staining prior to anoxia exposure. RNAi knockdown of aak-2 suppressed this high level of staining, indicating a reduction in stored carbohydrate levels. When daf-2(e1370);aak-2(RNAi) animals were exposed to 3 days of anoxia they showed impaired motility compared to daf-2(e1370) controls. Furthermore, aak-2 knockdown suppressed the daf-2(e1370) long-term anoxia tolerant phenotype when animals were exposed to extended anoxic stress (4 days). Together, these observations suggest that the level of carbohydrate available to the worm for use as fuel at the time it encounters anoxia can influence its ability to tolerate the stress, and that preconditioning at 25°C may operate at least in part by increasing the amount of stored carbohydrate available during anaerobiosis.
Interestingly, AMPK activity has also been implicated as the master metabolic regulator of lifespan extension in C. elegans, particularly under starvation conditions. There is evidence that aak-2 promotes lifespan extension in IIS mutants such as daf-2 in a daf-16-independent manner (Apfeld et al., 2004). AMP/ATP ratios do not differ between wild-type and daf-2, suggesting that the longevity phenotype of daf-2 mutants is not simply due to an altered ratio of the two nucleotide molecules. Furthermore, individuals with the daf-16(mu86);aak-2(ok524) genotype have a reduced lifespan compared to wild-type animals or individuals carrying each mutation separately. While the long-term anoxia tolerant phenotype of daf-2(e1370) is completely suppressed by loss of daf-16, loss of aak-2 does not reduce the overall survival rate but instead significantly affects post-anoxia health. Taken together, these observations suggest the genes function to influence lifespan and anoxia-tolerance phenotypes via separate pathways. It will be of interest to determine how other metabolic mutants, such as daf-16(mu86);aak-2(ok524), fare in long-term anoxia.
It is possible that alternative forms of carbohydrates naturally present in C.
elegans may play a role in the extreme phenotypes of longevity and long-term anoxia tolerance. For example, trehalose is a glucose disaccharide that is thought to participate in responses to a wide variety of stresses including heat, desiccation, hypoxia, oxidative stress and others. It has been proposed that trehalose exerts its stress-protective effects through protein stabilization (Hottiger et al., 1994; Singer and Lindquist, 1998). Lifespans of young-adult animals were optimally extended (by 32%) when the animal was exposed to 5 mM trehalose but not by other disaccharides (Honda et al., 2010). A decrease in age-associated decline was seen within a few days of initial exposure to the sugar, and the lifespan extension effect was greater in older animals than in younger ones. Furthermore, trehalose-treated animals had an extended reproductive span that was due not to reduced daily progeny production but to prolonged self-fertility. Animals fed trehalose also showed other evidence of slowed aging and senescence, including a delay of the age-associated decline in pharyngeal pumping and a reduced rate of accumulation of age pigment. Interestingly, Drosophila adults overexpressing tps-1, trehalose-phosphate synthase, and with a confirmed increase in trehalose production had a reduced recovery time following anoxia exposure (Chen and Haddad, 2004). Furthermore, the authors present evidence and an argument supporting the role of trehalose as a protein stabilizer that operates during stresses such as anoxia. The role of trehalose in the anoxia tolerance of C. elegans has not yet been clearly established. Considering the importance of metabolic factors in anoxia tolerance, it would be of interest to determine if trehalose plays a role in establishing the anoxia-tolerant phenotype.
Loss of ceramide signaling confers hypersensitivity to anoxia
The alleles identified to date that influence anoxia tolerance are mutations that lead to an increase in anoxia survival and were previously shown to influence stress responses, germline function or lifespan. Recently, a mutation in the hyl-2 gene was isolated that leads to anoxia sensitivity in the adult hermaphrodite (Menuz et al., 2009). In carrying out a genetic screen, the researchers specifically sought mutations that suppressed 24-hour anoxia survival at 20°C; this led to the identification of the hyl-2(gnv1) allele. The hyl-2 gene encodes one of three ceramide synthases and has homology to Lag1p (yeast longevity assurance gene). Two alleles of a related ceramide synthase, hyl-1(gk203) and hyl-1(ok976), carry loss-of-function mutations. In contrast to hyl-2(gnv1), the two loss-of-function alleles actually conferred an increased tolerance to 48 hours and 72 hours of anoxia. The HYL-1 and HYL-2 synthases operate to efficiently produce ceramides and sphingomyelins of different lengths. Presence of a functional hyl-1 gene is not sufficient to rescue the anoxia-sensitive phenotype of hyl-2 deficient worms. This suggests that hyl-2 operates to synthesize a specific ceramide required for anoxia tolerance. This is supported by the observation that the daf-2(e1370);hyl-2(gnv1) double mutant has a reduced anoxia survival compared to daf-2(e1370), further suggesting that hyl-1 and hyl-2 work in parallel to affect anoxia tolerance. Ceramides have been implicated as effectors of several biological processes, and it is possible that chemical interactions between ceramides of a specific chemical composition and other molecules may result in regulation of pathways specific to anoxia tolerance. It will be useful to clarify the role of
ceramide signaling in oxygen-deprivation tolerance as an approach to understanding the function of small lipophilic molecules in the regulation of biological processes.
The germline influences anoxia tolerance
As 1-day-old adults, wild-type hermaphrodites actively reproduce via self-fertilization. Through the process of gonadal sheath contraction and dilation of the spermatheca, mature oocytes move into the spermatheca to complete fertilization and are ovulated into the uterus, theoretically making room for the next proximal maturing oocyte to take its place. These steps are initiated by binding of MSP (major sperm protein) to surface receptors on the proximal oocyte (Greenstein, 2005; Miller et al., 2001). Adult hermaphrodites undergoing oocyte maturation, fertilization and ovulation do not survive long-term anoxia (Mendenhall et al., 2009). In contrast, sterile animals that do not undergo oocyte maturation and ovulation (Ex: glp-1(e2141), fog-2(q71) and fem-3(q20)) and animals with reduced progeny due to a reduced rate of ovulation (Ex: ksr-1(ku68)) display a long-term anoxia tolerant phenotype that is daf-16-independent.
The glp-1 gene encodes an N-glycosylated transmembrane receptor that is one of two members of the LIN-12/Notch family of receptors present in C. elegans. Loss-of-function mutations of the glp-1 gene cause germ cells located in the distal gonad, which would normally undergo mitosis, to prematurely enter meiosis, thus preventing formation of self-renewing germ cells and a functional germ line. Therefore, while glp-1(e2141) sterile mutants have a somatic gonad, they are incapable of producing oocytes and sperm (Crittenden et al., 1994; Mendenhall et al., 2009). Anoxia survival analysis of 1-day-old adult glp-1(e2141) hermaphrodites showed them to be long-term anoxia tolerant, with a survival rate of approximately 98% (Mendenhall et al., 2009). LaRue and Padilla (2011) were able to partially suppress the glp-1(e2141) long-term anoxia tolerant phenotype when aak-2 was knocked down via RNAi in the glp-1(e2141); daf-16(mu86) double mutant. It is worth noting that in addition to having an anoxia-tolerant phenotype, sterile glp-1(e2141) mutants also have an increased lifespan relative to wild-type animals (Arantes-Oliveira et al., 2002).
Sterile genetic strains may exist as temperature-sensitive genetic mutants, such as the germline-deficient mutant strain glp-1(e2141), or as gonochoristic mutant strains, such as fog-2(q71), in which females are incapable of producing self-sperm and thus the strain is maintained by mating with males. Additionally, treatments such as feeding animals the cell-cycle inhibitor drug FUDR or laser ablation of the germline precursor cells in L1 larvae will result in sterile animals. There is not only a relationship between sterility and anoxia survival, but sterility also has an influence on increased lifespan. Interestingly, the longevity phenotype of sterile mutants is not due merely to the absence of offspring production. Laser ablation of the germline precursor cells results in animals without a germline yet still possessing an apparently fully developed somatic gonad; such animals show the lifespan extension phenotype. However, ablation of both the germline and somatic gonad precursor cells results in sterile adults with a wild-type lifespan. Since both ablation treatments render the worm sterile, the difference in lifespan cannot be attributed just to reproductive cost. Instead, these studies present substantial evidence that the somatic gonad and germline both influence lifespan, in contrasting manners (Hsin and Kenyon, 1999; Kenyon, 2010). While absence of a germline in glp-1(e2141) results in a long-term anoxia tolerant phenotype, the role, if any, played by the somatic gonad in anoxia tolerance has not been determined.
The anoxia-tolerant phenotype of the unmated fog-2(q71) animal is suppressed by mating with a fertile male. This observation supports the theory that the maternal soma is under the regulatory control of the germline. While the mechanism by which the germ line regulates maternal long-term anoxia sensitivity is not yet known, exceptions to the observation that sterility induces long-term tolerance are known. First, mutant strains have been identified that are sterile yet long-term anoxia sensitive. The sterile strains spe-9(hc52ts) and fer-15(hc15) are capable of completing the initial steps of oocyte maturation but produce no viable offspring, yet both strains are long-term anoxia sensitive (survival rate = 23.4% and 2.6%, respectively). This sensitivity is presumably due to an altered physiology triggered in the somatic tissues in response to signals originating in the germline. Second, in a contrasting exception, the mutant strain daf-2(e1370) is not only long-term anoxia tolerant but also fertile (Larsen et al., 1995). At 15°C daf-2(e1370) has a slightly smaller brood size than wild-type (81% of wild-type). Furthermore, the daf-2(e1370) animal lays eggs over an extended period of adulthood (from adult day 1-6) compared to wild-type (from adult day 1-4) (Larsen et al., 1995; Tissenbaum and Ruvkun, 1998). It is unlikely that a reduction in average daily progeny production alone is sufficient to account for the strong long-term anoxia tolerant phenotype seen in a 1-day-old adult daf-2(e1370) animal. Instead, the reduction in function of DAF-2 probably operates by acting on substrates and in pathways not yet identified. Finally, it is important to note that unlike daf-2(e1370), the anoxia-tolerant phenotype of these sterile reproductive mutants is daf-16-independent. Evidence thus far suggests that the long-term anoxia tolerant phenotype can be established via multiple pathways that may genetically overlap but which are not identical.
Anoxia tolerance is sex influenced
The overwhelming majority of stress response studies, at the genetic and cellular level, have been conducted in adult hermaphrodites. This is likely due to the ease of obtaining and maintaining hermaphrodite animals in comparison to males. Yet, analysis of males and their response to stress may provide insight into the understanding of mechanistic responses to and survival of anoxia. The wild-type male and hermaphrodite differ in several respects aside from the obvious sex-differentiated phenotypes such as different germline structure and function. For example, the lifespan of males is shorter than that of hermaphrodites, and male lifespan is dependent upon whether the individual male is solitary or within a group of other males (Gems and Riddle, 2000). Gems and Riddle interestingly found that solitary males have a longer lifespan than males cultured as a group, indicating that male-male interactions reduce lifespan.
Survival of long-term anoxia also differs between wild-type adult hermaphrodites and males. One-day-old wild-type and daf-16(mu86) mutant hermaphrodites survive 72 hours of anoxia at approximately 10% and 7%, respectively, and are considered long-term anoxia sensitive. In contrast, wild-type and daf-16(mu86) mutant males survive long-term anoxia with a viability >98% (Mendenhall et al., 2009). The animals maintain normal motility and demonstrate an unimpaired phenotype after long-term anoxia exposure. Furthermore, males that were raised in the presence of hermaphrodites, and likely had an opportunity to mate, still maintained an increased capacity to survive long-term anoxia relative to age-matched hermaphrodites, indicating that mating and interaction with other males did not compromise the long-term anoxia survival phenotype. The tra-2(q276) mutant was used to show that the long-term anoxia survival phenotype observed in males is dependent on male phenotype rather than male genotype. The tra-2(q276) mutant is phenotypically male but, instead of having the male genotype (X0), is genotypically hermaphroditic (XX). The tra-2(q276) animal survived long-term anoxia at a rate similar to that of wild-type males, indicating that something inherent about the male phenotype confers anoxia tolerance.
Combined, these studies provide further evidence that anoxia tolerance is strongly influenced by physiology and genotype. The ability of an individual to survive anoxic stress is determined by the interplay of multiple pathways in a complex fashion, as evidenced by the wide range of biological functions attributed to the many encoded proteins and enzymes recognized to influence anoxia tolerance.
The multifactorial architecture of anoxia tolerance

As additional work is conducted to identify the mechanisms by which anoxia-tolerant animals survive, recover, and protect tissues, it is unlikely that a single important regulatory gene will be identified. Instead, we propose that it is through a complex interaction of multiple physiological processes that animals are able to survive oxygen-deprivation stress and, specifically, anoxia. In this chapter we have discussed a wide range of biological processes that naturally, or in the mutant condition, enhance or reduce anoxia tolerance. We have also related the observation that organisms that survive anoxic stress often have other stress-resistant phenotypes as well. For example, the long-term anoxia tolerant strains glp-1(e2141) and daf-2(e1370) also share an increased longevity phenotype. However, longevity and anoxia-tolerance phenotypes are not superimposable. The extended lifespan of glp-1 requires the absence of a functional germline and the presence of an intact somatic gonad. In contrast, daf-2 mutants have full reproductive capacity and have nearly wild-type brood sizes. The long-term anoxia tolerant phenotype is daf-16-dependent in daf-2 mutants, but daf-16-independent in sterile mutants. The differences in physiology between these two strains are numerous: for example, daf-2 mutants accumulate fat and glycogen while glp-1 mutants do not, and daf-2 mutants are dauer-constitutive at 25°C but glp-1 mutants are not. The relationship between sterility-induced anoxia tolerance and longevity is not yet clear, and strains have been identified that carry one but not both characteristics. The unmated fog-2 mutant has a wild-type lifespan and functional oocytes, yet this mutant is long-term anoxia tolerant unless induced to have offspring by mating. Not all lifespan-extended mutant strains have an increase in anoxia tolerance relative to wild-type animals, suggesting that at least an overlap in the mechanisms governing the two phenotypes exists but that they are not identical. Currently, the mechanism by which the germline regulates anoxia tolerance remains unclear.

Anoxia tolerance is also under the control of metabolic factors. Reduced signaling through the insulin-IGF pathway confers long-term anoxia tolerance. Animals with reduced caloric intake (which may mimic reduction in insulin signaling), such as the dauer stage of larval development and a mutation in eat-2 (animals have a reduced pharynx pumping rate and therefore reduced food intake), have been found by our lab to have an elevated anoxia survival rate. Complementary to this observation is the finding that long-term anoxia tolerant strains have elevated levels of fuel-source carbohydrates relative to long-term anoxia sensitive strains. These elevated carbohydrate stores and the associated long-term anoxia tolerance can be suppressed by mutations in aak-2, the kinase subunit of the AMPK energy sensor, in some but not all long-term anoxia tolerant strains. The influence of aak-2 on anoxia tolerance is linked to the activation of cellular stress responses that are under the control of the transcription factor daf-16.
The ability to survive extended periods of anoxia is arguably of little adaptive value if the animal is unable to resume normal activity such as foraging and reproduction after reoxygenation. It is reasonable to expect that adaptive mechanisms have evolved that protect or repair tissues when damage is incurred during stress. Specific genes have been recognized as required for post-anoxia health, and they function in diverse biological processes. For example, gpd-2 and gpd-3 are necessary for tissue maintenance during anoxic stress and function in the glycolytic pathway, while hyl-2, a ceramide synthase required for short-term anoxia survival, functions in a seemingly unrelated manner to provide proper-length fatty acyl chains, which serve as the precursors of membrane sphingolipids and cell signaling molecules.

The role of environmental factors such as temperature or food source and availability represents yet another class of factors influencing anoxia tolerance. It is likely that environmental factors exert their influence by altering the rate of reactions associated with the biological processes discussed above. We can view these environmental factors as persistent modern reminders of the pressures to which organisms were obliged to adapt or die. Having evolved under the influence of a range of environmental pressures, it is not surprising that multiple mechanisms persist to cope with diverse environmentally induced stresses.

Long-term anoxia survival requires an overall reduction in metabolic rate and the ability to provide enough energy to sustain the animal through the oxygen deprivation period and allow maintenance of tissue integrity. Figure 8 depicts a model of the multifactorial character of the anoxia tolerant phenotype. It is likely that a long-term anoxia tolerant strain is able to survive anoxia stress at a high rate due to one or more of the biochemical branches that lead to an anoxia tolerant phenotype. Within each branch specific adaptive responses occur, governed by a particular set of genes that may overlap but are not identical to the set of genes working in the other branches. Therefore, not all long-term anoxia tolerant strains of C. elegans survive via a common mechanism.
Conclusions – suspended animation and human health-related issues

Oxygen deprivation is central to many life-threatening human health issues (Semenza, 2010). The extremely high economic and social cost associated with traumas such as blood loss, drowning, suffocation and toxins that affect pulmonary or cardiac function, in addition to diseases that compromise pulmonary and cardiac function, underscores the importance of understanding responses to oxygen deprivation. In addition to these obvious human health-related issues, oxygen levels also influence the progression of tumors in individuals afflicted with cancer. It is known that microenvironments within solid tumors exist and that cells in regions of low oxygen are often more resistant to chemotherapeutic or radiation treatment. The solid tumor cells that are further away from the vascular system not only have less chemotherapeutic drug delivered to them but also experience a decrease in oxygen levels. This reduction in oxygen can influence the progression of cell division, leading to a population of cancer cells that are not rapidly dividing yet remain viable and quiescent. When these cells are re-exposed to oxygen it is possible that they resume rapid cell cycle progression and seed further tumor progression. Therefore, the understanding of how cells, tissues and whole organisms respond to and survive oxygen deprivation is not only of scientific interest but vital in the context of human health-related issues.

There are many important approaches that researchers are taking to understand the implications and effects oxygen deprivation has on organisms. Use of C. elegans as a genetic model system allows one to characterize many aspects of hypoxia and anoxia responses, including the influence on development, cell cycle progression, and organ structure and function. Furthermore, the capacity to use cellular and genetic tools to dissect molecular pathways that are involved with oxygen deprivation survival further underscores the value of C. elegans as a model system. Finally, the ability to generate a state of anoxia tolerance through genetic manipulation or chemical means will aid in understanding how organisms with complex tissues respond to and survive oxygen deprivation. The induction of suspended animation in C.
elegans and zebrafish led to the pursuit of molecules that induce suspended animation in more complex systems (Roth and Nystul, 2005). The general idea is to treat individuals experiencing a traumatic event, such as blood loss leading to severe oxygen deprivation of vital organ systems, by inducing a state of suspended animation so that cellular processes (such as cell death) arrest or slow. Induction of suspended animation may "buy time" until other treatments can be administered. One molecule under intense investigation for inducing a state of suspended animation or a hypometabolic state is hydrogen sulfide (Blackstone et al., 2005; Roth and Nystul, 2005). Remarkably, hydrogen sulfide can reversibly induce a hypometabolic state in which core body temperature can be reduced in mammals (Blackstone et al., 2005; Blackstone and Roth, 2007). Hydrogen sulfide, or molecules with similar capabilities, provides a possible therapeutic approach to treating individuals with life-threatening events that compromise oxygen delivery to vital organs (Aslami et al., 2010; Szabo, 2007; Wagner et al., 2009). Like many new ideas that address biologically and medically complex problems, the ability to use basic science from model systems to identify treatments and therapeutics for the benefit of human health-related issues is going to be costly, perhaps controversial, and quite complex at the biological level (Olson, 2011). However, given the profound effect oxygen deprivation, including anoxia, has on living systems, it is of great interest to continue the onward march toward understanding the molecular nature of anoxia tolerance in biological systems.

Fig. 1. C. elegans exposed to anoxia enter into a reversible state of suspended animation in which development and cell cycle progression will arrest. Shown are embryos collected from a gravid adult and exposed to normoxia or anoxia for 24 hours. The anoxia-exposed embryo will arrest development for several days and yet remain viable (shown is an embryo exposed to 24 hours of anoxia). The post-anoxia animal will resume development when air is added back to the environment. Scale bar = 10 µm for embryos and 20 µm for the post-anoxia larva.

Fig. 2. Prophase blastomeres are altered in response to anoxia. A) Prophase blastomeres of embryos exposed to anoxia have a diminished level of the kinetochore protein HCP-1. Embryos were collected from adult hermaphrodites and exposed to either normoxia or a brief period of anoxia and immediately fixed, or allowed to recover for 30 minutes in air (post-anoxia) after anoxia treatment. After treatment the embryos were collected, fixed and stained with DAPI to recognize DNA, mAb 414 to recognize the nuclear pore complex, anti-phosphorylated Histone H3 at Serine 10 (Phos H3) to recognize mitotic nuclei, and anti-HCP-1 to recognize the kinetochore. Shown are representative prophase blastomeres analyzed using confocal microscopy. Scale bar = 2 µm. B) Lamin localization is diminished in the nucleoplasm of embryos exposed to anoxia. Embryos were exposed to normoxia or anoxia for the specified time and stained with DAPI to recognize DNA, mAb 414 to recognize the nuclear pore complex, and anti-Ce-lamin to recognize lamin. Shown is a representative prophase blastomere from embryos exposed to the noted environment and analyzed using confocal microscopy. Scale bar = 5 µm.

Fig. 3.
Anti-NUP50 localizes to the nuclear membrane and has a different pattern in anoxia-exposed prophase blastomeres. The anti-human NUP50 antibody detects antigen localized to the nuclear membrane in interphase cells (I), which is then diminished by prophase (P, arrow) in normoxic embryos. In the prophase blastomeres (P) of anoxic embryos the antigen remained associated with the nuclear membrane (arrowhead). Shown is a representative embryo analyzed by confocal microscopy.

Fig. 4. The gene encoding the nucleoporin, npp-16/NUP50, is required for anoxia-induced prophase arrest. npp-16(ok1839) embryos were exposed to normoxia or anoxia and then stained with DAPI to detect DNA, Phos H3 to detect the mitotic marker phosphorylated Histone H3 at Serine 10, and mAb 414 to detect the NPC. The npp-16(ok1839) embryos exposed to normoxia contain normal prophase (P) blastomeres, yet the npp-16(ok1839) embryos exposed to anoxia contain a decrease in prophase blastomeres and an increase in abnormal nuclei (Ab) and abnormal NPC structure (arrow). Scale bar = 20 µm.

Fig. 5. hda-2(RNAi) embryos are sensitive to anoxia. Embryos were obtained from gravid adults and fixed and stained with DAPI to detect DNA and mAb 414 to detect the nuclear pore complex. A) Unlike normoxic controls, the hda-2(RNAi) embryos exposed to anoxia have abnormal blastomeres and prophase (P) blastomeres in which the chromosomes do not associate with the NE. Scale bar = 19 µm. B) Enlarged image of nuclei observed within the hda-2(RNAi) embryos exposed to anoxia. A significant number of the blastomeres contain a variety of abnormal nuclei. Scale bar = 5 µm.

Fig. 6. Wild-type C. elegans adults survive and fully recover motility after 24 hours of anoxia, whereas animals exposed to 72 hours of anoxia have a reduced survival rate and impaired motility. Wild-type animals were raised to one-day-old adults and then exposed to anoxia for 24 hours or 72 hours. A) One-day-old adult hermaphrodite prior to anoxia exposure. B) The same adult following 24 hours of anoxia and given 1 hour of recovery in normoxia. The animal recovered a normal pattern of movement and resumed egg-laying within an hour of reoxygenation. C) One-day-old adult hermaphrodites in suspended animation immediately following 72 hours of anoxia. Note the slightly curved body posture (arrow). D) Example of an impaired survivor following 72 hours of anoxia and 24 hours of recovery in normoxia. Note that the impaired animal has consumed the bacterial food in a fan-shaped halo surrounding the anterior head region. All anoxia exposures were conducted at 20°C. Scale bar = 100 µm (A, B, D); scale bar = 20 µm (C).

Fig. 8. The anoxia tolerant phenotype is multifactorial in nature. Biological factors that promote enhanced anoxia survival are shown as activating arrows, while factors or conditions that decrease the rate of survival during anoxia exposure are shown as inhibiting blunt-ended lines. We propose that the level of anoxia tolerance for any particular strain is a function of the interaction of the various factors shown in the diagram.

Table 1. Various methodologies used to expose C. elegans to anoxia.

Anoxia-arrested blastomeres and genes required:
Interphase nucleus: based on centrosome location, a specific stage of interphase arrest is not observed; genes required for interphase arrest: none identified to date.
Late prophase nucleus (normoxia vs. anoxia): chromosomes fully condensed; chromosomes "dock" near the inner nuclear membrane.
Clinical and laboratory predictors at ICU admission affecting course of illness and mortality rates in a tertiary COVID-19 center Background Survival rates of critically ill COVID-19 patients are affected by various clinical features and laboratory parameters at ICU admission. Some of these predictors are universal but others may be population specific. Objective To determine utility of baseline clinical and laboratory parameters in a multivariate regression model to predict outcomes in critically ill COVID-19 patients in a tertiary hospital in Croatia. Methods 692 critically ill COVID-19 patients treated during a 10-month period were included in this retrospective observational trial to assess the risk factors determining mortality rates. Various anthropometric features, comorbidities, laboratory parameters, clinical features and therapeutic interventions were included in the analysis. ICU mortality rates and length of ICU stay were primary endpoints analyzed in this study. Results After multivariate adjustment, only the SOFA score, PaO2/FiO2 and history of arterial hypertension had an effect on ICU mortality, as well as the need to initiate invasive mechanical ventilation. Increase in PaO2/FiO2 over the first 7 days was present in survivors, while reverse applied to SOFA. Length of ICU stay was 9 (4–14) days. Factors affecting survival times were admission from wards, congestive heart failure, invasive mechanical ventilation, bacterial superinfections, age > 75 years, SOFA score, and serum ferritin, CRP and IL-6 values at ICU admission. Conclusion Elevated inflammatory biomarkers and SOFA score at ICU admission were detected as significant predictors of ICU mortality in this cohort, while initiation of invasive mechanical ventilation is the most relevant interventional mortality risk factor in critically ill COVID-19 patients. Introduction The pandemic of coronavirus disease (COVID-19) struck the world and the healthcare system of almost every country so severely that the World Health Organisation (WHO) declared it as a public health emergency. 1 In a year there were around 300,000 cases of COVID-19 2 recorded in Croatia. Most of them with mild flu-like symptoms or no symptoms at all, and the others requiring hospitalization and approximately 10% of hospitalized patients require ICU admission due to severe course of disease caused by dysregulated immune response which may cause coagulopathy, 3 massive alveolar damage and progressive respiratory failure, 4 all of which are linked to adverse outcomes. Some systematic reviews and meta-analyses have already linked severe COVID-19 to history of arterial hypertension, 5,6 diabetes mellitus, 7 advanced age and male sex 8 in patients with poor outcome. Due to differences in patient population and geographical distribution the percentage of hospitalized COVID-19 patients demanding ICU admission varies from 4% 9 to 32%. 10 The data on clinical characteristics and factors affecting outcomes of critically ill patients with COVID-19 are of great importance in reducing mortality rates which, among ICU admitted patients, vary from 16%, 11 38%, 10 62%, 12 67% 13 to 78%. 14 The first case of coronavirus infection in Croatia was confirmed on February 25, 2020. Following the growing incidence of COVID-19, the number of patients with severe symptoms of COVID-19 started to increase simultaneously, which caused a major challenge for the healthcare system on a national level. 
By the decision of the Ministry of Health in March 2020, University Hospital Dubrava was repurposed to be the first and, so far, the only national COVID-19 hospital in Croatia. From that point onwards the hospital was organized as the Primary Respiratory Center, taking care of COVID-19 patients from the Zagreb area and surrounding counties. Special subunit Primary Respiratory Intensive Center (PRIC) was formed in order to provide invasive or noninvasive respiratory support and any other form of intensive care. Being so, most COVID-19 patients in the country were admitted to UH Dubrava. Critically ill COVID-19 patients were treated by medical staff (with approximately one third physicians with critical care medicine experience) from University Hospital Dubrava, as well as from University Hospital Center Zagreb, University Hospital Center Sestre Milosrdnice, University Hospital Merkur, University Hospital Sveti Duh and Children's Hospital Zagreb which were deployed to UH Dubrava to provide assistance. Since the outbreak of COVID-19, numerous reports have been published, but more studies focusing on identifying risk factors affecting survival are still needed due to diverging findings in various subpopulations. The aim of this paper was to identify the effect of comorbidities, laboratory parameters and demographic and anthropometric factors on survival rates of critically ill COVID-19 patients treated in a tertiary hospital in continental Croatia. Methods This study was designed as a retrospective observational study and it included COVID-19 patients with a positive polymerase chain reaction (PCR) test admitted to the combined intensive care unit (ICU) organized in specialized PRIC UH Dubrava between April 1, 2020, and February 1, 2021. After institutional ethics board approval, data collection was performed from electronic patient data records (iBIS, IN2, Zagreb, Croatia). Recorded variables were: basic demographic characteristics (gender, age), organizational aspects (patient admitted to the ICU from other departments of PRIC UH Dubrava or admitted directly from ICUs in other hospitals in continental Croatia), anthropomorphic characteristics (body mass index -BMI, kg/m 2 ), presence of major comorbidities (arterial hypertension, diabetes mellitus, congestive heart failure defined as NYHA status > II, chronic kidney disease defined as glomerular filtration rate < 60 ml/min/1.73 m 2 and chronic hematologic disorders), Charlson comorbidity index (CCI), sequential organ failure assessment (SOFA) score, duration of COVID-19 disease before ICU admission, hospital infection rate (stratified by site and type of bacteria or fungi), thromboembolic incident rate (stratified by severity of incident and modality of treatment), and the following laboratory parameters at ICU admission: white blood cell count (WBC, x10 9 / L), neutrophil and lymphocyte percentage in WBC, Horovitz quotient (PaO 2 /FiO 2 , mmHg), serum D-dimer (mg/L), serum ferritin (mg/L), serum procalcitonin (ng/ml), serum C-reactive protein (CRP, mg/L), serum IL-6 (pg/ml), and glomerular filtration rate (ml/min/1.73 m 2 ). Endpoints were defined as ICU and hospital mortality, length of mechanical ventilation and length of ICU stay. Statistical analysis Data is presented as tables and charts. Continuous variables are displayed as either mean and standard deviation (SD) for values with Gaussian distribution, or median and interquartile range for data that does not follow normal distribution. 
Normality of distribution was assessed using the Shapiro-Wilk test. Categorical variables are displayed as counts and percentages. Differences in independent continuous variables between 2 groups were tested for statistical significance using Student's t-test for independent samples or the Mann-Whitney U test, depending on the distribution of the data. For more than two groups, two-way analysis of variance (ANOVA) was used to test for significance between normally distributed groups and the Kruskal-Wallis test was used for variables without normal distribution. For dependent continuous variables, Student's t-test for paired samples or the Wilcoxon rank test were used. Differences in categorical variables were tested for statistical significance using the χ² or Fisher's exact test for 2 × 2 tables. Multivariate logistic regression was performed to calculate the predictive value of various variables, with the adjusted odds ratio and 95% confidence interval (CI), on survival rates in the ICU. Selection of variables included in the model was performed by first performing univariate analysis of each variable, and then discarding values for which P values were > 0.2. After selection of variables, variables in the model were tested for multicollinearity and variables with variance inflation factor (VIF) > 5 were flagged for further analysis. The model was then retested with each of the flagged variables excluded, and the model where the remaining variables had VIF < 5 and the highest value of receiver operating curve area under the curve (ROC-AUC) was used. Fit of the model was also evaluated using the Hosmer-Lemeshow goodness of fit test and the Nagelkerke R² statistic. Multivariate Cox regression survival analysis was performed to assess the adjusted and non-adjusted hazard ratio (and the 95% CI) of the aforementioned variables on ICU survival times. Change of continuous variables with a statistically significant predictive value of ICU mortality during the first week of ICU stay was tested for statistical significance using repeated measures analysis of variance (RM-ANOVA) with post-hoc Bonferroni correction. P values < 0.05 were considered statistically significant. Software packages used for statistical analysis and data visualization were jamovi v1.6.16 15 with the survminer 16 and finalfit 17 modules and JASP v0.14.1. 18

Results

From March 1, 2020, to February 1, 2021, of 3736 patients admitted to PRIC UH Dubrava because of COVID-19, 692 (18.5%) patients were admitted to PRIC-IC (Fig. 1); 320 (46.2%) from the hospital ward, 134 (19.4%) from the emergency department (ED), and 134 (19.4%) from an ICU in another hospital. Median time elapsed from positive SARS-CoV-2 test to ICU admission was 5 (1–9) days. While most patients had severe ARDS according to the current definition of ARDS, 19 Horovitz quotients were even lower in patients admitted from wards, while patients admitted from the ED had a shorter duration of illness compared to other groups. There were no differences between these groups in other recorded parameters (Table 1).

Ventilatory support

A large proportion of patients started with HFNO, of which a large proportion continued with invasive ventilation. Distribution of patients and their ventilatory support, as well as their survival rates, are depicted in Fig. 2. Median duration of successful HFNO (in 89 patients) was 6 (4–9) days, and median duration of unsuccessful HFNO was 3 (1–5) days. Duration of invasive ventilation was 7 (3–12) days. 6 patients (0.9%) received extracorporeal membrane oxygenation support.
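To make the variable-selection procedure described in the statistical analysis above more concrete, the following is a minimal Python sketch of the same workflow: univariate screening at p < 0.2, a VIF-based multicollinearity check with flagging at VIF > 5, and a multivariate logistic model scored by ROC-AUC. The study itself used jamovi and JASP; the data frame, outcome column and candidate-predictor names here are hypothetical, and the sketch simply drops all flagged variables rather than re-testing each exclusion as described in the Methods.

```python
# Illustrative sketch (not the authors' code) of univariate screening,
# VIF checking, and multivariate logistic regression with ROC-AUC.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.metrics import roc_auc_score

def fit_mortality_model(df, outcome, candidates):
    # 1) Univariate screening: keep predictors with p < 0.2
    kept = []
    for var in candidates:
        X = sm.add_constant(df[[var]])
        p = sm.Logit(df[outcome], X).fit(disp=0).pvalues[var]
        if p < 0.2:
            kept.append(var)

    # 2) Multicollinearity check: flag predictors with VIF > 5
    X = sm.add_constant(df[kept])
    vifs = {v: variance_inflation_factor(X.values, i + 1) for i, v in enumerate(kept)}
    flagged = [v for v, vif in vifs.items() if vif > 5]

    # 3) Multivariate model on the retained set, evaluated by ROC-AUC
    final_vars = [v for v in kept if v not in flagged]
    X = sm.add_constant(df[final_vars])
    model = sm.Logit(df[outcome], X).fit(disp=0)
    auc = roc_auc_score(df[outcome], model.predict(X))
    return model, vifs, auc
```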
Renal replacement therapy

41 patients (5.9%) received renal replacement therapy (RRT). 16 of those patients received intermittent hemodialysis (IHD), 18 received continuous renal replacement therapy (CRRT), 2 received both IHD and CRRT, and 5 patients continued with dialysis due to end-stage renal disease.

Factors affecting survival

Differences in survival rates and various baseline factors between survivors and non-survivors are displayed in Table 2. Factors associated with mortality are shown in Table 3. In multivariate analysis, only the SOFA score, PaO2/FiO2 and history of arterial hypertension had an association with outcome (Fig. 3). Over the first 7 days, survivors' PaO2/FiO2 and SOFA both showed a statistically significant improvement, while there was no statistically significant change of these parameters in non-survivors. Estimated marginal mean PaO2/FiO2 was 121.7 mmHg at admission and 168.3 mmHg at day 7 in survivors vs 96.8 mmHg at admission and 104.3 mmHg at day 7 in non-survivors (p<0.001 between groups and within group). SOFA score at admission was 3.0 and 3.1 at day 7 in survivors, and 4.1 at admission and 5.7 at day 7 in non-survivors (p<0.001 between groups and within group) (Fig. 4). After multivariate adjustment for procedures and complications during ICU stay, only the need to initiate invasive mechanical ventilation was a significant predictor of mortality in the ICU (OR 11.8, 95% CI 7.4–19.2, p<0.001), while bacterial superinfection rate and renal replacement therapy were significant factors in univariate analysis, but significance was lost after multivariate adjustment (Table 4).

Factors associated with duration of ICU stay

Length of ICU stay was 9 (4–14) days. Median survival for mechanically ventilated patients was 11 days, and 24 days for patients that were not mechanically ventilated. For patients with bacterial superinfections, median survival was 13 days, and 8 days for those without bacterial superinfections. Factors affecting survival times after multivariate adjustment was performed were admission from wards, as opposed to direct transfer from the emergency department or ICUs in other hospitals (HR 0.69, p = 0.044 for patients who were not admitted from hospital wards), congestive heart failure (HR 0.55, p = 0.015 for patients without CHF), invasive mechanical ventilation (HR 0.12, p<0.001 for patients who were not mechanically ventilated), occurrence of bacterial superinfections (HR 2.31, p<0.001 for patients without bacterial superinfections), age > 75 years (HR 3.46, p<0.001 compared to patients between 45 and 65 years of age), SOFA score (HR 1.1, p = 0.016 per each unit increase), serum ferritin (HR 1.03, p<0.001 per each 0.1 mg/L increase), CRP (HR 0.74, p = 0.01 per each 100 mg/L increase) and IL-6 (HR 1.11, p<0.001 per each 0.1 mcg/L increase) (Table 5, Figs. 6–9).

Discussion

The aim of this retrospective observational study was to assess how the course of illness during ICU stay and risk factors present at ICU admission affect survival rates of 692 COVID-19 patients treated in the PRIC-IC of a tertiary institution in continental Croatia. In terms of patient characteristics, certain factors which affect the reported survival rates must be stated in order to clarify the obtained results.
First, since UH Dubrava was repurposed to become a COVID-19-exclusive hospital in order to minimize potential horizontal SARS-CoV-2 spread in other hospitals in continental Croatia, a specialized ward was organized to treat patients who require high-flow nasal oxygen (HFNO) therapy. Because of that, survival rates might be skewed, since only patients with severe clinical presentation and imminent HFNO failure with a need to initiate invasive mechanical ventilation (per hospital protocol, ROX indices < 3.8 were used as one of the ICU admission criteria 20) were admitted to the ICU. Therefore only 89 patients (12.9%) treated with HFNO completed their ICU stay without need for intubation and invasive mechanical ventilation – a number that is in general lower than previously reported, 21–24 but can also be explained by much lower PaO2/FiO2 ratios at ICU admission compared to other studies. 4,21,22,24–28 The percentage of patients who received invasive mechanical ventilation (IMV) in this study is relatively high – 80.5%, which is among the higher ones reported, with other studies reporting varying percentages: from 3% 29 to 87%. 8 Initiation of IMV is one of the most important ICU mortality risk factors in our study, with mortality of 83.8% for mechanically ventilated patients, a multivariate OR of survival of 11.80 (7.40–19.21, p<0.001) and an HR of 0.12 (0.04–0.39, p<0.001) for patients who were not mechanically ventilated compared to those who were. These numbers are among the higher ones reported 30–33 when general numbers are analyzed, but it must be stated that patients included in this study are among the oldest ones reported so far. 12 In the cohort analyzed in this study, age is one of the defining factors determining mortality rate in the univariate analysis, with survivors being 9 years younger than non-survivors (65 (56–73) vs 74 (67–79) years, P<0.001), and an odds ratio (OR) of 1.06 (1.04–1.07, p<0.001) per each year of age. This finding is in accordance with previously published data. 12 After subdividing the cohort into 4 age groups (< 45, 45–65, 65–75, > 75) and performing multivariate Cox regression survival analysis, patients older than 75 years of age were identified as being at most risk compared to the reference group (45–65), with an HR of 3.46, a result which is in general agreement with previously published data such as Grasselli et al., 8 where non-survivors had a hazard ratio (HR) of 1.75 per every ten-year increase in age, and Wu et al., 29 with an HR of 6.75 in the group over 65 years of age compared to patients younger than 65 years. In interpreting odds ratios and case fatality ratios in general, the nature of the regression model used in our study must be taken into account, because it also included the CCI, which uses age as one of the components in calculating the final score. 37 While simultaneous use of both age and CCI may seem to add multicollinearity bias, variance inflation factors for those two parameters were well inside tolerated values – 2.43 and 2.17, respectively. 38 It must also be noted that the cohort analyzed in this study was much older compared to the population ages reported in other studies, with a median age of 72 years, vs 63, 8 60.5, 39 and 51 29 years of age.
While increased BMI has been linked in multiple studies with increased severity of COVID-19 clinical presentation and higher mortality rates, 40,41 our findings suggest that there is no statistically significant difference in BMI levels between survivors and non-survivors, with both groups falling into the overweight category (29.9 vs 29.1 kg/m², P = 0.219). One of the factors that must not be overlooked when interpreting these results is the increased age of the cohort. As age progresses, muscle mass is gradually lost and replaced with fat, 42,43 and at an older age BMI is not as reliable a parameter in quantifying obesity as it would be at a younger age. Also, due to general loss of muscle mass, loss of diaphragmatic muscle mass might be one of the factors that contributes to increased case fatality rates of elderly COVID-19 patients, especially those who were mechanically ventilated. 44

The sequential organ failure assessment (SOFA) score, 45 which has become the gold standard in evaluating the severity of organ damage due to a dysregulated immune system response to pathogens (i.e., sepsis), has shown in the studied cohort a statistically significant prognostic value in both the logistic and Cox regression models (OR 1.6 and HR 1.1 per 1-point SOFA score increase, respectively), which is in concordance with previously published data. 14,46,47 The respiratory component of the SOFA score was the prevalent factor affecting the composite score in patients in this study, with a median PaO2/FiO2 of 75 (56–125) mmHg for the whole cohort at ICU admission, and 100 (70–224) for survivors and 69 (55–103) for non-survivors (values by which both subgroups fall into the severe ARDS subgroup according to the Berlin definition 19). Compared to other published data, these values were among the lowest ones reported, in comparison to 160 (114–220) mmHg from Italian 8 ICUs, 135 (101–170) for survivors and 121 (85–151) for non-survivors from Spanish 26 ICUs, and 124 (86–188) from U.S. 39 ICUs. In the studied population, a decrease in PaO2/FiO2 was a significant predictor of ICU mortality, with an OR of 0.96 (0.92–1.00) per 10 mmHg change. Severity of blood gas exchange impairment at ICU admission has been confirmed in other studies as a strong predictor of ICU mortality. 23

In the studied population, the presence of arterial hypertension, while being a risk factor in the univariate analysis (which is in agreement with previously published data 5,6), showed a reduction of risk in the multivariable model. While these results are counterintuitive, one explanation would be the presence of other comorbidities and high CCI scores in patients with arterial hypertension. While there is no evidence of multicollinearity (as previously stated, with low VIF values), these results should still be interpreted with caution, and further analyses are needed (for example, of the medication regimens of patients with hypertension). Of all the recorded comorbidities, a history of congestive heart failure had the most significant effect on survival times in the studied cohort, both in univariate and multivariate analysis. While COVID-19 myocardial injury, which was reported in several other studies, 49,50 could be a potential culprit that worsened a preexisting cardiac condition, myocardial biopsies were not performed to confirm or exclude myocardial injury caused by SARS-CoV-2 infection, due to organizational difficulties caused by the increased influx of patients and the lack of specific therapy to treat myocarditis.
In terms of biomarkers of inflammation, ferritin, with an HR of 1.03 per each 0.1 mg/L increase, and IL-6, with an HR of 1.11 per each 0.1 mcg/L increase, shortened ICU survival times, while increases in CRP showed a reduction of HR to 0.74 per each 100 mg/L increase (in contrast to a univariate OR of 1.28), which can be linked to the increased survival times of patients with bacterial superinfections (patients without superinfections had an HR of 2.31 compared to those with bacterial superinfections). These results are in partial agreement with previously published data, 47,51 where CRP levels at admission were a more significant factor in determining survival rates. In the studied population, the CRP cutoff value with the highest ability to discriminate between survivors and non-survivors was similar to the level reported by Liu et al. (41.3 vs 41.8 mg/L), but the area under the receiver operating curve was much lower (0.58 vs 0.86), limiting its usefulness.

Results of this study show certain idiosyncrasies of the Croatian healthcare system and the culture of health itself. Obesity and arterial hypertension, which have been linked to a more severe course of illness, are very common in the Croatian population, especially in males, 2 and arterial hypertension (a well-established factor affecting COVID-19 mortality rates 5,6) was present in 71.1% of critically ill COVID-19 patients in our hospital, as was increased BMI (another factor linked to increased mortality 40,41). Compared to other countries in the European Union, Croatia has the highest share of overweight population, 52 which may explain one of the highest COVID-19 hospital admission rates in the EU 53 as well as the high ICU mortality rates found in the analyzed cohort. Another factor affecting survival rates is the fact that UH Dubrava was re-purposed to become a COVID-19-only hospital, which reduced horizontal virus transmission in other hospitals in north-western Croatia (and in other healthcare facilities such as palliative facilities) but added additional workload (the number of ICU beds was nearly doubled compared to pre-pandemic), which was partially alleviated with personnel from other hospitals in Zagreb, some of whom had never worked in an ICU before the start of the pandemic. One specific event that overburdened the ICU capacity of UH Dubrava was the earthquake in Sisak-Moslavina county on December 28, 2020, in which the county hospital was severely damaged and all the COVID-positive patients from that hospital (some of whom were admitted with multi-drug-resistant strains such as Acinetobacter baumannii) were admitted during a 24-hour period, which introduced another burden on our hospital, which was already functioning at near full capacity.

There were certain limitations to this study. First, because of ICU bed allocation and the formation of specialized "semi-intensive" wards for the treatment of patients receiving HFNO (who would normally have been treated in the ICU before the pandemic), only patients with the most severe clinical presentation were admitted to the ICU (a fact that is evident when comparing baseline PaO2/FiO2 ratios in this cohort to those in other studies). Also, since there were many admissions from other institutions, with many patients admitted from palliative care facilities (to reduce viral spread among these most vulnerable patients) who would not normally be admitted to ICUs due to low life expectancy, mortality rates were higher than those reported in other studies.
Since a large proportion of patients were re-transferred to other, non-COVID ICUs in other hospitals after two successive negative PCR tests, longer-term follow-up was not performed. One other significant limitation is the fact that the therapeutic regimen (corticosteroids, anticoagulation and anti-aggregation therapy, antiviral, and immunomodulatory drugs) was not recorded electronically but on paper charts, which, due to COVID containment measures, were sealed after patient discharge and therefore could not be included in the analysis.

Conclusion

In the studied cohort, which included critically ill patients during the first two waves of the COVID-19 pandemic treated in a tertiary institution in continental Croatia, survivors had significantly lower age, number of comorbidities, CCI, SOFA score, WBC and neutrophil counts, as well as lower serum ferritin, C-reactive protein, D-dimer, procalcitonin, IL-6 and lactate levels at ICU admission. After multivariate adjustment, SOFA score (especially its respiratory component) and the need for initiation of invasive mechanical ventilation were the most important predictive factors of ICU mortality.
Collapse of attractive Bose-Einstein condensed vortex states in a cylindrical trap Quantized vortex states of weakly interacting Bose-Einstein condensate of atoms with attractive interatomic interaction in an axially symmetric harmonic oscillator trap are investigated using the numerical solution of the time-dependent Gross-Pitaevskii (GP) equation obtained by the semi-implicit Crank-Nicholson method. Collapse of the condensate is studied in the presence of deformed traps with a larger frequency along the radial as well as along the axial directions. The critical number of atoms for collapse is calculated as a function of vortex quantum $L$. The critical number increases with angular momentum $L$ of the vortex state but tends to saturate for large $L$. I. INTRODUCTION Recent experiments [1,2] of Bose-Einstein condensates (BEC) in dilute bosonic atoms employing magnetic traps at ultra-low temperatures have intensified theoretical investigations on various aspects of the condensate [3][4][5][6][7]. The properties of the condensate are usually described by the nonlinear mean-field Gross-Pitaevskii (GP) equation [8], which properly incorporates the trap potential as well as the interaction among the atoms. Two interesting features of BEC are (a) the collapse in the case of attractive atomic interaction [2,7] and (b) the possibility of the formation of a vortex state in harmonic traps with cylindrical [9][10][11][12][13] as well as spherical [14] symmetry. For attractive interatomic interaction [2,7], the condensate is stable for a critical maximum number of atoms. When the number of atoms increases beyond this critical value, due to interatomic attraction the radius of BEC tends to zero and the maximum density of the condensate tends to infinity. Consequently, the condensate collapses emitting atoms until the number of atoms is reduced below the critical number and a stable configuration is reached. With a supply of atoms from an external source the condensate can grow again and thus a series of collapses can take place, which was observed experimentally in the BEC of 7 Li with attractive interaction [2]. Theoretical analyses based on the GP equation also confirm the collapse [7]. The study of superfluid properties of BEC is of great interest to both theoreticians [9][10][11][12][13][14][15][16][17] and experimentalists [18,19]. Quantized vortex state in BEC is intimately connected to the existence of superfluidity. Such quantized vortices are expected in superfluid He II . However, due to strong interaction between the helium atoms there is no reliable mean-field description. On the other hand, a weakly interacting trapped BEC is well-described by the mean-field GP equation which is known to admit vortex solutions for a trap with cylindrical symmetry [9,16], that can be studied numerically. This allows for a controlled theoretical study of quantized vortices in BEC in contrast to superfluid He II . Many different techniques for creating vortex states in BEC have been suggested [12], e.g., stirring the BEC by an external laser at a rate exceeding a critical angular velocity to create a singly quantized vortex line along the axis of rotation [11], spontaneous vortex formation in evaporative cooling [20], controlled excitation to an excited state of atoms [21], and rotation of an axially symmetric trap [22]. 
Moreover, quantized vortex states in BEC have been observed experimentally in coupled BEC comprising of two spin states of 87 Rb in spherical trap, where angular momentum is generated by a controlled excitation of the atoms between the two states [19]. Vortices have also been detected in a single-state BEC of 87 Rb in cylindrical trap, where angular momentum is generated by a stirring laser beam [18]. After the possibility of continuously changing the interaction between cold 85 Rb atoms by a magnetic-field-induced Feshbach resonance [23,24], one could experimentally form vortex states in repulsive condensates and study their collapse after transforming them to attractive condensates by such a resonance. Because of the intrinsic interest in BEC of vortex states in axially symmetric traps, in this work the formation of such a BEC is studied using the numerical solution of the time-dependent GP equation with special attention to its collapse for attractive interatomic interaction. In general, a vortex line in a nonrotating trapped BEC is expected to be nonstationary. However, it is possible to have dynamically stable vortex BEC states in a nonrotating trap with low quanta of rotational excitation or angular momentum L per particle [9,15,16,22]. Vortex BEC states for large repulsive condensates with high quanta of rotational excitation are expected to be unstable and decay to vortices with low quanta [11,12,14,17]. In the absence of vortex, the stable condensate in an axially symmetric trap has a cylindrical shape. Such a BEC has the largest density on the axis of the trap. For purely attractive interaction, with the increase of the number of atoms the central density of this condensate increases rapidly leading to instability and collapse [7]. In the presence of vortex motion the region of largest density of the BEC with nonzero L is pushed away from the central axial region and the atoms have more space to stabilize. The vortex state of the condensate in a cylindrical trap has the shape of a hollow cylinder with zero density on the axis of symmetry. Because of larger espacial extension of such a condensate, it can accommodate a larger critical number of atoms before the density increases too high to lead to collapse [9]. The higher the angular momentum L in a BEC, the larger is the critical number of atoms. However, the increase of this critical number with L slows down as L increases. The present study is performed with the direct numerical solution of the time-dependent GP equation with an axially symmetric trap. In the time-evolution of the GP equation the radial and axial variables are dealt with in two independent steps. In each step the GP equation is solved by discretization with the Crank-Nicholson rule complimented by the known boundary conditions [25]. We find that this time-dependent approach leads to good convergence. There are several other iterative approaches to the numerical solution of the time-dependent and time-independent GP equation for axially symmetric [4,9,10,26,27] as well as spherically symmetric [3] traps. Of the time-dependent methods, the approach of Refs. [10] uses alternative iterations in radial and axial directions as in this study, whereas Ref. [26] does not give the details of the numerical method employed and Ref. [27] employs a completely different scheme, e.g., uses alternative iterations for the real and imaginary parts of the GP equation. However, Refs. [9,10] do not provide enough details of the numerical scheme. 
Because of these a meaningful comparison of the present method with those of Refs. [9,10,26,27] is not possible. In Sec. II we describe the time-dependent form of the GP equation including the vortex states for attractive interaction. In Sec. III we describe the numerical method for solving the time-dependent GP equation in some detail. In Sec. IV we report the numerical results for the collapse of the BEC with vortex quantum for attractive interaction and finally, in Sec. V we give a summary of our investigation. II. NONLINEAR GROSS-PITAEVSKII EQUATION At zero temperature, the time-dependent Bose-Einstein condensate wave function Ψ(r, τ ) at position r and time τ may be described by the self-consistent meanfield nonlinear GP equation [8]. In the presence of a magnetic trap of cylindrical symmetry this equation is written as Here m is the mass of a single bosonic atom, N the number of atoms in the condensate, V (r) the attractive harmonic-oscillator trap potential with cylindrical symmetry, g = 4πh 2 a/m the strength of interatomic interaction, with a the atomic scattering length. A positive a corresponds to a repulsive interaction and a negative a to an attractive interaction. The normalization condition of the wave function is dr|Ψ(r, t)| 2 = 1. (2. 2) The trap potential with cylindrical symmetry may be written as V (r) = 1 2 mω 2 (r 2 + λ 2 z 2 ) where ω is the angular frequency of the potential in the radial direction r and λ is the ratio of the axial to radial frequencies. We are using the cylindrical coordinate system r ≡ (r, θ, z) and in case of cylindrical symmetry the wave function is taken to be independent of θ in the absence of vortex states of the condensate: Ψ(r, τ ) = ψ(r, z, τ ). ( 2.3) The GP equation with a cylindrically symmetric trap can easily accommodate quantized vortex states with rotational motion of the condensate around the z axis without any added complication. In such a vortex the atoms flow with tangential velocity Lh/(mr) such that each atom has quantized angular momentum Lh along z axis. This corresponds to an angular dependence of Ψ(r, τ ) = ψ(r, z, τ ) exp(iLθ) (2.4) of the wave function, where exp(iLθ) are the circular harmonics in two dimensions. Equation (2.3) is the zero angular momentum version of (2.4). Substituting Eq. (2.4) into Eq. (2.1), one obtains the following GP equation in partial-wave form with quantized angular momentum L along the z axis +gN |ψ(r, z, τ )| 2 − ih ∂ ∂τ ψ(r, z, τ ) = 0, (2.5) with L = 0, 1, 2, ... The nonzero values of L corresponds to vortex states. The L 2 /r 2 term in Eq. (2.5) is the vortex contribution to the Hamiltonian of the GP equation. This is also the centrifugal barrier term in the partial-wave linear Schrödinger equation. The limitation to cylindrical symmetry reduces the GP equation in three space dimensions to a two-dimensional partial differential equation. We shall study numerically this equation in this paper to understand the effect of the L 2 /r 2 term to collapse in the case of attractive atomic interaction. It is convenient to use dimensionless variables defined by x = √ 2r/l, y = √ 2z/l, t = τ ω, and where l ≡ h/(mω). Although φ(x, y, t) is the dimensionless wave function, for calculational purpose we shall be using ϕ(x, y, t) in the following. In terms of these variables Eq. (2.5) becomes where n = N a/l. A reduced number of particles is defined as |n|. 
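In the notation defined in the paragraphs above (mass m, particle number N, trap V, interaction strength g = 4πℏ²a/m, and the circular harmonic exp(iLθ)), the mean-field equation and its cylindrical partial-wave form being described take the standard shapes sketched below; this is offered for readability and is not a verbatim transcription of the paper's Eqs. (2.1) and (2.5).

```latex
% Sketch of the GP equation and its partial-wave form with angular momentum L,
% assembled from the quantities defined in the surrounding text.
\[
  \Bigl[-\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf r)
        + g N\,|\Psi(\mathbf r,\tau)|^{2}
        - i\hbar\frac{\partial}{\partial\tau}\Bigr]\Psi(\mathbf r,\tau)=0,
  \qquad
  V(\mathbf r)=\tfrac12\, m\omega^{2}\bigl(r^{2}+\lambda^{2}z^{2}\bigr),
\]
\[
  \Bigl[-\frac{\hbar^{2}}{2m}\Bigl(\frac{\partial^{2}}{\partial r^{2}}
        +\frac{1}{r}\frac{\partial}{\partial r}
        -\frac{L^{2}}{r^{2}}
        +\frac{\partial^{2}}{\partial z^{2}}\Bigr)
        + V(r,z) + g N\,|\psi(r,z,\tau)|^{2}
        - i\hbar\frac{\partial}{\partial\tau}\Bigr]\psi(r,z,\tau)=0 .
\]
```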
The normalization condition (2.2) of the wave function become However, physically it could be more interesting to define the reduced number of particles in terms of a geometrically averaged frequency ω 0 = λ 1/3 ω and a length l 0 = h/mω 0 , so that a new reduced number k(λ) is defined via [26] k We shall study this number in the present paper. For a stationary solution the time dependence of the wave function is given by ϕ(x, y, t) = exp(−iµt)ϕ(x, y) where µ is the chemical potential of the condensate in units ofhω. If we use this form of the wave function in Eq. (2.7), we obtain the following stationary nonlinear time-independent GP equation [8]: Equation (2.10) is the stationary version of the timedependent Eq. (2.7). However, Eq. (2.7) is equally useful for obtaining a stationary solution with trivial time dependence as well as for studying evolution processes with explicit time dependence and we shall be directly solving Eq. (2.7) numerically in this paper. Two interesting properties of the condensate wave function are the mean-square sizes in the radial and axial directions defined, respectively, by and (2.12) III. NUMERICAL METHOD To solve the time-independent GP equation we need the boundary conditions of the wave function as x → 0 and ∞ and |y| → ∞. For a confined condensate, for a sufficiently large x and |y|, ϕ(x, y) must vanish asymptotically. Hence the cubic nonlinear term can eventually be neglected in the GP equation for large x and |y| and Eq. (2.10) becomes This is the equation for the free oscillator with cylindrical symmetry in partial-wave form. The wave function for a general state of this oscillator and the corresponding energy are given, respectively, by [28] ϕ and with L = 0, ±1, ±2, ..., n x = 0, 2, 4, ..., and n y = 0, 1, 2, ... Here H ny is the usual Hermite polynomial, and F |L|,nx is another polynomial defined recursively [28,29], and N is the normalization. The first few of these polynomials are: [29]. In this paper we shall be interested in angular momentum (vortex) excitation, opposed to radial excitation via n x or axial excitation via n y , of the following normalized ground state wave function for n x = n y = 0 is a good starting point for a iterative method for solving the time-dependent GP equation (2.7) for small values of nonlinearity n as in this paper. Alternatively, to solve the GP equation for large nonlinearity n, one may start with the Thomas-Fermi approximation for the wave function obtained by setting all the derivatives in the GP equation to zero [6], which is a good approximation for large nonlinearity. Next we consider Eq. (2.7) as x → 0. The nonlinear term approaches a constant in this limit because of the regularity of the wave function at x = 0. Then one has the following condition ϕ(0, y) = 0, (3.6) as in the case of the harmonic oscillator wave function (3.4). Both the small-and large-x behaviors of the wave function are necessary for a numerical solution of the time-dependent GP equation (2.7). The large-x and large-|y| behaviors of the wave function are given by Eq. (3.4), e.g., A convenient way to solve Eq. (2.7) numerically is to discretize it in both space and time and reduce it to a set of algebraic equations which could then be solved by using the known asymptotic boundary conditions. The method of solution using one space derivative is well under control [3,25]. The GP equation (2.7) can be written formally as where H is the time-independent quantity in the square brackets of Eq. (2.7). 
The integration in time is effected via the following semi-implicit Crank-Nicholson algorithm [25] (3.10) where ∆ is the constant time step used to calculate the time derivative, ϕ n is the discretized wave function at time t n = n∆, and where the space variables x and y are suppressed. The derivatives in the operator H are discretized by the finite difference scheme [25]. The formal solution to Eq. (3.10) is given by 3.11) so that if ϕ n is known at time t n one can find ϕ n+1 at the next time step t n+1 . This procedure is used to solve the GP equation involving one space variable [3]. In that case after proper discretization in space using a finite difference scheme Eq. (3.11) becomes a tridiagonal set of equations in discrete space observables at time t n+1 which is solved by Gaussian elimination method and back substitution [25] using the known boundary conditions (3.6), (3.7), and (3.8). Unfortunately, a similar straightforward discretization of Eq. (2.7) in two space observables using a finite difference scheme in this case does not lead to a tridiagonal set of equation but rather to a unmanageable set of equations [25]. To circumvent this problem the full H operator in this case is conveniently broken up into radial and axial components H x and H y , respectively, where H x contains the terms dependent on x and H y the terms dependent on y with the nonlinear term 8 √ 2πn|ϕ(x, y)/x| 2 involving both x and y contributing equally to both. Specifically, we take with H = H x + H y . However, the numerical result of the present scheme is independent of a specific breakup. The procedure is then to define the unknown wave function on a two-dimensional mesh in the x − y plane. The time evolution is then performed in two steps. First the time evolution is effected using the operator H x setting H y = 0 along lines of constant y with i∂ϕ/∂t = H x ϕ. Next the time evolution is effected using the operator H y setting H x = 0 along lines of constant x with i∂ϕ/∂t = H y ϕ. This procedure is repeated alternatively. This scheme is conveniently represented in terms of an auxiliary function ϕ n+ 1 2 by so that where n = 0, 1, 2, ... denotes the number of iterations. For a small time step ∆, if we neglect terms quadratic in ∆, Eq. (3.15) is equivalent to (3.11). Hence for numerical purpose we have been able to reduce the GP equation in two space dimensions, x and y, into a series of GP equations in one space variable, either x or y. The GP equations in one space variable can be dealt with numerically in a standard fashion using Crank-Nicholson discretization and subsequent solution by the Gaussian elimination method. This scheme is stable independent of time step employed. The time-dependent GP equation (2.7) is solved by time iteration by mapping the solution on a twodimensional grid of points N x × N y in x and y. First Eq. (2.7) with H x is discretized using the following finite difference scheme along the x direction within the semi-implicit Crank-Nicholson rule [31]: i(ϕ n+1 j,p − ϕ n j,p ) ∆ = − 1 2h 2 (ϕ n+1 j+1,p − 2ϕ n+1 j,p + ϕ n+1 j−1,p ) +(ϕ n j+1,p − 2ϕ n j,p + ϕ n j−1,p ) (3.16) where the discretized wave function ϕ n j,p ≡ ϕ(x j , y p , t n ) refers to a fixed y = y p = ph, p = 1, 2, ..., N y at different x = x j = jh, j = 1, 2, ..., N x , and h is the space step. 
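The semi-implicit Crank-Nicholson update and the two-step splitting described above can be written compactly as follows. This is a sketch in the reduced units used here (i ∂ϕ/∂t = Hϕ with H = H_x + H_y); the form is the standard one implied by the surrounding text rather than a verbatim transcription of Eqs. (3.10), (3.11), (3.14) and (3.15).

```latex
% Sketch of the Crank-Nicholson step and the alternating-direction splitting.
\[
  i\,\frac{\phi^{n+1}-\phi^{n}}{\Delta}
     = \tfrac12\,H\bigl(\phi^{n+1}+\phi^{n}\bigr)
  \;\;\Longrightarrow\;\;
  \phi^{n+1}
     = \Bigl(1+\tfrac{i\Delta}{2}H\Bigr)^{-1}
       \Bigl(1-\tfrac{i\Delta}{2}H\Bigr)\,\phi^{n},
\]
\[
  \phi^{\,n+\frac12}
     = \Bigl(1+\tfrac{i\Delta}{2}H_{x}\Bigr)^{-1}
       \Bigl(1-\tfrac{i\Delta}{2}H_{x}\Bigr)\,\phi^{n},
  \qquad
  \phi^{\,n+1}
     = \Bigl(1+\tfrac{i\Delta}{2}H_{y}\Bigr)^{-1}
       \Bigl(1-\tfrac{i\Delta}{2}H_{y}\Bigr)\,\phi^{\,n+\frac12}.
\]
```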
This procedure results in a series of tridiagonal sets of equations (3.16) in ϕ n+1 j+1,p , ϕ n+1 j,p , and ϕ n+1 j−1,p at time t n+1 for each y p , which are solved by Gaussian elimination and back substitution [25] starting with the initial harmonic oscillator solution (3.4) at t 0 = 0 and n = 0. Then Eq. (2.7) with H y is discretized using the following finite difference scheme along the y direction: (3.17) where now ϕ n j,p refers to a fixed x j = jh for all y p = ph. Using the solution obtained after x iteration as input, the discretized tridiagonal equations (3.17) along the y direction for constant x are solved similarly. This twostep procedure corresponds to a full iteration of the GP equation and the resultant solution corresponds to time t 1 = ∆, and n = 1. This scheme is repeated for about 500 times to yield the final solution of the GP equation. The normalization condition (2.8) is preserved during time iteration due to the unitarity of the time-evolution operator. However, it is convenient to reenforce it numerically after each iteration in order to maintain a high level of precision. Also, the solution at each time step will satisfy the boundary conditions (3.6), (3.7), and (3.8). At each iteration the strength of the non-linear term is increased by a small amount so that after about 500 time iterations the full strength is attained and the required solution of the GP equation obtained. The solution so obtained is iterated several times (between 20 to 50 times) until an equilibrated final result is obtained. This solution is the ground state of the condensate corresponding to the specific nonlinear constant k and L. We found the convergence of the two-step iteration scheme to be fast for small |n|. However, the final convergence of the scheme breaks down if |n| is too large. For an attractive interaction there is no such problem as the GP equation does not sustain a large nonlinearity |n|. Typical values of the parameters used in this paper for discretization along x and y directions are N x = 400, N y = 800, respectively, with x max = 8, |y| max = 8, and ∆ = 0.05 for λ > 0.5. For smaller λ(< 0.5) the wave function extends further along the y axis and larger |y max | and N y = 800 are to be employed for obtaining converged result. The above choice of parameters corresponds to a typical space step of h = 0.02 along both radial and axial directions. These parameters were obtained after some experimentation and are found to lead to good convergence. As the time dependence of the stationary states is trivial − ϕ(x, y, t) = ϕ(x, y) exp(−iµt) − the chemical potential µ can be obtained from the propagation of the converged ground-state solution at two times, e.g., ϕ(x, y, t n ) and ϕ(x, y, t n+n ′ ). From the numerically obtained ratio ϕ(x, y, t n )/ϕ(x, y, t n+n ′ ) = exp(iµn ′ ∆), µ can be obtained as the time step ∆ is known. In the calculation of µ an average over relatively large values of n ′ leads to stable result. IV. NUMERICAL RESULT Using the numerical method described in Sec. III we present results for the numerical solution of the timedependent GP equation in the following for attractive interatomic interaction with special attention to the collapse of the condensate. To assure that we are on the correct track, using the present program first we solved the GP equation for the spherically symmetric case with λ = 1 and L = 0, and compare with the calculation of Ref. [30]. 
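The outer iteration strategy just described (alternating x and y sweeps, re-enforcing the normalization after each full step, switching the nonlinearity on gradually over roughly 500 steps, and reading the chemical potential off the trivial phase evolution of the converged state) can be summarised in the following Python sketch. The functions sweep_x and sweep_y stand for the one-dimensional Crank-Nicholson solvers and are hypothetical placeholders, and the flat Cartesian normalization used here is a simplification of the measure in the normalization condition of the paper.

```python
import numpy as np

def evolve(phi, sweep_x, sweep_y, n_target, n_ramp=500, n_settle=30,
           dt=0.05, dx=0.02, dy=0.02):
    """Two-step time iteration: an H_x sweep along lines of constant y,
    then an H_y sweep along lines of constant x, with the nonlinear
    strength ramped from 0 to n_target over n_ramp iterations and the
    wave function renormalized after each full step."""
    n_now = 0.0
    for it in range(n_ramp + n_settle):
        if it < n_ramp:                          # gradual switch-on
            n_now = n_target * (it + 1) / n_ramp
        phi = sweep_x(phi, n_now, dt)            # H_x step, y held fixed
        phi = sweep_y(phi, n_now, dt)            # H_y step, x held fixed
        norm = np.sqrt(np.sum(np.abs(phi) ** 2) * dx * dy)
        phi = phi / norm                         # re-enforce normalization
    return phi

def chemical_potential(phi_t1, phi_t2, n_steps, dt=0.05):
    """Extract mu from the phase accumulated by a stationary state,
    phi(t) = phi exp(-i mu t): the ratio at two times separated by
    n_steps * dt equals exp(i mu n_steps dt).  The phase is averaged
    over grid points with non-negligible amplitude."""
    mask = np.abs(phi_t2) > 1e-8
    ratio = phi_t1[mask] / phi_t2[mask]
    return np.angle(ratio).mean() / (n_steps * dt)
```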
As an additional check we also solved the GP equation in two space dimensions with λ = 0 and without the d 2 /dy 2 term in Eq. (2.7) and compare with the calculation of Ref. [31]. In both cases the present calculation agrees with these previous ones. Before describing the results for nonzero L first we compare the present results for L = 0 with those of Ref. [26] for a cylindrically symmetric trap. For the spherically symmetric case λ = 1, and the critical number k c (λ) of Eq. (2.9) for collapse is found to be 0.575 in agreement with Refs. [6,26,30]. In a recent experiment using λ = 0.3919, the critical reduced number for collapse for an attractive condensate of 85 Rb atoms formed using a Feshbach resonance was found to be k c = 0.459 ± 0.012 ± 0.054 [32]. In their calculation Gammal et al [26] obtained k c = 0.550 for λ = 0.3919. In the present calculation we obtain k c = 0.553 in excellent agreement with Ref. [26] using an entirely different numerical routine. However, the disagreement with the experimental result [32] remains. We also calculated the critical number k c (λ) for some other values of λ. For λ = 5, 2, 0.2 we obtain k c = 0.50, 0.56, and 0.52, respectively, compared to 0.498, 0.561, and 0.509 obtained in Ref. [26]. The small difference between the results of the two calculations seems to be a consequence of numerical error. Also, as in Ref. [26] we note that for λ not so different from unity (5 > λ > 0.2) the critical reduced number for collapse k c (λ) satisfies k c (λ) ≈ k c (1/λ), and attains a maximum at λ = 1 corresponding to the spherically symmetric situation. However, this symmetry is broken for large values of λ, e.g., for λ > 5 while we have k c (λ) < k c (1/λ). Moreover, we find in the following that this symmetry is also broken for nonzero L, where, however, for λ > 1 k c (λ) > k c (1/λ). Next we comment on the discrepancy between the experimental critical number of atoms for collapse for an attractive BEC of 85 Rb atoms formed using a Feshbach resonance [32] on one hand and the theoretical results of Ref. [26] and the present calculation on the other hand. In view of the success of the mean-field GP equation to explain many stationary results and time-evolution phenomena of the attractive BEC of 7 Li atoms with an almost spherical trap [2,7], it seems that this description is perfectly appropriate for attractive condensates. Hence, we do not believe that a relatively small deviation from spherical symmetry as in the experiment of Ref. [32] would invalidate the applicability of the GP equation to an attractive condensate. Whether the inclusion of higher order interaction terms in the mean-field GP equation could account for the observed data [26] yet remains to be established. To resolve the discrepancy we advocate further experimental study of collapse for attractive condensates after changing the trap symmetry (λ). After the above preliminary comparative study, we present results for the numerical solution of the GP equation (2.7) for nonzero L = 0, 1, 2, .., 8 and λ = √ 8 and 1/ √ 8 for different k(λ). We recall that λ = √ 8 corresponds to the experiment of Ensher et al. [1] for the BEC of 87 Rb atoms. These two possibilities of λ correspond to axial compression (λ > 1) and elongation (λ < 1) of the condensate, respectively. For each L we increase k from 0 and calculate the chemical potential µ. 
With the increase of k the wave function becomes more and more localized in space and beyond a certain value of k the density at the peak of the wave-function diverges and no stable normalizable solution of the GP equation with a well defined µ can be obtained. In Fig. 1 we plot µ vs. k(λ) for λ = √ 8 and 1/ √ 8 for different L. We also exhibit the result for the spherically symmetric case λ = 1 (L = 0) for comparison. The curves are plotted for all allowed values of k for the ground state in each case. The curves go up to a maximum critical value k c of k which defines the critical number N c of atoms in that particular case via k c = N c |a|/l 0 . We find that (i) k c for a particular λ increases with L and that (ii) k c for a particular nonzero L increases as λ increases from 1/ √ 8 to √ 8, which demonstrates the breakdown of the numerically noted symmetry k c (λ) ≈ k c (1/λ) for L = 0. To demonstrate these two effects in an explicit fashion we plot in Fig. 2 k c vs. L for λ = √ 8, 1, and 1/ √ 8. The three curves intersect approximately at L = 0 which demonstrates that k c (λ = √ 8) ≈ k c (λ = 1/ √ 8) < k c (λ = 1) for L = 0, with k c (λ = √ 8) = 0.54, k c (λ = 1/ √ 8) = 0.55, and k c (λ = 1) = 0.575. However, this symmetry is broken for nonzero L while k c (λ = √ 8) > k c (λ = 1) > k c (λ = 1/ √ 8). The critical number k c (λ) increases with L for all λ, and we see from Fig. 2 that this rate of increase slows down as L increases. In Figs. 3 and 4 we plot the wave function |φ(x, y)| ≡ |ϕ(x, y)/x| in dimensionless variables of Eq. (2.6). In Figs. 3 (a) − (c) we show the wave function for λ = 1/ √ 8 and L = 0, 2 and 4, respectively, where the parameter k is chosen to be very close to the critical value k c for collapse. The nature of the wave function is qualitatively different for zero and nonzero L. For L = 0 the wave function is peaked on the y axis; whereas for nonzero L it is zero on the y axis and is peaked at some finite x. In all cases the peak is sharp and the density of atoms is very large on the peak. The BEC collapses with a slight increase in the parameter k. For smaller k the wave function has a much broader maximum. When k approaches k c a sharp maximum of the wave function appears very rapidly. To illustrate this in Fig. 3 (d) we plot the L = 2 wave function for k = 2.5. If we compare this with the wave function of Fig. 3 (b) for L = 2 and k = 2.58 ≈ k c , the change in the shape is explicit. In Figs. 4 (a) − (e) we plot the wave function for λ = √ 8 and for L = 0, 2, 4, 6, and 8, respectively, for k ≈ k c . If we compare Figs. 3 with 4 for same L we find that for λ = 1/ √ 8 the wave functions extend over a larger region along the y axis compared to those for λ = √ 8. This is apparent if we compare Fig. 3 (a) with 4 (a), and is expected as λ = √ 8 corresponds to a stronger harmonic oscillator potential in the y direction responsible for axial compression. From Figs. 3 and 4 we find that for both λ, the peak in the wave function moves further away from the y axis as L increases. To understand some aspects of the variation of k c with L and λ exhibited in Fig. 2, we plot in Figs. 5 (a) and (b) the mean-square sizes x 2 and y 2 vs. k for different L and for λ = 1/ √ 8 and √ 8, respectively. The results for vortex states (L > 0) in the spherically symmetric case with λ = 1, remain between the those for λ = 1/ √ 8 and √ 8 and are not explicitly shown here. 
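The reduced critical number can be translated into an actual critical number of atoms through the relation k_c = N_c|a|/l_0 quoted above. The short sketch below performs this conversion, assuming the usual oscillator length l_0 = (ℏ/mω_0)^{1/2} built from the geometrically averaged frequency ω_0 = λ^{1/3}ω; the trap frequency, scattering length and atomic mass in the example are illustrative placeholders and not the parameters of any cited experiment.

```python
import numpy as np

hbar = 1.054571817e-34          # J s
amu = 1.66053906660e-27         # kg

def critical_atom_number(k_c, a_scatt, nu_radial, lam, mass_amu):
    """Convert the reduced critical number k_c into a critical atom
    number N_c via k_c = N_c |a| / l_0, with l_0 = sqrt(hbar/(m w_0))
    and w_0 = lam**(1/3) * w the geometrically averaged frequency.
    nu_radial is the radial trap frequency in Hz, a_scatt in metres."""
    m = mass_amu * amu
    w = 2.0 * np.pi * nu_radial
    w0 = lam ** (1.0 / 3.0) * w
    l0 = np.sqrt(hbar / (m * w0))
    return k_c * l0 / abs(a_scatt)

# Purely illustrative inputs: an 85Rb condensate with |a| = 1 nm in a
# 100 Hz radial trap with lam = 0.39 (hypothetical numbers).
print(critical_atom_number(k_c=0.55, a_scatt=-1.0e-9,
                           nu_radial=100.0, lam=0.39, mass_amu=85.0))
```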
For nonzero L the system acquires a positive rotational energy L 2 /x 2 which allows it to move away from the axial direction y. For L = 0 the region of highest density is the y axis. For L = 0 the density is zero on the y axis and has a maximum at some finite x. Consequently, the condensate has the shape of a hollow cylinder. Because of vortex motion the condensate swells and has more space to stabilize. Hence for L > 0 the density does not go to an unbearable level with the same number of atoms as for L = 0, and k c increases with L for all λ. However, for all L and λ with the increase of nonlinearity k (or n) in the GP equation (2.7), the attractive nonlinear interaction term takes control and eventually the mean-square sizes ( x 2 and y 2 ) reduce as can be seen from Figs. 5. This eventual shrinking in size with the increase of the number of atoms for all L and λ together with the outward push due to vortex motion for nonzero L takes the density of the BEC at the maximum of the wave function to an unbearably high level at some critical value k c of k leading to collapse. Although, for a fixed λ, k c increases with L, the rate of increase slows down for large L. As k (or n) increases sufficiently for large L (> 8), the nonlinear term containing n becomes the deciding factor in the GP equation and the L 2 /r 2 term starts to play a secondary role. Consequently, the increase in the critical number k c with L slows down as L increases and the number k c tends to saturate as can be seen clearly in Fig. 2. In all cases (λ = √ 8, 1 and 1/ √ 8) this tendency of saturation is visible beyond L = 4. V. SUMMARY In this paper we present a numerical study of the timedependent Gross-Pitaevskii equation under the action of a harmonic oscillator trap with cylindrical symmetry with attractive interparticle interaction to obtain insight into the collapse of vortex states of BEC. The timedependent GP equation is solved iteratively by discretization using a two-step Crank-Nicholson scheme. We obtain the boundary conditions (3.6), (3.7), and (3.8) of the solution of the dimensionless GP equation (2.7) and use them for its solution. The solution procedure is applicable for both attractive and repulsive atomic interactions as well as for both stationary and time evolution problems. It is expected that numerical difficulty should appear for large nonlinearity or large values of reduced number of particles k and large vortex quantum L. For medium nonlinearity, as in this paper, the accuracy of the time-independent method can be increased by reducing the space step used in discretization. The ground-state wave function for each L is found to be sharply peaked for attractive interatomic interaction with the parameters set close to those for collapse. In the case of an attractive interaction, the mean square sizes x 2 and y 2 decrease as the number of particles in the condensate increases towards the critical number for collapse. Consequently, the density increases rapidly signaling the on-set of collapse beyond a critical reduced number k c . The presence of the quantized vortex states increases the stability of the BEC with attractive interaction. The critical number k c (λ) for L = 0 is largest in the spherically symmetric case λ = 1. For vortex states (L = 0), k c (λ) increases with λ. As the vortex quantum L increases, k c also increases. However, in the present calculation a tendency of saturation in the value of k c is noted with the increase of L. 
As the parameter n or k in the GP equation increases, the nonlinear term starts to play the dominant role in the GP equation compared to the angular momentum term L²/x². Once this happens, the rate of increase of k c with L slows down and it is not unlikely that the critical number attains a limiting maximum value for larger L (> 8) than those considered in this paper. This and other investigations on the collapse of vortex states are welcome in the future. The work is supported in part by the Conselho Nacional de Desenvolvimento Científico e Tecnológico and Fundação de Amparo à Pesquisa do Estado de São Paulo of Brazil.
Recognising stillbirth as a loss of life and not a baby born without life

SUMMARY
⇒ Stillbirths and their families continue to be neglected despite several calls to address preventable stillbirths.
⇒ The dichotomy between stillbirth and neonatal death in the quantification of loss does not comply well with the societal burden of perinatal deaths or with the philosophical accounts of death's individual harm.
⇒ Grief is a natural emotional consequence of attachment and loss, whether the loss of a limb, country, employment, marriage or other crucial relationships. We argue that giving birth to a baby bearing no signs of life is grief unlike any other. Grieving for death must be rebalanced to include stillbirths.
⇒ Recognising stillbirth as a loss of life and not a baby born without life is important for the global child survival initiatives to be effective in reducing preventable stillbirths.

The proposition of the Lancet Commission on the Value of Death is that our relationship with death and dying needs rebalancing because how people die has changed radically over recent generations: death comes later in life for many, dying is often prolonged, and both have moved from a family and community setting to become primarily the domain of health systems. 1 They argue that rebalancing death and dying will depend on changes across death systems, the many interrelated social, cultural, economic, religious and political factors that determine how death, dying and bereavement are understood, experienced and managed. We support this rebalancing of death and dying and suggest a broader scope for it by the inclusion of stillbirths, babies born dead. The incident of 'death' (loss of one's life) impacts the friends and family left behind in addition to the individual who loses his/her own life. We argue that this type of impact is also true for stillbirths because a stillbirth is still a birth. Despite several calls to address preventable stillbirths, the acknowledgement that these babies 'die' and hence are born dead, and that some of them could and should have been born alive, continues to be neglected by health practitioners, policy makers and in health metrics indicators. 2 3

The recent UNICEF-IGME report estimated nearly 2 million stillbirths globally in 2021, defined as fetal death at or after the 28th gestational week but before birth. 3 In comparison, an estimated 2.4 million neonatal deaths occurred globally in 2019, a neonatal death being the death of a newborn (live birth) between birth and the first 28 days of postpartum life. 4 In the Global Burden of Disease Study, the most disability-adjusted life-years (DALYs) per death, approximately 86 DALYs, arise from neonatal death, most of which are early neonatal deaths that occur at birth (intrapartum complications) or within the first 6 days postpartum. Notably, many neonatal deaths result from preterm birth, that is, birth earlier than 37 weeks of gestation. Therefore, in terms of the burden of disease, a baby born alive and prematurely in the 24th gestational week who dies at birth or right after birth is registered as the worst possible tragedy with 86 DALYs. In contrast, the death of a baby in the womb at the 40th week of gestation just before birth (stillbirth) is not assigned any disease burden. 5

Today's majority view among contemporary philosophers is that death is comparatively harmful to the individual who dies, 6 7 and the years of life lost component of the DALY relies on such counterfactual reasoning. 8 9 In this philosophical reasoning, death implies a loss of a future, and generally, death at a young age results in losing a more extensive future than death at an older age. If taken seriously, such a comparative account of the harm of death implies that neonatal death is considered not just death of the neonate but death of 'a future like ours' with all that life has to offer. That is to say, the death of a baby implies the loss of not only the baby itself but also the child and adult person that it could have been had it not died.

However, the dichotomous view that birth itself constitutes the difference between a seemingly morally insignificant event (ie, stillbirth) and the worst tragedy we can think of (ie, neonatal death) neither complies well with the philosophical perspective nor with the empirical literature on the societal burden of perinatal deaths. 6 10 There is also no birth dichotomy in perinatal medicine but rather a set of overlapping pathologies that can occur both before and after birth. The built-in ethical tension of perinatal deaths is also well reflected in the etymology of 'burden' itself, which can mean both 'to bear children' and 'that is borne'. Thus, we believe that our concept of disease burden should ideally reflect not only the harm of perinatal deaths that occur after birth but also those that occur before birth. The babies who are stillborn are real babies, and just because they died before birth does not mean they did not exist. And yet stillbirths are also overlooked in fertility indicators such as the crude birth rate, which is based only on livebirths, 11 and in vital registration systems in many countries. 12

The Lancet Commission describes grief as the natural emotional consequence of attachment and loss, whether the loss of a limb, country, employment, marriage or other crucial relationships, and mourning as the public face of this grief. 1 Similarly, the devastating incomprehension of giving birth to a baby bearing no signs of life is unexplainable. There is no greater pain that a parent can feel than leaving the hospital with empty arms, without the baby, and coming home to a house prepared for a baby that did not make it home. However, the invisibility of stillbirths is apparent even in grief and mourning, as individual feelings of guilt or shame prevent public mourning of their loss. 10 This lack of opportunity to publicly grieve fuels the cycle of stillbirths being considered of less consequence and without merit of grieving, contributing to their invisibility. Furthermore, bereavement refers to losing an important relationship through death and can be associated with many physical and mental health problems. The loss of a baby born dead reaches far beyond the loss of life. The psychological costs, including maternal depression and its impact on fathers, family and siblings, are profound and long-lasting. 10 13 During the COVID-19 pandemic, the world saw people dying alone and families unable to say goodbye and being prevented from coming together in grief. 14 This has long been the case for many stillborn babies, as they are not given proper burial or goodbyes. 13 The birth of a dead baby impacts families, and the greatest impact is on the mother. She enters the hospital pregnant but leaves with a box or empty arms.
With women traditionally viewed as caregivers at times of ill health and dying, it is estimated that women contribute almost 5% of the global gross domestic product through health caring. 15 However, caregiving support is not always available to the mother of a baby born dead, who feels undervalued and unsupported having given birth to a baby born dead. 16 If current trends continue, an additional 20 million stillbirths are estimated to occur before 2030, placing an immense burden on women, families and society. 3 Therefore, there are reasons to argue that death's harm to others implies that there should be no prebirth and postbirth dichotomy for either quantifying the disease burden or being able to grieve and be supported. The world suffered an estimated 48 million stillbirths in the past two decades. The health community recognises the urgent need to prevent stillbirths, and stillbirth prevention has become an essential part of global child survival initiatives. 3 The UN-IGME report has highlighted urgent actions to prevent an estimated 20 million more stillbirths by 2030. 3 Importantly, this death toll could likely be higher because of the impact of COVID-19. 17 The Lancet Commission emphasises that grieving must be rebalanced and calls on the society to respond to this challenge. 1 We respectfully extend this challenge and call on society to embrace stillbirths as the death of a baby, many of whom should have been born alive, which is essential not only for the global child survival initiatives to be effective in preventing further loss of lives but also for providing support for those grieving the loss of lives of their babies. In conclusion, real progress in stillbirth prevention can be made by simply recognising stillbirth as a loss of life and not a baby born without life. There is still a pregnancy, still a baby, still a mother, still a father-a stillbirth is still a birth. Let's grieve for a whole life lost. Contributors RD and CTS contributed equally to this work. Funding This work was supported by the Bill & Melinda Gates Foundation. Disclaimer The funder had no role in the writing of the report or in the decision to submit the paper for publication. Competing interests RD is on the Board of the International Stillbirth Alliance. Patient consent for publication Not applicable. Provenance and peer review Not commissioned; externally peer reviewed. Data availability statement No data are available. Open access This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/ licenses/by/4.0/.
Early HIV-1 envelope-specific cytotoxic T lymphocyte responses in vertically infected infants. High frequencies of cytotoxic T lymphocyte precursors (CTLp) recognizing HIV-1 laboratory strain gene products have been detected in adults within weeks of primary infection. In contrast, HIV-1-specific CTLp are uncommonly detected in infants younger than 6 mo. To address the hypothesis that the use of target cells expressing laboratory strain env gene products might limit the detection of HIV-1 env-specific CTLp in early infancy, recombinant vaccinia vectors (vv) expressing HIV-1 env genes from early isolates of four vertically infected infants were generated. The frequencies of CTLp recognizing target cells infected with vv-expressing env gene products from early isolates and HIV-1 IIIB were serially measured using limiting dilution followed by in vitro stimulation with mAb to CD3. In one infant, the detection of early isolate env-specific CTLp preceded the detection of IIIB-specific CTLp. CTLp recognizing HIV-1 IIIB and infant isolate env were detected by 6 mo of age in two infants. In a fourth infant, HIV-1 IIIB env and early isolate env-specific CTLp were simultaneously detected at 12 mo of age. These results provide evidence that young infants can generate HIV-1-specific CTL responses and provide support for the concept of neonatal vaccination to prevent HIV-1 transmission. However, the early predominance of type-specific CTL detected in some young infants suggests that the use of vaccines based on laboratory strains of HIV-1 may not protect against vertical infection. T he vertical transmission of HIV-1 from an infected woman to her infant is the predominant mode of pediatric infection. Prospective studies estimate the rate of vertical transmission to be 14-40% (1). Vertical HIV-1 transmission can occur in utero, during delivery (intrapartum), or after birth through breastfeeding (1). In nonbreastfed populations, 45-70% of vertical infections occur in the intrapartum period. In developing countries, where breastfeeding is necessary and encouraged, up to 75% of infants may be infected at or after delivery. In recent years, the incidence of vertical HIV-1 transmission has sharply increased (2) resulting in an urgent need to develop effective strategies to prevent vertical HIV-1 infection. Although the administration of zidovudine to mother-infant pairs has been shown to reduce significantly the risk of vertical HIV-1 transmission (3), the cost and intensity of the regimen render it impractical for use in developing nations, where most pediatric infections occur. Additionally, perinatal antiretroviral therapy would not prevent the vertical transmission of HIV-1 through breastfeeding beyond the neonatal period. A safe and effective active/ passive vaccine regimen, begun at birth, therefore would be a more attractive strategy. Better understanding of the pathogenesis of vertical HIV-1 infection and the capability of the young human infant to generate HIV-1-specific immune responses is crucial for the development of a vaccine to prevent vertical HIV-1 infection. We and others have previously reported that HIV-1-specific cytotoxic T lymphocyte precursors (CTLp) 1 are uncommonly detected in early infancy (4,5). Viral genotype/ phenotype, early viral load, host genotype, timing of infection, and history of prior infections appear to be important factors in the generation of virus-specific CTL responses. 
Two lines of evidence suggested to us that the use of labstrain vaccinia vectors (vv) to detect HIV-1-specific CTL might underestimate the CTL repertoire in early infancy. First, analysis of vertically transmitted HIV-1 env sequences suggests that limited viral genotypes are transmitted or amplified after infection (6,7). Second, Selin et al. (8) have reported higher frequencies of lymphocytic choriomeningitis (LCMV)-specific CTLp in animals who have experienced prior heterologous viral infections than in immunologically naive animals. Therefore, we hypothesized that type-specific responses might predominate in early infancy and that 1154 Type versus Group-specific CTL in Early Vertical HIV-1 Infection the use of target cells expressing laboratory isolate gene products might limit the detection of HIV-1 env-specific CTLp in early infancy. To address this hypothesis, HIV-1 env genes from early isolates of four vertically infected infants were PCR amplified, cloned, and used to generate recombinant vv. CTLp frequencies recognizing target cells infected with vv-expressing env gene products from early isolates and HIV-1 IIIB were serially measured from early to late infancy using limiting dilution followed by in vitro stimulation with mAb to CD3; split-well analysis allowed the evaluation of crossreactivity of detected CTLp. In one infant, the detection of CTLp recognizing target cells expressing early isolate env preceded the detection of CTLp recognizing target cells expressing IIIB env. These type-specific CTLp detected in early infancy were later replaced by cross-reactive groupspecific CTL. Cross-reactive env -specific CTLp recognizing HIV-1 IIIB and infant isolate env were detected by 6 mo of age in two infants. In a fourth infant, CTLp recognizing target cells infected with HIV-1 IIIB env and early isolate env were simultaneously detected at 12 mo of age. Implications for neonatal HIV-1 vaccine development are discussed. Materials and Methods Patients. Four infants with defined timing of infection and previously characterized CTL responses to HIV-1 IIIB env (4) were chosen for these studies. HIV-1 culture and DNA PCR were positive in blood specimens obtained at birth and in all subsequent specimens from two infants (VI-05 and VI-06), suggesting in utero infection (9). HIV-1 culture and DNA PCR were negative on specimens obtained from two other infants (VI-08 and VI-11) at birth but were positive by 1 mo of age, suggesting late in utero or intrapartum infection. None of the infants were on antiretroviral therapy at the time that isolates were obtained for use in the preparation of vv constructs. HIV-1 IIIB env -specific CTLp were previously detected in blood samples obtained during early infancy from only 1 (VI-06) of the 4 infants (4). HIV-1 CTLp were detected in cord blood and subsequent specimens from VI-06. HIV-1 gag-specific CTLp were detected as early as 3 mo of age in VI-11; env -specific CTLp were not detected through 12 mo of age. In VI-05 and VI-08, HIV-1 env -specific CTLp were not detected until 10-11 mo of age. Human Studies Committee approval and individual informed consent from the guardian of each infant were obtained before we conducted these studies. Table 1 summarizes sequential measures of peripheral blood HIV-1 load and CD4 counts of each infant studied as well as the ages at which env genes were amplified from infant viral isolates for cloning and insertion into vv. Lymphocyte Separation and Cryopreservation. 
PBMC were isolated from freshly drawn heparinized blood by Ficoll-Paque (Pharmacia, Piscataway, NJ) density centrifugation (10). PBMC were viably cryopreserved using a KRYO 10 Series cell freezer and stored in liquid nitrogen until use. Preparation of Genomic DNA. PBMC cultures were performed according to the AIDS Clinical Trials Group virology consensus protocols (11). Supernatants from these cultures were used to establish low (1)(2) passage viral cultures at the timepoints specified in Table 1. In brief, 10 7 HIV-1 seronegative donor PHA blasts The ages at which env genes were amplified from infant viral isolates for cloning and insertion into vv are denoted in bold type. * TCID 50 /ml plasma. ‡ TCID 50 /10 6 PBMC. § RNA copies/ml plasma. cultured in complete media (CM) containing RPMI, 10% FCS, 50 g/ml gentamicin, and 5 U/ml IL-2 were pelleted, resuspended in 1 ml of viral culture supernatant, and incubated for 1 h at 37 Њ C in 5% ambient CO 2 . The cells were then washed and resuspended in CM; the p24 antigen content of the supernatant was monitored daily by enzyme immunoassay. On day 5 of culture, high molecular weight DNA was purified by a standard proteinase K digestion-isopropanol precipitation technique. PCR Primers and Conditions. Oligonucleotide primer sequences were chosen from regions directly upstream and downstream of the initiation and termination codons of the env gene, respectively. The following two primers were used: MNA, 5 Ј -GCG-AAAGAGCAGAAGACAGTGGC-3 Ј (corresponding to positions 6197-6220 of the NL4-3 genome) and MN13, 5 Ј -CAGCTC-GTCTCATTCTTTCCC-3 Ј (positions 8836-8857). PCR mixtures consisted of 10 mM Tris (pH 8.3), 50 mM KCl, 0.2 mM each of the four deoxynucleoside triphosphates, 2.5 mM MgCl 2 , 10 pmol of each primer, 200 ng of DNA, and 2.5 U of ampliTaq polymerase (Perkin-Elmer, Foster City, CA), which was added during an initial 3-min denaturation step. Amplification was conducted for 30 cycles in a thermal cycler (Perkin-Elmer). Each cycle consisted of three steps: denaturation (94 Њ C for 60 s), primer annealing (60 Њ C for 90 s), and extension (72 Њ C for 3 min), with a final extension at 72 Њ C for 10 min. Heteroduplex Formation and Sequence Diversity Determination. The V1 to V3 region of env was PCR amplified from the cloned PCR env products using primers 209 (positions 6453-6470) and 218 (positions 7382-7399). PCR conditions were identical to those described above except for a MgCl 2 concentration of 4 mM, an annealing temperature of 55 Њ C, and the absence of a hot start. After the internal labeling of PCR products with [ 32 P]dCTP, heteroduplexes were formed between labeled and unlabeled products and DNA fragments were separated in a neutral 5% polyacrylamide gel (12). DNA fragments were separated using the Model S2 Sequencing Gel Electrophoresis Apparatus (GIBCO BRL, Gaithersburg, MD) at 30 mA for 12 h. Sequence comparisons and diversity estimates were determined using best-fit analysis in GCG using the Genetics Computer Group's Wisconsin Sequence Analysis Package version 8.0.1 with the use of the default program parameters. Molecular Cloning and Recombinant vv Generation. The entire 2.6kb coding region of env was PCR amplified from proviral DNA at each of two early timepoints (first and 3-6-mo isolates) from each patient; the PCR product was gel purified with Eluquick (Schleicher & Schuell, Keene, NH) and ligated to the plasmid vector pCR3 (InVitrogen, San Diego, CA). 
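The sequence diversity estimates referred to above were obtained with GCG best-fit analysis. As a simplified stand-in for that comparison, the following Python sketch computes a plain percent nucleotide divergence (p-distance) between two pre-aligned sequences; it is illustrative only and does not reproduce the GCG algorithm or its default parameters.

```python
def pairwise_divergence(seq1, seq2):
    """Percent nucleotide divergence between two aligned sequences,
    counted over positions where neither sequence has a gap."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    compared = mismatches = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a == "-" or b == "-":
            continue
        compared += 1
        if a != b:
            mismatches += 1
    return 100.0 * mismatches / compared

# Toy example: two short aligned fragments differing at 1 of 10 sites.
print(pairwise_divergence("ACGTACGTAC", "ACGTACGTAT"))   # -> 10.0
```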
After ligation, the reaction mixture was introduced into TOP10F′ cells by CaCl2 transformation. Cells were plated on LB-carbenicillin agar plates and incubated overnight at 37°C; selected colonies were then expanded in 3-ml cultures containing LB-ampicillin overnight at 37°C. Clones with inserts of appropriate size, as judged by restriction enzyme digestion, were sequenced at the SP6 or T7 promoter sites within pCR3 adjacent to the insert to determine orientation using the Sequenase Version 2.0 sequencing kit (United States Biochemicals, Cleveland, OH). Clones were then grouped according to digestion patterns obtained with StuI, AvaI, BamHI, KpnI, BglI, and HindIII, and a predominant clone was selected from each timepoint for subcloning (Table 2). The predominant clone from each timepoint was digested with BamHI and XhoI and ligated into the BamHI-XhoI site of pAbT4587A (created by the addition of an XhoI site within the pAbT4587 vector provided by Therion Biologics, Cambridge, MA). After ligation, the reaction mixture was introduced into DH5a cells (Therion Biologics, Cambridge, MA) by CaCl2 transformation. Cells were grown as described above and clones were again screened by restriction enzyme digests.

Table 2. Grouping of Patient Isolate env Clones within pCR3 Based upon Restriction Digests and Choice of Predominant Clones for Preparation of vv.

Recombinant vv expressing infant isolate env gene products were generated, amplified, and titered according to the methods of Mazzara et al. (13). Each env-recombinant vv expressed gp160 and its cleavage products as determined by radioimmunoprecipitation. In addition, each of these vv was able to sensitize target cells to antibody-dependent cell-mediated cytotoxicity (ADCC) lysis (Pugatch, D., K. Luzuriaga, and J.L. Sullivan, manuscript submitted for publication). Limiting Dilution Assays of CTL Precursors. HIV-1 env-specific CTLp frequencies were estimated using previously described methods (4,14). To minimize potential variability in assay conditions and to allow comparison of results between the timepoints studied for each infant, all limiting dilution cultures for each infant were set up and all CTLp assays were performed at the same time and with the same reagents. Cryopreserved PBMC were thawed and diluted at 16,000 to 250 lymphocytes per well in 24 replicate wells of 96-well microtiter plates; 2.5-5.0 × 10^4 irradiated PHA blasts from HIV-1-uninfected donors and mAb to CD3 (12F6; 0.1 µg/ml; provided by Dr. J.T. Wong, Massachusetts General Hospital, Boston, MA) were added to each well and the plates were incubated at 37°C in R10 with 30 U/ml IL-2 for 7-10 d. Wells were then split and assayed for cytotoxicity on 51Cr-labeled autologous B lymphoblastoid cell lines infected either with vac alone or with vv expressing IIIB or patient isolate env proteins (M.O.I. = 5:1). The fraction of nonresponding wells was defined as the number of wells in which lysis did not exceed 10% (VI-11) or 20% (VI-05, VI-06, VI-08). Percentage of lysis was calculated for each well using the formula 100 × (test cpm − spontaneous cpm) / (maximal cpm − spontaneous cpm); the CTLp frequency and 95% confidence limits were calculated using the maximum likelihood method (15) and a spreadsheet provided by Dr. S. Kalams (Massachusetts General Hospital, Boston, MA). Split-well analysis was used to examine the cross-reactivity of CTLp. Quantitation of Plasma HIV-1 RNA.
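The lysis formula and the maximum-likelihood frequency estimate described above can be illustrated with a short Python sketch. It assumes the standard single-hit Poisson model commonly used for limiting dilution analysis (the probability that a well is negative at cell dose c is exp(-f c)) and uses a simple grid search; the cell doses and well counts in the example are hypothetical, and the actual spreadsheet calculation cited in the text may differ in detail.

```python
import numpy as np

def percent_lysis(test_cpm, spont_cpm, max_cpm):
    """Specific lysis per well:
    100 x (test - spontaneous) / (maximal - spontaneous)."""
    return 100.0 * (test_cpm - spont_cpm) / (max_cpm - spont_cpm)

def ctlp_frequency(cells_per_well, wells_total, wells_negative):
    """Maximum-likelihood CTLp frequency under the single-hit Poisson
    model, estimated by a grid search over candidate frequencies."""
    cells = np.asarray(cells_per_well, dtype=float)
    n_tot = np.asarray(wells_total, dtype=float)
    n_neg = np.asarray(wells_negative, dtype=float)
    best_f, best_ll = None, -np.inf
    for f in np.logspace(-7, -2, 2000):
        p_neg = np.clip(np.exp(-f * cells), 1e-12, 1.0 - 1e-12)
        # binomial log-likelihood of the observed negative wells
        ll = np.sum(n_neg * np.log(p_neg) +
                    (n_tot - n_neg) * np.log(1.0 - p_neg))
        if ll > best_ll:
            best_f, best_ll = f, ll
    return best_f

# Hypothetical input: 24 replicate wells at each of four cell doses.
doses = [16000, 4000, 1000, 250]
negatives = [4, 14, 21, 23]
print(ctlp_frequency(doses, [24, 24, 24, 24], negatives))
```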
Plasma HIV-1 RNA levels were measured after reverse transcription and PCR amplification using a commercial assay (Roche Diagnostic Systems, Branchburg, NJ), as directed by the manufacturer. Results HIV-1 Infant Isolate env Sequences Are Homogeneous and between 8-10% Divergent from IIIB env. We began by examining the degree of diversity between infant isolate env and IIIB env. Examination of sequences through the V3 loops suggested relative homogeneity of first isolates (data not shown). Divergence in nucleotide sequence between HIV-1 IIIB and individual infant isolate env was then estimated using heteroduplex tracking assays and sequencing. Fig. 1 demonstrates that heteroduplexes formed by combining individual infant isolate and IIIB env migrate at rates between those of heteroduplexes formed by combining IIIB and MN env or IIIB and p08-6-20D (an env clone derived from the virus of patient VI-08). Sequence com- Figure 1. Heteroduplex analysis of homology between patient isolate DNA and IIIB env sequences. Env fragments (946 bp in the V1-V3 regions) were PCR amplified from genomic DNA extracted from PBMC infected with patient HIV-1 isolates or from a plasmid containing BH10 (IIIB) env (pAbT4603). The PCR product of pAbT44603 was internally radiolabeled during the PCR and combined with the PCR products of the proviral templates of the patient to form heteroduplexes as described in Materials and Methods. Lanes 1-3 are clones of known sequence probed with pAbT4603. Lane 1 is a PCR product of the NL 4-3 strain of HIV-1, lane 2 is a PCR product of the MN strain of HIV-1 (prototypic clade B strain of HIV-1), and lane 3 is a PCR product from an env clone derived from patient VI-08 at 1 d of age (pJA-6-1D). Lanes 4-11 are PCR products amplified from patient proviral DNA (VI-06-1D and VI-06; VI-05; VI-05-6M; VI-08-20D and VI-08-6M; VI-11-1M and VI-11-3M; Table 2). Lane 12 is a negative control of probe alone. parisons of a 946-bp region spanning V1 to V3 indicate that IIIB and MN env differ from each other by 8.3%, whereas IIIB and p08-6-20D differ by ‫.%01ف‬ Therefore, the degree of heterogeneity between individual infant isolate and IIIB env sequences is between 8-10%. The Generation of Recombinant vv from Early Infant Viral Isolates. The entire 2.6-kb coding region of env was amplified from proviral DNA at two early timepoints (and first 3-6-mo isolates) from each infant, ligated into pCR3 (In-Vitrogen), and digested with StuI, AvaI, BamHI, KpnI, BglI, and HindIII. One to four distinct patterns of restriction digests were obtained from each set of clones derived from virus from each patient at a single timepoint and were designated as groups A, B, C, and D. The group containing the largest number of clones was considered to be the predominant group and one clone was then chosen from each predominant group for subcloning ( Table 2). The instability of the env gene from the viral isolate of patient VI-06 at 6 mo of age in pAbT4587A prevented its subcloning. Therefore, only one env subclone, from 1 d of age, was derived from virus of infant VI-06. The gel isolation of multiple env genes for subcloning could potentially result in contamination of one env gene with another. Heteroduplex analysis was employed to ascertain whether the subcloned envs were identical to the original clone from which they were derived. Absence of contaminating env sequences was shown by the lack of heteroduplex formation between cloned env fragments and their identical subclone (Fig. 2). 
In addition, env coding sequences from pNL4-3, a molecular clone used frequently in our laboratory, were also used to form heteroduplexes with the env clones derived from infant viral isolates to en- Figure 2. Heteroduplex analysis of cloned and subcloned env genes. env fragments (946 bp in the V1-V3 regions) were PCR amplified from env clones within the pCR3 and pAbT4587A backbones and from pNL4-3. The PCR product of env clones within pCR3 and pNL4-3 were internally radiolabeled during the PCR. The PCR products of env clones within pCR3 were combined with the PCR products of the identical env subclones within pAbT4587A to form heteroduplexes as described in Materials and Methods. The radiolabeled PCR product of pNL4-3 was also combined with PCR products of the env clones within pAbT4587A to form heteroduplexes. Lanes 1-7 are subcloned env genes probed with identical genes ligated into pCR3. Lanes 8-14 are subcloned patient isolate env genes probed with pNL4-3. Lanes 15-22 are negative controls containing radiolabeled PCR products alone. sure that none contained the pNL4-3 env sequence. The formation of heteroduplexes of varying mobility between pNL4-3 and the cloned env fragments further verified lack of contamination and demonstrated that each of the env clones was unique. HIV-1 env-specific CTL Responses in Early Infancy. Beginning in early infancy, CTLp frequencies were measured sequentially in the peripheral blood of the four infants studied. In three infants, env-specific CTLp were detected by 6 mo of age (Table 3). HIV-1-specific CTLp recognizing autologous env gene products were detected in the peripheral blood of a second infant (VI-11) at 3, 7, and 12 mo of age and preceded the first detection of CTLp recognizing HIV-1 IIIB env gene products at 12 mo of age (Table 3). HIV-1-specific CTLp recognizing first (20-d) isolate and IIIB env gene products were detected by 4 mo in another intrapartum-infected infant (VI-08). In infant VI-06, HIV-1-specific CTLp recognizing first isolate and IIIB env gene products were detected by 6 mo of age. HIV-1-specific CTLp were not detected in the cord blood of infant VI-05, but CTLp recognizing IIIB and infant isolate env gene products were detected at 12 mo of age. Evaluation of Cross-reactivity of HIV-1 env-specific CTLp Using Split-well Analysis. Genotypic analysis of sexually (16) and vertically (6, 7) transmitted HIV-1 strains suggests that limited viral genotypes appear to be transmitted or amplified after infection. After infection, diversification of viral species may occur through reverse transcriptase error and various selective pressures (for review see reference 17). New populations of CTL may expand in an infected indi-vidual in response to evolving quasispecies of HIV-1, resulting in a broadening of the HIV-1 immune response. Split-well analysis allowed us to examine CTLp cross-reactivity on a clonal level. Fig. 3 illustrates CTLp cross-reactivity over time. HIV-1 env-specific CTLp detected in VI-11 in early infancy exclusively recognized early infant isolate env, whereas HIV-1-specific CTLp detected in later infancy were primarily cross-reactive. In VI-06 and VI-08, cross-reactive CTLp recognizing early infant isolate and IIIB env were detected as early as 6 and 4 mo of age, respectively. At 6 mo of age, only CTLp recognizing target cells infected with vv expressing the 20-d isolate were detected in VI-08, while cross-reactive CTLp were detected again at 19 mo. 
Since virus-independent in vitro stimulation was used to expand CTLp, the CTLp frequencies detected are likely reflective of the CTLp frequencies in vivo. However, at the relatively low CTLp frequencies detected at 6 mo, sampling error might explain the sole detection of type-specific CTLp. HIV-1-specific CTLp Activity Detected in Early Infancy Is CD8 T Cell Mediated. HIV-1 env-specific cytolysis may be CD8 T cell mediated and HLA class I-restricted or NK cell-mediated cytolysis through ADCC (18). Unfortunately, the limited PBMC repository available from early infancy precluded the use of CD4, CD8, or NK-depleted PBMC populations in the limiting dilution assays. To determine the phenotype of the effector cells in the limiting dilution assays, we performed limiting dilution assays in which a bispecific (anti-CD3, CD8) mAb was used instead of mAb to CD3 (19). Use of this antibody led to the depletion of CD8 T cells from the LDA wells (Ͻ1% CD8 T cells compared with 45% CD8 T cells in the wells treated with mAb to CD3 alone) and a 55-90% reduction in env-specific CTLp frequency compared with wells treated with mAb to CD3 alone. These studies suggest that the env-specific CTLp detected in the assays were mediated by CD8 T cells. Detection of HIV-1 Env-specific CTLp in Early Vertical Infection, Viral Load, and Clinical Disease Progression. In three infants, HIV-1-specific CTL recognizing early infant isolate gene products were detected before 6 mo of age. Infant VI-11, in whom HIV-1 gag-and early isolate env-specific CTL were detected by 3 mo of age, has remained only mildly symptomatic without evidence of immune suppression (CDC A1; reference 20) at 3 yr of age. However, as previously described (4), infant VI-06 experienced a rapid increase in viral load and a rapid decline in peripheral blood CD4 T cell numbers after birth despite the detection of HIV-1 gag-and env-specific CTL in cord blood and 3-wk specimens. While env-specific CTLp were detected in infant VI-08 by 4 mo of age, this infant developed HIV-1 encephalopathy and severe CD4 depletion (CDC C3) by 3 yr of age. The early detection of HIV-1 env-specific CTLp in the latter two infants suggest that HIV-1-specific CTL alone may not protect against CD4 depletion or disease progression. Discussion In assays using target cells expressing laboratory strain HIV-1 gene products, HIV-1-specific CTLp have been uncommonly detected in early infancy. This study addressed the hypothesis that type-specific responses might predominate in early infancy and that the use of target cells expressing laboratory isolate gene products might limit the detection of HIV-1-specific CTL in early infancy. To address this hypothesis, HIV-1 env genes from early isolates of four vertically infected infants were PCR amplified, cloned, and used to generate recombinant vv. The frequencies of CTLp recognizing target cells infected with vv-expressing env gene products from early isolates and HIV-1 IIIB were serially measured from early to late infancy using limiting di-lution followed by in vitro stimulation with mAb to CD3. HIV-1-specific CTLp were detected before 6 mo of age in three infants. In one infant, the detection of CTLp recognizing target cells expressing early isolate env preceded the detection of CTLp recognizing target cells expressing IIIB env; these type-specific CTLp detected in early infancy were later replaced by cross-reactive group-specific CTL. In two other infants, early group-specific responses were detected. 
In a fourth infant, CTLp recognizing target cells infected with HIV-1 IIIB env and early isolate env were simultaneously detected at 12 mo of age. These results reconfirm that young infants can generate HIV-1-specific CTL responses and provide support for the concept of perinatal vaccination to prevent HIV-1 transmission. The generation of vv-expressing infecting strain gene products was central to our studies. To minimize the chance of amplifying defective HIV-1 env sequences, we chose to use env genes amplified from cultured virus to construct the env-expressing recombinant vv. Whereas several studies have demonstrated the outgrowth of variant strains of limited diversity in viral coculture that did not predominate in vivo at the timepoint at which the culture was established (21,22), several lines of evidence suggest that the isolates used were representative of the in vivo population of viruses. First, studies that have compared the selection of variants in culture to those present in vivo have maintained these viral cultures up to at least 28 d. The env genes used to generate recombinant vv in our studies were amplified from minimally passaged virus cultured for less than 1 wk. Second, the absence of anti-retroviral therapy in these patients at each of the timepoints from which the env vaccinia recombinants were generated supports the notion that these variants were not suppressed in vivo and expanded in vitro in the absence of drug. Third, the ability of CTL precursors from these patients to recognize env protein products from the env sequences derived from cultured virus suggests that these sequences were present in vivo. Minimal diversity in env sequences of HIV-1 strains isolated from adults and infants early in primary infection has been reported (6,7,16). Although HIV-1-specific CTL expand in response to a homogeneous population of variants in both populations of infected individuals, group-specific CTL appear to be detected more commonly in adult primary infection than in vertical primary infection (23,24). Therefore, the type specificity of early infant CTL responses is not likely due to a homogeneous starting population of virus. Adults possess memory CTL that may cross-react with HIV-1 proteins and expand in response to HIV-1 infection in concert with antigen-naive CTL precursors. CTL expansion upon acute HIV-1 infection in adults may be similar to that observed by Selin et al. (8) in mice. They observed that acute Pichinde virus infection in LCMV immune mice resulted in expansion of CTL that were cross-reactive with both LCMV and Pichinde virus. Crossreactive memory CTL originally expanded in response to a previous non-HIV-1 antigenic encounter may possess TCRs with lower affinity for HIV-1 antigen. A lower affinity interaction may allow for a greater degree of promiscuity in recognition of epitopes of HIV-1 gene products on various HIV-1 strains. The group-specific CTL responses detected in infected adults during or shortly after seroconversion may be due in part to the contribution of this expanded non-HIV-1-specific memory pool of CTL. In contrast, HIV-1 infection occurs in infants whose immune systems have not been primed by previous antigenic exposure. Therefore, potentially cross-reactive memory CTL may not exist. Diversification of the viral variant population over time may eventually result in the expansion of CTLp populations recognizing a greater number of viral epitopes with the concomitant ability to recognize these epitopes on more than one strain of HIV-1. 
Alternatively, type-specific responses detected in some young HIV-1 vertically infected infants may be due to partial tolerance. Vertical HIV-1 infection occurs at a time when the cellular immune system is being vigorously edited on the basis of its ability to discriminate between self and nonself. HIV-1 antigen may be viewed as self-antigen during this process and HIV-1-reactive CTL may be deleted or anergized. The ability of HIV-1 to infect and replicate within professional antigen presenting cells such as macrophages and its presence on dendritic cells may allow for activation of a small population of HIV-1-reactive CTL that escape tolerance induction. This partial break in tolerance may originally be directed at a limited array of viral epitopes, resulting in a type-specific CTL response. Recognition of these initial epitopes by CTL may lead to an expansion of CTL responsiveness in which a progressively greater number of viral epitopes may be recognized. Expansion in epitope recognition may subsequently result from the phenomenon of epitope spreading (25) in concert with a diversification of HIV-1 variants toward viral gene products whose sequences bear less resemblance to the original tolerizing variants. The group-specific CTL responses detected by 12 mo of age may evolve in this manner. In summary, our studies indicate that young infants are capable of generating virus-specific CTL in response to viral infection and some support the development of a neonatal HIV-1 vaccine. However, the detection of type-specific immunity in some young infants suggests that a vaccine based upon laboratory strains of HIV-1 may not be protective. For optimal efficacy, it will likely be necessary to use gene products from viral strains isolated from patients in targeted geographical regions as immunogens.
2014-10-01T00:00:00.000Z
1997-04-07T00:00:00.000
{ "year": 1997, "sha1": "32843b02e4441928f96cb6fa9e3b007b137b0930", "oa_license": "CCBYNCSA", "oa_url": "http://jem.rupress.org/content/185/7/1153.full.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "ea33ca55b1ddf513c7e78a2e961818e9376fa69f", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
267628457
pes2o/s2orc
v3-fos-license
Early and annual projected savings from anti-CGRP monoclonal antibodies in migraine prevention: a cost-benefit analysis in the working-age population Background Migraine is one of the main causes of disability worldwide. Anti-CGRP monoclonal antibodies (MAbs) have proven to be safe and efficacious as preventive migraine treatments. However, their use is restricted in many countries due to their apparently high cost. Cost-benefit studies are needed. Objective To study the cost-benefit of anti-CGRP MAbs in working-age patients with migraine. Methods This is a prospective cohort study of consecutive migraine patients treated with anti-CGRP MAbs (erenumab, fremanezumab and galcanezumab) following National reimbursement policy in a specialized headache clinic. Migraine characteristics and the work impact scale (WPAI) were compared between baseline (M0) and after 3 (M3) and 6 months (M6) of treatment. Using WPAI and the municipal average hourly wage, we calculated indirect costs (absenteeism and presenteeism) at each time point. Direct costs (emergency visits, acute medication use) were also analysed. A cost-benefit study was performed considering the different costs and savings of treating with MAbs. Based on these data an annual projection was conducted. Results From 256 treated working-age patients, 148 were employed (89.2% women; mean age 48.0 ± 8.5 years), of which 41.2% (61/148) were responders (> 50% reduction in monthly headache days (MHD)). Statistically significant reductions between M0 and M3/M6 were found in absenteeism (p < 0.001) and presenteeism (p < 0.001). Average savings in indirect costs per patient at M3 were absenteeism 105.4 euros/month and presenteeism 394.3 euros/month, similar for M6. Considering the monthly cost of anti-CGRP MAbs, the cost-benefit analysis showed savings of 159.8 euros per patient at M3, with an annual projected savings of 639.2 euros/patient. Both responders and partial responders (30–50% reduction in MHD) presented a positive cost-benefit balance. The overall savings of the cohort at M3/M6 compensated the negative cost-benefit balance for non-responders (< 30% reduction in MHD). Conclusion Anti-CGRP MAbs have a positive impact in the workforce significantly reducing absenteeism and presenteeism. In Spain, this benefit overcomes the expenses derived from their use already at 3 months and is potentially sustainable at longer term; also in patients who are only partial responders, prompting reconsideration of current reimbursement criteria and motivating the extension of similar cost-benefit studies in other countries. Supplementary Information The online version contains supplementary material available at 10.1186/s10194-024-01727-0.
Background Migraine is a highly prevalent disease which peaks during the most professionally productive years of our lives [1]. According to the last report of the Global Burden of Disease (GBD) study, headache ranks as the second most disabling disease in terms of disability-adjusted life years (DALYs) in people under the age of 50 [1,2] and, especially, in Western Europe [2]. It represents 5% of the DALYs between 10 and 24 years of age and 3.7% between 25 and 49 years [1]. This leads to huge direct and indirect costs for society, reaching 111 billion euros annually in Europe [3,4]. Indirect costs (absenteeism and presenteeism) account for the biggest part of this economic burden [3], making the actively working population a relevant target for migraine preventive strategies. Oral preventive drugs (such as beta-blockers, antiepileptics and antidepressants) are the most widely available and used treatments for migraine prevention. However, they are non-specific and often associated with poor tolerability [5]. This fact leads to frequent treatment discontinuation and therefore inadequate disease management, with a consequent poor effect in reducing the personal, social, and economic burden of migraine [5,6]. In recent years, migraine-specific preventive treatments targeting the calcitonin gene-related peptide (CGRP) pathway have been approved [7-9]. Specifically, anti-CGRP monoclonal antibodies (MAbs) have a well-established effectiveness and tolerability [10], including compared to conventional oral preventive drugs [11], and therefore they represent a valuable option for migraine prevention [9]. However, their use in clinical practice is limited due to National Reimbursement policies, which are driven by the apparent thought that they are expensive [12]. Specifically, in Spain, anti-CGRP MAbs can be prescribed after failure of three or more preventive treatments, one of them being onabotulinumtoxinA in case of chronic migraine. Considering the need to reduce the migraine burden for people and, in macroeconomic terms, for society, it is fundamental to understand the cost-benefit of anti-CGRP MAbs, especially in the actively working population, as it may help redefine current migraine care and reimbursement policies. Given that anti-CGRP MAbs are clinically effective already at short term [13,14], we aimed to evaluate their cost-benefit and work impact in a cohort of actively working patients with migraine at 3 and 6 months of treatment. Methods This is a prospective study conducted in a Spanish headache clinic between Feb 1st, 2020 and Jan 31st, 2023. We screened all consecutive migraine patients treated with anti-CGRP MAbs according to the Spanish national guidelines [7] and European Headache Federation (EHF) recommendations [8] (erenumab 140 mg monthly, fremanezumab 675 mg quarterly and galcanezumab 120 mg monthly + 240 mg loading dose). In Spain, national reimbursement policy allows patients to be prescribed an anti-CGRP MAb when they suffer from at least 8 monthly migraine days (MMD) and have failed 3 or more migraine preventive drugs, one of them being onabotulinumtoxinA in case of chronic migraine (CM) [12]. To be included in the study, participants had to be in the working-age population group, between 18 and 65 years old. Migraine diagnosis was made according to the third edition of the International Classification of Headache Disorders criteria (ICHD-3) [15].
We assessed demographic data, medical history, migraine characteristics and preventive treatments, as well as working conditions (part time/full time, number of hours/week), at an initial visit. We used electronic headache diaries (eDiary) to prospectively collect the monthly migraine days (MMD), monthly headache days (MHD) and monthly acute medication days (MAMD). Emergency room consultations during the study period were obtained from the health care resource utilization scale (HCRU). Before starting the treatment, each participant had completed at least one month of baseline eDiary. Concomitant treatments were allowed if they were stable for at least one month before starting anti-CGRP MAbs. Patients were followed up every 3 months at an in-person visit, starting from the first administration of the anti-CGRP MAbs (M0). The work productivity and activity impairment questionnaire (WPAI) [16] was used to assess the employment status and was administered at each follow-up visit. The primary outcome of this study was to assess the cost-benefit of anti-CGRP MAbs at 3 months (M3). Secondary outcomes were: 1) cost-benefit at 6 months, 2) changes in working status between M0 and M3, 3) cost-benefit at 3 months according to responder status, and 4) projected cost-benefit at one year per person. We defined as responders (RE) those patients with ≥ 50% reduction in MHD, as partial responders (PR) those with a 30-49% reduction in MHD, and as non-responders (NR) those with < 30%. For the cost-benefit analysis at 3 and 6 months, we used the variables reported in Supplementary Table 1. We calculated the cost-benefit as the difference of overall costs between M0 and M3 (or M6). Costs at each timepoint included direct and indirect costs. For direct costs, we considered the prices of the anti-CGRP MAbs as medication notified prices from the Spanish National Drug Registry of the Ministry of Health [12]. Consultation costs related to follow-up (an outpatient visit every three months) were obtained from the Catalan Healthcare Institution [17]. Finally, using the MAMD and the number of emergency visits from the HCRU, we calculated the costs related to acute medication use and healthcare resource utilization [18,19]. For indirect costs, data from the WPAI questionnaire were used to calculate work time loss (absenteeism) and work impairment (presenteeism). Using the average hourly salary published by the Institute of Statistics of Catalonia [20], we calculated the indirect working costs attributed to headache at M3 and M6 [21]. The indirect costs related to the time spent getting to the hospital, visits at the clinic and drug administration were considered negligible. Furthermore, recent studies have reported that anti-CGRP MAbs are effective at longer term in the real world [22]. Based on this assumption, we projected the savings achieved after 3-6 months of treatment over a year. Statistical analysis Nominal variables, including sex, diagnosis, the presence of aura, the type of anti-CGRP MAb prescribed, and the emergency visits, are presented as frequencies and percentages. Conversely, quantitative variables such as age, MHD, MMD and MAMD are described using mean and standard deviation. Additionally, the total working hours, absenteeism and presenteeism are reported as the mean percentage present in our cohort.
We analysed the longitudinal variances with a 3-month and 6-month interval. Differences in MHD, MMD, MAMD, absenteeism and presenteeism over time (M0-M3; M0-M6) were compared using a paired Wilcoxon signed-rank test, after checking their distribution. In this way each patient constituted his own control. All patients, including those who discontinued, were included in the cost analysis. Considering the simultaneous analysis of multiple statistical tests, we used the False Discovery Rate (FDR) method to adjust the p-values. The threshold for determining statistical significance was set at p < 0.05 after applying this adjustment. Due to the exploratory nature of our research and the limited available data, no statistical power analysis was conducted prior to the analysis and the sample size was determined based on the available data. Statistical analysis was performed using version 4.2.2 of the R software and figures were produced using the package ggplot2. Characteristics of the working-age cohort From 470 patients treated with anti-CGRP MAbs, 426 were at working age. Of these, 256 participants had complete data, of whom 57.8% (148/256) were employed at both M0 and M3. Reasons for not working at M0 (96/256) were: 53% unemployed (n = 51, of whom 55% (n = 28) reported migraine as the main reason for this condition), 29.1% homemakers (n = 28), 8.3% students (n = 8) and 9.4% unknown (n = 9). If we consider patients who could potentially be employed (patients of working age excluding students and homemakers), the unemployment rate in our cohort stands at 24.5% (51/208). Reasons for not working are graphically represented in Supplementary Fig. 1. From the final cohort of employed participants, 89.2% (132/148) were women and mean age was 48.0 ± 8.5 years. Sixty-two point two percent (92/148) had CM. Figure 1 shows the participant flowchart, whereas Table 1 shows the cohort baseline characteristics. Patients who persisted with the treatment at month 6 demonstrated a sustained improvement in the analysed response parameters: headache days/month, acute medication days/month, absenteeism and presenteeism (Supplementary Table 2). Cost-benefit analysis In relation to the work impact, we observed approximately a 40% reduction in work impact variables at 3 months (37% reduction in absenteeism, M0: 13.4% vs. M3: 8.4%; p = 0.001; and 43% reduction in presenteeism, M0: 42.7% vs. M3: 24.3%; p < 0.001). Consultations at the emergency room (ER) decreased by 55.9% (M0: 47.7 emergency visits/month vs. M3: 21 emergency visits/month; p < 0.001) (Table 2). Figure 2 and Table 3 show all direct and indirect costs at M0 and M3 as well as savings after anti-CGRP treatment per patient. The improvement in absenteeism and in presenteeism allowed a mean saving of 105.4 euros/month per patient and 394.3 euros/month per patient, respectively, with an overall 3-month saving in indirect costs of 1499.1 euros/patient. Savings in direct costs, due to the reduction in acute treatment use and ER visits, accounted for another 33.9 euros/month per patient. Thus, savings after 3 months of anti-CGRP treatment reached a mean of 1600.8 euros/patient. However, considering that the costs of anti-CGRP MAbs for 3 months of treatment are 1361.0 euros/patient (453.67 × 3) and that of the outpatient visits is 80 euros/patient, the cost-saving analysis found a final mean saving of 159.8 euros/patient for the first 3 months of treatment. For the overall cohort, the savings reached 23,650.4 euros in the first 3 months (Table 3).
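To make the arithmetic behind these figures explicit, the sketch below first converts WPAI percentages into a monthly indirect cost and then recombines the per-patient figures quoted above into the reported 3-month balance. The WPAI-to-cost conversion is a simplified assumption (the study's exact weighting of absenteeism and presenteeism hours is not reproduced here), and the hourly wage and weekly hours are placeholders rather than the Catalan figures used in the paper; only the second part reuses numbers reported in the text.

```python
WEEKS_PER_MONTH = 4.33          # assumption used only for this illustration

def monthly_indirect_cost(absenteeism_pct, presenteeism_pct, weekly_hours, hourly_wage):
    """Rough headache-attributed indirect cost from WPAI percentages (simplified)."""
    monthly_hours = weekly_hours * WEEKS_PER_MONTH
    absent_cost = monthly_hours * absenteeism_pct / 100 * hourly_wage
    worked_hours = monthly_hours * (1 - absenteeism_pct / 100)   # presenteeism applies to hours actually worked
    present_cost = worked_hours * presenteeism_pct / 100 * hourly_wage
    return absent_cost + present_cost

# Illustrative only: placeholder wage, WPAI values taken from the baseline (M0) figures above
print(round(monthly_indirect_cost(13.4, 42.7, weekly_hours=40, hourly_wage=18.0), 1))

# Reproducing the reported 3-month per-patient balance from the quoted figures
indirect_saving = (105.4 + 394.3) * 3        # absenteeism + presenteeism savings, euros
direct_saving = 33.9 * 3                     # acute medication + emergency visits, euros
drug_cost = 453.67 * 3                       # three monthly anti-CGRP MAb doses
visit_cost = 80.0                            # quarterly follow-up outpatient visit
print(round(indirect_saving + direct_saving, 1))                               # 1600.8 euros
print(round(indirect_saving + direct_saving - drug_cost - visit_cost, 1))      # ~159.8 euros
```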
At month 6, applying the same analysis to patients who continued anti-CGRP MAbs and had data available at all of M0, M3 and M6 (n = 83), we observed an increase in savings at month 6 of 26.7 euros/month in indirect costs compared to month 3. There was also an increase in savings of 24.9 euros/month in direct costs. The total increase in savings reported at month 6 was 159.9 euros/patient compared to month 3, which, added to the previously calculated savings, sums up to a total of 314.7 euros per patient during the second trimester, and an overall saving of 26,120.1 euros. Supplementary Table 3 reports all costs and savings at M3 and M6. After observing that the savings calculated for month 3 remained stable at month 6, we estimated the annual savings per patient after a year of treatment to be 639.2 euros. Changes in work status Of the 256 patients, 17 were students and homemakers. Therefore, 239 patients were susceptible of changing employment status. 94.5% (226/256) of the patients did not present changes in their employment status three months after starting the treatment. From the 61/256 unemployed patients, five got employed by month 3: 40% (2/5) reported the improvement in their migraine as the main reason for this change, 40% (2/5) cited other reasons and in one case the reason was unknown. Discussion The working-age population represents the main target for migraine care because it is the most affected in terms of disease incidence and prevalence [1,23,24]. Illness during working age has a great economic impact on society [24]. Our study is the first to analyse in a prospective way the impact of starting anti-CGRP MAbs in working-age migraine patients in order to evaluate the cost-benefit of these new, and apparently more expensive, migraine-specific preventive treatments. These are our findings: First, in patients actively working, anti-CGRP MAbs are cost-effective already at 3 months, mainly because the drug costs are compensated by the savings obtained by the reduction in absenteeism and presenteeism. Additionally, we observed a stability of this effect in patients who maintained the treatment up to 6 months, and our projected annual cost analysis suggests that the treatment could also lead to long-term savings. Other studies, using specific models based on clinical trial outcomes, have estimated that anti-CGRP drugs are in general cost-effective [25-28]. Our study, based on real-world data, has a direct approach to how we measure work-related costs and supports those estimations, at least in the group of patients that are employed. Our findings provide evidence that the current criteria for reimbursement and prescription in Spain, requiring 3 previous treatment failures, are no longer supported by economic reasons. Instead, we propose that anti-CGRP MAbs should be considered as an earlier line of treatment, as also recommended by the EHF [8].
[Table 3: Costs and savings at 3 months per patient. Footnote: patients treated with MAbs received an additional outpatient visit every 3 months during the follow-up, which was accounted for in the cost-benefit analysis; the rest of the clinical visits were the same as for patients with migraine without this treatment.]
Second, a significant proportion of our patients treated with anti-CGRP MAbs, around 24.5% (51/208), that could potentially work are unemployed. One of the factors that could contribute to this working status is the difficult-to-treat migraine they suffer, and have suffered throughout their lives, during which they had to study and position themselves professionally while having migraine. This finding reflects the fact that the population fulfilling criteria for anti-CGRP MAbs is treatment-resistant [29]. Interestingly, whether they respond or not, the working status does not seem to change. This could either be due to the short timepoint of our study (3 months) or to personal, social and community factors that prevent work reintegration after being undertreated for migraine for a long time. Considering these findings, it would be interesting to assess in the future whether treating patients earlier may avoid a prolonged unemployment status caused by the disease, with a beneficial impact not only in reducing the personal migraine burden but also in reducing potential societal costs related to loss of productivity. Third, the overall treatment benefit is able to compensate the costs for those who are non-responders. We analysed the baseline characteristics of non-responders, finding no statistically significant differences when compared to the rest of the cohort. Since there are no predictors of response at present [30,31], our study provides evidence that all patients can and should be treated. Moreover, the economic impact of people discontinuing the treatment is higher initially, but it will be mitigated over time as only those patients who respond to anti-CGRP MAbs will continue the treatment. This fact may also contribute to underestimating the potential savings achievable at longer term, since the follow-up of our study is limited to 6 months. Thus, our annual projection of savings may also be underestimated. Finally, we observed that partial responders also improve significantly in terms of absenteeism and presenteeism. As these variables are the main determinants of the treatment-related savings, anti-CGRP MAbs could be cost-effective even in < 50% responders who are actively working. A 50% threshold for qualifying non-responders should probably be considered no longer representative from both the clinical and economic perspective, and lowering the threshold to 30%, at least in the real world, should be considered. Overall, our study, coupling precisely collected real-world data with economic evaluations, demonstrates that treatment with anti-CGRP MAbs is sustainable in the actively working migraine population and should be offered earlier. Although our study was only carried out on working patients, we believe that the treatment should be equally available to unemployed patients, since economics is just one facet of the potential benefits that treatment with MAbs provides. The present study is not exempt from limitations: First, it has been conducted in a specialized headache clinic and a selection bias may have occurred, possibly including more severe and resistant patients but with greater potential for improvement. So, our results cannot be generalized to low-frequency episodic migraine.
Second, as we used the WPAI for the economic assessment, we were able to estimate the cost-benefit only for those patients actively working, and future studies will have to determine with proper real-world data whether the treatment is cost-effective in the overall population, including people who are unemployed or not of working age. Additionally, although the WPAI scale has been validated in migraine, it remains a self-reported scale, introducing subjectivity especially when assessing productivity. Nevertheless, this potential bias is mitigated as we are evaluating changes of WPAI over time (paired samples), where each individual serves as his own control. Another limitation is that a non-negligible proportion of patients had to be excluded from the study due to incomplete WPAI data at M0 or M3, rendering their inclusion in the analysis impossible. We have analysed the response rates of these patients and have not identified significant variations when compared to the rest of the cohort (Supplementary Table 7). Although the exclusion of these patients constitutes a potential limitation of the study, there is no evidence to suggest that the final analysed cohort is not a representative sample. Finally, because of the economic evaluation, our results are applicable only in our country, but we stress the need to replicate these assessments worldwide. Conclusions In our real-world study, anti-CGRP MAbs reduce absenteeism and presenteeism in people actively working, with related savings that overcome the costs of the drug. Considering that nowadays their cost is the main determinant that limits and delays their prescription, our results open up the possibility of earlier prescription of these treatments for migraine, as in this population they are economically sustainable.
Fig. 1: Flowchart of study participants. From 470 patients treated with anti-CGRP MAbs, 426 were at working age; of these, 256 participants had complete data, of whom 57.8% (148/256) were employed at both M0 and M3.
Fig. 2: Graphical representation of the economic balance and the relative importance of the savings in indirect costs in comparison with the savings in direct costs derived from treatment with anti-CGRP MAbs. The main cost of these therapies is in terms of direct expenses (the drug costs), whereas 93% of the savings are indirect (reduction in absenteeism and presenteeism); this makes the benefit they produce for society in economic terms less evident at first sight in comparison to their costs, but not less important. Image generated using BioRender.
Table 2: Comparison between M0 (baseline) and M3 (3 months after treatment with MAbs). Absenteeism: % of hours not worked due to headache. Presenteeism: work productivity affected by headache.
Abbreviations: MAMD, monthly acute medication days; RE, responders (> 50% reduction in monthly headache days); PR, partial responders (30-49% reduction); NR, non-responders (< 30% reduction); SD, standard deviation; Q1-Q3, interquartile range; WPAI, Work Productivity and Activity Impairment; HIT-6, headache impact severity level; MIDAS, Migraine Disability Assessment.
2024-02-13T14:07:28.458Z
2024-02-12T00:00:00.000
{ "year": 2024, "sha1": "28036598aa6a1166576c2239cbf47ccef4c152a8", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "1e47b6123d15e914642c2ce2f9be67cff71e12e5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
195767456
pes2o/s2orc
v3-fos-license
GPT-based Generation for Classical Chinese Poetry We present a simple yet effective method for generating high quality classical Chinese poetry with a Generative Pre-trained Language Model (GPT). The method adopts a simple GPT model, without using any human-crafted rules or features, or designing any additional neural components. While the proposed model learns to generate various forms of classical Chinese poems, including Jueju, Lüshi, various Cipai and Couplets, the generated poems are of very high quality. We also propose and implement a method to fine-tune the model to generate acrostic poetry. To the best of our knowledge, this is the first work to employ GPT in developing a poetry generation system. We have released an online mini demonstration program on WeChat to show the generation capability of the proposed method for classical Chinese poetry. Introduction Classical Chinese poetry generation is an interesting challenge of natural language generation. Unlike free text generation, a classical Chinese poem should normally meet both form and content requirements [1]. The form requirements include regulations on the number of words (字数), rhyming (押韵), tone patterns (平仄), pairing (对仗), etc. The other requirement is regarding content, which requires that the theme of a poem is consistent and coherent throughout the poem. Various methods, e.g., [1,2], have been proposed to generate classical Chinese poetry. However, these methods are somewhat complicated so as to satisfy the aforementioned requirements in both form and content. For example, template-based or constraint-checking methods are employed to guarantee the correctness of the form of the generated poetry. Keyword-based mechanisms are proposed to guarantee the consistency and coherency of a poem. In this paper, we study the problem of poetry generation given a specific type of form requirement and a specific theme. In contrast with the existing methods, we propose a poetry generation method based on the pre-trained model GPT. The underlying model of our proposed method is simply a GPT language model fine-tuned with a classical Chinese poetry corpus, without any additional modifications. All we need to do is to serialize the training poems into formatted text sequences as training data. Poems are generated by sampling from the language model token by token without any constraint to meet the form and content requirements. In addition, we propose a fine-tuning method to train a model to generate acrostic poetry (藏头诗), where some characters in given positions are predefined. The proposed method can guarantee that specific tokens are generated by the language model in the corresponding positions. Compared with existing methods, our proposed method has the following characteristics: 1. Model Conciseness. The proposed method is a simple Transformer model without additional variables. However, it is powerful enough to guarantee the form and content requirements. We neither use any human-defined rules or features, nor define any specially designed neural networks other than the standard GPT. 2. Well-formedness. We surprisingly observe that although we did not explicitly feed the model with any rules or features about classical Chinese poetry, such as the number of characters, rhyming, tone patterns and coupling, the model is able to generate poems that automatically meet these rules very well for tens of forms, even for some fairly complicated "Cipai" like "Shuidiaogetou" which contains around 100 characters.
Actually, even for ordinary Chinese people, it is quite hard to master the skills to write well-formed classical poems. 3. Poetry Diversity. We employ a truncated top-k sampling strategy during the generation process. Hence the generated poems are highly diverse in different runs given the same form and theme. 4. Poetry Artistry. We observe that the model has a fair chance of generating high quality poems that express the poetry themes artistically, close to ones written by specialized poets. Table 1 shows four poems, among which only one was written by a Chinese poet more than one thousand years ago, while the remaining three poems were generated by our system. We refer the readers to the blog or the papers [3,5,4] to get a basic understanding of the Transformer, which is the underlying model of the proposed method. Figure 1 depicts the process of training the poetry generation model. We implement our own GPT model based on the source code of BERT. The configuration of the size of the transformer is identical to BERT-Base. We also adopt the tokenization script and Chinese vocab released in BERT. For text generation, we implement truncated top-k sampling instead of beam search to generate diverse text [6]. Data processing The training process includes two phases: pre-training and fine-tuning. Our GPT model is pre-trained with a Chinese news corpus. For fine-tuning, we collect publicly available classical Chinese poems. As shown in Figure 1, a sample poem is first transformed into a formatted sequence. A special case is couplets. In most cases couplets do not have a theme, so we use the first line as the theme and the second line as the body. The form is automatically filled with "对联" (couplets). So the generation of a couplet becomes generating the second line given the first line, which exactly mimics the activity of Duiduizi (对对子). The statistics of the pre-training data and fine-tuning data of our model are given in Table 2. Model Training Pre-training: We pre-trained our GPT model on Huawei Cloud Service with a news corpus which is detailed in Table 2. We trained it on 8 Nvidia V100 (16 GB) GPUs for 4 epochs. The pre-training takes 90 hours in total. Chinese Wikipedia can be an alternative training corpus. Fine-tuning: We feed all the training sequences of poems into the transformer and train an auto-regressive language model. The objective is to maximize the probability of observing any sequence $X = \{x_1, x_2, \ldots, x_{|X|}\}$: $p(X) = \prod_{i=1}^{|X|} p(x_i \mid x_1, \ldots, x_{i-1})$, where $p(x_i \mid x_1, \ldots, x_{i-1})$ is the probability that the token $x_i$ will be generated given all the historical tokens. The fine-tuning process takes much less time, as the model gets overfitted if trained too long. When the model is overfitted, it tends to retrieve raw sentences from the corpus during the generation process. Poetry Generation Once the training is completed, we apply the model to generate poems given a form requirement and a theme in the following process. We first transform the form and theme into an initial sequence as [form, identifier 1, theme, identifier 2], then the initial sequence is fed into the model and the remaining field of the body is decoded token by token. Note that we do not apply hard constraints during the decoding process to guarantee the correctness of the form. Instead, the model is able to automatically assign high probabilities to commas and periods in certain positions when decoding. The decoding process ends when we reach the end of the body, which is recognized by an "EOS" token.
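As a rough illustration of the decoding procedure just described, the sketch below serializes a form and theme into an initial prompt and then samples the body token by token with truncated top-k sampling. The `next_token_logits` callable, the token/id dictionaries and the "(格式)"/"(主题)" separator strings are hypothetical stand-ins for the fine-tuned GPT and its vocabulary, not the authors' actual implementation.

```python
import numpy as np

def truncated_top_k_sample(logits, k=10, temperature=1.0):
    """Sample one token id from the k most probable next tokens."""
    logits = np.asarray(logits, dtype=np.float64) / temperature
    top_ids = np.argsort(logits)[-k:]              # indices of the k largest logits
    top_logits = logits[top_ids]
    probs = np.exp(top_logits - top_logits.max())  # stable softmax over the k candidates
    probs /= probs.sum()
    return int(np.random.choice(top_ids, p=probs))

def generate_poem(next_token_logits, token_to_id, id_to_token,
                  form="五言绝句", theme="静夜思", max_len=200, k=10):
    """Decode a poem body token by token from an initial [form, id1, theme, id2] prompt.

    next_token_logits(ids) stands in for the fine-tuned GPT: given the ids seen so
    far, it returns one logit per vocabulary item (hypothetical interface).
    """
    prompt = list(form) + ["(格式)"] + list(theme) + ["(主题)"]   # separator tokens are placeholders
    ids = [token_to_id[t] for t in prompt]
    while len(ids) < max_len:
        next_id = truncated_top_k_sample(next_token_logits(ids), k=k)
        if id_to_token[next_id] == "[EOS]":        # end-of-body marker
            break
        ids.append(next_id)
    return "".join(id_to_token[i] for i in ids[len(prompt):])
```

Because sampling is restricted to the k most probable tokens and renormalised, repeated calls with the same prompt yield different but still plausible poems, which is the source of the diversity discussed below.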
Truncated top-k sampling: Instead of beam search during the decoding process, we apply a truncated top-k sampling strategy to obtain diverse poems. Each time we sample a token, the tokens with the top-k largest probabilities are first selected and then a specific token is sampled from these top-k tokens. We observe that the generated poems are in correct form even though the truncated top-k sampling strategy is applied. Train a model for acrostic poetry generation We employ the same method to train a model for generating acrostic poetry. While the training and generation processes are exactly the same, we format the sequence in a slightly different way. Specifically, we replace the original theme of a poem with the combination of the first characters of each line. In the example, the first characters are "床", "疑", "举", and "低". Then the original theme "静夜思" is replaced with "床疑举低". With the new data processing method, this training sample becomes "五言绝句(格式)床疑举低(藏头诗)床前明月光,疑…月,低头思故乡。" (Footnote 5: the model underfitted the data when we employed it for fine-tuning.) Generated examples and observations A selected set of generated examples by our model is given in Tables 3-6. We observed that: • The method performs consistently well in generating couplets (对联), jueju (绝句) and lüshi (律诗). For the couplets shown in Table 3, almost all the generated second lines (下联) are paired with their corresponding first lines (上联) in terms of characters in the same position. For Lüshi, as shown in the third and fourth poems in Table 4, the model pairs the third and fourth sentences, and pairs the fifth and sixth sentences, while the remaining sentences are not paired. The observation is quite surprising, as the model learns the rather complicated pairing rules for Lüshi, which are hard to grasp even for normally educated Chinese native speakers. Pairing sentences in certain places in a poem greatly improves its quality and beauty. • Well-formedness. More than 95% of the generated Jueju and Lüshi are well-formed, as the form requirements of these poetry categories are relatively simple compared with Cipai. In terms of Cipai, the method does not perform as well as for Jueju and Lüshi regarding well-formedness. Possible reasons may include the complexity of the forms and the lack of sufficient data for each type of Cipai. There are tens of thousands of training samples for each category of Jueju/Lüshi, while for Cipai there are 882 types in total in the training corpus, but only 104 of them have more than one hundred training samples each and the largest type contains only 816 training samples. The probability of generating correct Cis also varies for different types of Cipai, which could be attributed to the differences in complexity as well as the numbers of training samples of that type in the training corpus. The Cipai Shuidiaogetou, which contains 744 training samples, is relatively difficult as the length requirement for each line varies. Roughly 70% of the generated Cis for Shuidiaogetou are correct in form. One of the examples is shown in Table 5. • Although not explicitly modelled, the tone patterns (平仄) and rhyming (押韵) of the generated poems are also fairly good. • Diversity. As we adopt a sampling strategy while decoding, our method can generate highly diverse output in different runs. As shown in Table 3 and Table 4, the model generates totally different second lines (下联) for the same first line (上联) and different poems for the same theme in multiple runs.
We also notice that when the given first line is in the training corpus, it is possible that the model retrieves the whole original second line from the training corpus. However, for poetry generation, it generates new sentences even though the given theme is in the training corpus. • Artistry. We observe that the model can sometimes generate high quality poems that express the poetry themes artistically. However, while the generation quality for some given themes is consistently good in multiple runs, some themes, such as "机器翻译", which appear rarely in the training corpus, are less likely to generate good poems. "秋思" is a good theme for generating high quality poems. The examples in Table 4 (theme "秋思" with different form requirements) were generated in one run without manual selection. Related Work Early works [9,10,7,8] on Chinese poetry generation have been mostly rule-based or template-based. The Recurrent Neural Network (RNN) [11] was more recently introduced, as it has been proved to be effective in generation tasks such as machine translation and dialog generation. However, few researchers have adopted the latest self-attention models for generating poems. As far as we know, we are the first to employ GPT in developing a poetry generation system. GPT has been famous for having the capability to generate text that can hardly be distinguished even by human beings. As a natural consequence, a GPT-based method could potentially write high quality poems. Existing methods have been focusing on improving the well-formedness and content coherence of generated poems. To make sure the rhyming, the tone patterns and the pairing of a generation are correct, various strategies have been adopted. For example, Yan (2016) [12] proposes an iterative polishing schema, which refines the generated poem until a well-formed one is obtained. In the meantime, some other works have been investigating the coherence of content throughout a poem. For example, Yi et al. (2018) [1] propose a salient-clue mechanism which automatically selects the most salient characters from the so-far generated lines as a theme clue for generating the next line. Yang et al. (2017) [14] and Wang et al. (2016) [13] employ a two-stage approach, where a set of keywords are planned first and are then fed into the generation of different lines sequentially. Besides the above works, some researchers investigate other interesting topics on poetry generation. For example, Yang et al. (2018) [15] propose a model for stylistic Chinese poetry generation. Compared with the existing methods, the major advantage of our proposed approach is the conciseness and simplicity of the model. Meanwhile, it still exhibits strong, or sometimes even better, ability in generating well-formed and coherent poems. For example, it is easy for our proposed method to generate well-paired sentences at once, especially for Lüshi, which, however, is relatively difficult for the existing methods unless multiple rounds of polishing are adopted. Regarding the content coherence, in rare cases our proposed method even goes beyond relying on keywords to make the poetry coherent. Rather, it expresses the story, the scene, or the emotion comprehensively and artistically to produce a deeply coherent poem. Conclusions and Future Works We present a classical Chinese poetry generation method based on a pre-trained language model.
The proposed method is far simpler than existing methods based on recurrent neural networks and can generate better poems in some respects (Table 6 shows examples of acrostic poetry). Though the generated poems are not perfect all the time, our preliminary experiments have shown that GPT provides a good start for promoting the overall quality of generated poems, that is, for expressing the scene, the story, the emotions, and so on in a natural and artistic way. We present this report in the hope of helping researchers understand the capability of GPT as well as develop better poetry generation systems. A Forms of Classical Chinese Poetry According to the form requirements, there are different categories of classical Chinese poetry, which could be summarized as follows: • Couplets (对联): A couplet is a pair of sentences in classical Chinese poems which follow strict rules on lengths, rhyming, tone patterns and pairing. Couplets are normally not regarded as poems; however, they can also be used independently. Here we treat couplets as a category of poetry just for convenience. - Seven-character Gushi (七言古诗) contains a variable number of sentence pairs where each sentence has 7 characters. Its forms are relatively flexible, without strict regulations on lengths, rhyming, tone patterns and pairing. There are strict rules on these forms regarding lengths, rhyming, tone patterns and pairing. Ci also has very strict rules on lengths, rhyming, tone patterns and pairing.
2019-07-02T13:43:23.620Z
2019-06-29T00:00:00.000
{ "year": 2019, "sha1": "83b56c3c7a61767bd88d85796aa5dbc4976912c3", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "7946c93b715fc6235e40fafc7add033b6ea33de1", "s2fieldsofstudy": [ "Computer Science", "Linguistics", "Art" ], "extfieldsofstudy": [ "Computer Science" ] }
19681676
pes2o/s2orc
v3-fos-license
Cell ontology in an age of data-driven cell classification Background Data-driven cell classification is becoming common and is now being implemented on a massive scale by projects such as the Human Cell Atlas. The scale of these efforts poses a challenge. How can the results be made searchable and accessible to biologists in general? How can they be related back to the rich classical knowledge of cell-types, anatomy and development? How will data from the various types of single cell analysis be made cross-searchable? Structured annotation with ontology terms provides a potential solution to these problems. In turn, there is great potential for using the outputs of data-driven cell classification to structure ontologies and integrate them with data-driven cell query systems. Results Focusing on examples from the mouse retina and Drosophila olfactory system, I present worked examples illustrating how formalization of cell ontologies can enhance querying of data-driven cell-classifications and how ontologies can be extended by integrating the outputs of data-driven cell classifications. Conclusions Annotation with ontology terms can play an important role in making data driven classifications searchable and query-able, but fulfilling this potential requires standardized formal patterns for structuring ontologies and annotations and for linking ontologies to the outputs of data-driven classification. Data-driven classification of cell types Data-driven classification of cell types via unsupervised or semi-supervised clustering is becoming common. Examples include classifications derived from transcriptomic profiles from single cell RNAseq [1] and seqFISH [2], from neuronal morphology [3] and neurophysiology [4]. Other methods are likely to follow with the collection of other large datasets profiling single cells, including single cell metabolomic data [5] and complete connectomic profiles of cells [6,7]. Classification from transcriptomic profiles is likely to become dominant via large scale projects including cell atlases for humans [8] and Drosophila [9]. It is still an open question whether these different approaches to classification will produce multiple, orthogonal classifications, distinct from classical classifications, but early results suggest not. For example, the unsupervised classification of retinal bipolar cells using single cell RNAseq data recapitulates and further subdivides classical classifications of these cell types, rather than being consistent with a novel classification scheme [1]. Similarly, unsupervised clustering of imaged single Drosophila neurons using a similarity score for morphology and location (NBLAST) identifies many well-known Drosophila neuron types [3]. These results and others are consistent with the existence of cell types corresponding to stable states in which cells have characteristic morphology, gene expression profile, functional characteristics, etc. Data-driven queries for cell types With data-driven classification comes the possibility of data-driven queries for cell-types. NBLAST is already in use as a query tool allowing users to use a suitably prepared neuron image to query for neurons with similar morphology, with results ranked, as for BLAST, using a similarity score. BLAST-like techniques are also being developed to automatically map cell identity between single cell RNAseq experiments.
For example, SCMAP [10] can map between cell clusters from two different single cell RNAseq analyses, or from clusters in one experiment to single cells in another. Unsupervised clustering of transcriptomic profiles to predict cell-types also produces a simpler type of data that might be used for data-driven queries for cell-types: small sets of marker genes whose expression can be used to uniquely identify cell-types within the context of a clustering experiment. A clustering experiment that uses CD4 positive T-cells or retinal bipolar cells as an input may provide unique sets of markers for subtypes of these cells. Where these correspond to known markers of subtypes of CD4 positive T-cells or retinal bipolar cells they can be used directly for mapping; where not, they can be used to define new cell types. Coping with the deluge These new single-cell techniques hold enormous promise for providing detailed profiles of known cell types and identifying many new cell types. In combination with targeted genetic manipulation, they promise to unlock a transcriptome-level view of changes in cell state and differentiation [11]. But this work faces a problem, especially when carried out on a scale as large as the Human Cell Atlas. How can the results be made searchable and accessible to biologists in general? How can they be related back to the rich classical knowledge of cell-types, anatomy and development? How will data from the various types of single cell analysis be made cross-searchable? Clearly data-driven queries for cell-type will be an important part of the solution, but to be useful to biologists, single cell data needs to be attached to human-readable labels using well-established classical nomenclature. Where new cell-types are described, we need standard ways to record the anatomical origin of the analyzed cells as well as the developmental stage and characteristics of the donor organism (age, sex, disease state etc). Classification and annotation of cell types by ontologies We already have computer-readable representations of classical classifications of cell types in the form of cell-type and anatomy ontologies. The Cell Ontology is a (mostly) species-neutral ontology of cell types [12]. Species-specific cell-type classifications exist in a number of single-species anatomy ontologies including ontologies of zebrafish (Zebrafish anatomy ontology [13]), Drosophila (Drosophila anatomy ontology [14]) and human anatomy (Foundational Model of Anatomy [15]). Each of these ontologies provides a controlled vocabulary for referring to cell-types and a mapping to commonly-used synonyms. Each also provides a nested classification of cell-types and records their part relationships to gross anatomy. They are commonly used to annotate gene expression, phenotypes and images. These class and part hierarchies are commonly used for grouping annotations. For example, if a gene is annotated as expressed in a retinal bipolar neuron we might use classification and part relationships in an ontology to infer that it is expressed in the retina and expressed in a (type of) neuron (see the sketch below). It is, of course, not always clear precisely what known cell type, if any, corresponds to a single cell whose image or transcriptome we have or corresponds to a cluster of similar cells predicted by unsupervised clustering. In this case, ontologies can be a source of more general cell classifications that may be applicable (lymphocyte; cortical interneuron; epithelial cell).
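The sketch below makes the annotation-grouping idea mentioned above concrete: an annotation to a specific term is rolled up over is_a and part_of relationships so that broader queries retrieve it. The toy terms and edges are made up for illustration only and are not actual Cell Ontology content.

```python
# Hypothetical mini-ontology: each term points to the terms it is subsumed by (is_a)
# or part of (part_of); both edge types are used here for annotation roll-up.
ONTOLOGY = {
    "retinal bipolar neuron": [("is_a", "bipolar neuron"), ("part_of", "retina")],
    "bipolar neuron": [("is_a", "neuron")],
    "retina": [("part_of", "eye")],
    "neuron": [],
    "eye": [],
}

def roll_up(term):
    """Return the term plus everything reachable over is_a / part_of edges."""
    seen, stack = set(), [term]
    while stack:
        current = stack.pop()
        if current in seen:
            continue
        seen.add(current)
        stack.extend(parent for _, parent in ONTOLOGY.get(current, []))
    return seen

# A gene annotated to 'retinal bipolar neuron' is also retrieved by queries for
# expression in the retina, the eye, or any (type of) neuron.
print(roll_up("retinal bipolar neuron"))
```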
Along with other information, they can also be used to describe the properties of unidentified cells. For example, Virtual Fly Brain records the location of the various parts of unidentified neurons depicted in single cell images on the site, as well as the transgenes they express. Specifying context in this way can be very useful when working with the outputs of unsupervised clustering of transcriptomic data, by providing a way to specify a context within which sets of marker genes defined by this analysis can be used to uniquely identify cell-types. Conversely, the knowledge recorded in ontologies (part relationships, developmental stage, records of function) and in annotations may also be useful in homing in on candidate mappings for unmapped single cells. For example, the Drosophila anatomy ontology has been used to record the expression of transgenes in specific neuron types in the Drosophila brain and to record which brain regions these neuron-types overlap. Both these types of information are recorded for individual neurons. Insofar as these ontologies accurately record nomenclature, classification and part relationships to anatomy, they are ideally suited to provide a mechanism for annotation of single-cell experiments. But cell ontologies will only be able to play this role if they are sufficiently accurate, flexible and scalable enough to keep up with the flood of new data. Making cell ontologies scalable and query-able with design patterns Scalability, accuracy and query-ability of ontologies depend on formalization. All except the human-specific Foundational Model of Anatomy (FMA) are expressed in Web Ontology Language 2 (OWL2). OWL2 is a description logic that allows the expression of assertions about classes (the class of all neurons) and individuals (the individual neuron depicted in Fig. 3b) using quantified logic [16]. For example, we might assert that all retinal bipolar neurons are synapsed by a photoreceptor cell, or that any neuron that secretes glutamate as part of synaptic transmission is a type of glutamatergic neuron. These types of assertions are used to automatically classify OWL classes in a large and increasing number of ontologies (e.g. [12,14,17]). In some resources, such as Virtual Fly Brain (VFB), they are used to classify individuals and to drive query systems [14,18-21]. Multiple axes of classification are required for cell ontologies to be useful to biologists: a single neuron may be classified by structure (pseudo-bipolar), electrophysiology (spiking), neurotransmitter (glutamatergic), sensory modality (secondary olfactory neuron), location(s) within the brain (antennal lobe projection neuron, mushroom body extrinsic neuron), etc. But manually maintaining these multiple axes of classification simply doesn't scale: adding new terms requires (human) editors to know all of the appropriate classifications to add and how to rearrange existing classifications to fit the new term. It also requires them to understand the intent behind existing manually asserted classifications, which is typically partially documented at best. To cope with this, many ontologies have gradually moved over to using something approximating 'Rector' normalization [22]: minimizing the use of asserted classification in favor of automatically inferred classification driven by OWL equivalence axioms specifying necessary and sufficient conditions for class membership. Consistency is maintained by the use of standard design patterns for representing class properties.
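A minimal sketch of this equivalence-axiom pattern, using the owlready2 library, is given below. The ontology IRI, class names and the 'secretes' property are hypothetical stand-ins rather than real Cell Ontology identifiers ('secretes' abbreviates the richer relation pattern a real ontology would use), and running the bundled HermiT reasoner requires a local Java installation.

```python
from owlready2 import Thing, ObjectProperty, get_ontology, sync_reasoner

onto = get_ontology("http://example.org/toy-cells.owl")  # hypothetical IRI

with onto:
    class Neuron(Thing): pass
    class Glutamate(Thing): pass
    class secretes(ObjectProperty): pass  # placeholder relation

    # Necessary-and-sufficient definition: any neuron that secretes glutamate
    class GlutamatergicNeuron(Neuron):
        equivalent_to = [Neuron & secretes.some(Glutamate)]

    # Only the property is asserted here; the classification is left to the reasoner
    class RetinalBipolarNeuron(Neuron): pass
    RetinalBipolarNeuron.is_a.append(secretes.some(Glutamate))

sync_reasoner()  # HermiT (bundled with owlready2, needs Java) computes inferred subsumptions
print(GlutamatergicNeuron in RetinalBipolarNeuron.ancestors())  # expected: True
```

The point of the pattern is visible in the last few lines: the editor asserts only a property of the retinal bipolar neuron class, and the reasoner infers its classification as glutamatergic, keeping the asserted hierarchy small and consistent.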
The same design patterns can be used to annotate individuals, allowing cross-querying of the ontology and individuals and auto-classification of individuals. This approach has been used for a wide range of ontologies including the Gene Ontology [17], the Drosophila Anatomy Ontology [14] and the Cell Ontology [12]. In the Drosophila anatomy ontology, which includes 4767 cell classes, 48% of classifications (5893/12233) are automated via 2807 equivalent class axioms. In the Cell Ontology 59% of classifications (1910/3253) are inferred based on 2907 equivalent class axioms. The strength of this approach is that it can be used to integrate diverse types of knowledge and data into a single query-able classification. An ontology might record information about the structure, function, lineage, location, connectivity and gene expression of some class of neuron or of an individual neuron and use one or more of these properties to classify it. A potential weakness is the mismatch between quantified logic, which records assertions about all members of a class, and the messy, noisy reality of biology and the data we collect about it. For example, when single cell transcriptomics and unsupervised clustering are used to find and predict cell types, the same experiments identify markers that can be used to distinguish them from other cell-types identified in the same experiment. These markers could be used to formally define cell-types. But, either through natural variation or noisy data, these markers are not perfect: all have some level of false positives and false negatives when judged against clusters mapped to cell types [1,23]. Here I present two case studies of how formalizing cell ontologies and using them to annotate the results of single cell analysis can improve the searchability and query-ability of the single cell data. In both cases I explore how we might use the outputs of single-cell analysis to extend cell ontologies and link them to data that can be used for data-driven queries for cell types. Case study: Mouse retinal bipolar neurons Background Retinal bipolar cells (RBCs) are a well-characterized class of neurons that transduce and process signals from the rod and cone photoreceptor cells of the vertebrate retina. RBCs are classically divided into classes based on whether they are synapsed by rod or cone cells (and if so by which types of cone cell), which laminas of the inner plexiform layer of the retina their axons arborize in, and the morphology of their axonal arbor [24]. Mammalian RBCs can also be divided into functional groups depending on whether they depolarize in response to a light stimulus (ON) or to the removal of a light stimulus (OFF) and whether they carry chromatic or achromatic information. A complete connectome for a single region of the mouse retina provides connectomic profiles and circuit context for over 400 RBCs [7]. A classification derived from unsupervised clustering of 25,000 single mouse RBC transcriptomes by Shekhar and colleagues [1] found 15 cell types distinguishable by transcriptome. This study also identified marker genes for each cell type, which they then used in microscopy studies to determine the morphologies of cells corresponding to each type. This, along with mapping of previously known marker genes to transcriptomes, showed that the transcriptomically derived types recapitulated and further subdivided classical classifications.
Formalizing the representation of retinal bipolar neurons to enhance querying and grouping of transcriptomic data The cell ontology already contains terms for the major subclasses of RBCs in the mouse (see Fig. 1) along with manual classification by photoreceptor cell input (rod vs cone) and by function (ON vs OFF). However, prior to this work, these terms lacked formal definitions useful for automated classification and querying. Figure 2 shows extensions to the cell ontology which formalize classification of the general RBC class (retinal bipolar neuron) and its major subclasses. RBCs are known to be glutamatergic and to form excitatory synapses onto their target cells. Fig. 2 shows axiomatization of the general RBC class (retinal bipolar neuron) leading to classification under glutamatergic and excitatory. The former classification is likely to correlate with the expression of genes involved in glutamate synthesis, transport and secretion, and so is potentially a useful classification for cross-querying transcriptomic data. The new axiomatization also deploys standard patterns for recording sensory modality [20] to classify RBCs as visual system neurons. To formalize classification of OFF vs ON responsive RBCs, I added new terms on the response branch of the Gene Ontology covering response to light-dark transition and response to dark-light transition. I then used these to compose formal axioms referring to the response to these transitions as part of visual perception, using these axioms to automate classification. This major functional subdivision of RBCs is likely to be reflected in transcriptomic differences and so is potentially a useful classification for cross-querying transcriptomic data. I also used standard relationships for modelling neuroanatomy [18] to record which laminas of the plexiform layer each RBC innervates, making this information query-able. (Fig. 1: Classification of retinal bipolar cells in the cell ontology. Note that general types (rod, cone, ON, OFF) are non-species-specific, whereas specific types are specified for mouse; this is necessary because morphologically defined classes vary between species.) Using the outputs of data driven classification to structure an ontology of retinal bipolar neurons What outputs of transcriptomic, data-driven classification might we usefully incorporate into ontologies? Assertions about marker expression are an obvious candidate. These are potentially very valuable to biologists seeking reliable markers for identifying specific neuronal classes in their experiments. If used to construct equivalent class expressions, they are also potentially useful for providing formal definitions for classes newly identified by transcriptomic analysis. They can also be useful for automated classification of cell-types from minimal data. For example, Shekhar and colleagues identify Igfn1 as a marker that distinguishes type 7 RBCs from other RBC types. On this basis, we could add an equivalent class axiom recording that any RBC that expresses Igfn1 is a type 7 RBC. Where multiple markers of a cell type are identified, multiple equivalence axioms could be added. This process of generating equivalence axioms could potentially be automated using mappings of cell ontology terms to data-derived clusters. A closer look at the data reveals a potential problem: within clustered transcriptomes there are small numbers of cells that fail to express an identified marker, or express a marker diagnostic of another type.
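One simple way to quantify how cleanly a candidate marker separates a cluster, before deciding whether to turn it into an equivalence axiom, is sketched below. The function and variable names, the detection threshold and the toy expression values are all made up for illustration; they are not the data or thresholds used by Shekhar and colleagues.

```python
import numpy as np

def marker_quality(expression, cluster_labels, marker_gene, target_cluster,
                   detect_threshold=0.0):
    """False positive / false negative rates for one marker against one cluster.

    expression     -- dict mapping gene name -> 1D array of per-cell expression values
    cluster_labels -- 1D array of per-cell cluster assignments (same cell order)
    """
    detected = np.asarray(expression[marker_gene]) > detect_threshold
    in_cluster = np.asarray(cluster_labels) == target_cluster

    false_negative_rate = np.mean(~detected[in_cluster])   # cluster cells missing the marker
    false_positive_rate = np.mean(detected[~in_cluster])   # other cells expressing the marker
    return false_positive_rate, false_negative_rate

# Toy data: 6 cells in two clusters; a marker might only be accepted for an
# equivalence axiom if both rates fall below some agreed cut-off.
expr = {"Igfn1": np.array([5.0, 3.2, 4.1, 0.0, 0.0, 0.0])}   # made-up expression values
clusters = np.array(["type7", "type7", "type7", "other", "other", "other"])
print(marker_quality(expr, clusters, "Igfn1", "type7"))       # (0.0, 0.0) for this toy case
```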
In the case of Igfn1 and type 7 RBCs the percentage of false positives appears very low, and may be acceptable. In other cases (Nnat in type 3B RBCs) the potential level of false positives is very high. There are a number of possible strategies for dealing with this. Mappings could be limited to cases where the expected false positive rate is below some cut-off. Axioms could be annotated to include a record of the expected false positive rate. A more conservative approach would be, wherever possible, to generate equivalence axioms combining multiple gene expression assertions per cell type. I have taken this approach in extending the cell ontology (Fig. 2). However, with this pattern, automated classification from data will only be possible for experiments where expression of all marker genes is assayed. Case study: Drosophila antennal lobe projection neurons Background The antennal lobe of Drosophila is made up of 50 glomeruli, each of which receives input from a single type of olfactory receptor neuron. Each glomerulus is also innervated by uniglomerular projection neurons that carry olfactory information to higher brain centers [25]. The NBLAST algorithm [3] measures how similar two neurons are with respect to their morphology and location. Using co-registered single-cell image data for over 16,000 individual neurons, Costa and colleagues generated a matrix of pairwise NBLAST similarity scores for all neurons and then used unsupervised clustering to find potential cell types. Many of these clusters correspond to classically defined neuron types in the Drosophila brain, including many types of antennal lobe projection neurons [3]. In an independent study, Li and colleagues generated a classification for antennal lobe projection neurons using unbiased clustering based on transcriptome profiles from several thousand projection neurons at various stages of their development [23]. Cells for this study were isolated based on expression of a transgenic marker (GH146). VFB and FlyBase have extensive annotation of expression of this marker to cell types, providing one possible route to candidate terms for mapping transcriptomic clusters. This study did not identify single marker genes that could uniquely distinguish clusters, but rather identified broader markers. The question of which data-type provides the most detailed classification is likely to vary with cell type. For example, automated classification from single-cell RNAseq profiling of Drosophila olfactory projection neurons shows that some neurons that are indistinguishable at the transcriptomic level belong to different classes defined by location, morphology, lineage and odor response [23]. Their distinct odor response functions are likely to be conferred by their connectivity. Formalizing the representation of Drosophila antennal lobe projection neurons The Drosophila anatomy ontology already includes richly axiomatised classes for all 50 known uniglomerular projection neurons, defined by a combination of lineage and glomerulus innervated. It also includes classification of these neurons by sensory modality and neurotransmitter released. It captures the tract through which each projection neuron type projects and the higher brain regions that they innervate. Annotation of clusters of single neuron images with the ontology terms enriches the annotated image data by linking it to formal, query-able descriptions of its relationships to gross anatomy (innervation, fasciculation).
This allows, for example, queries for images of neurons in a specified tract, or that innervate one or more specified brain regions. Li and colleagues find similarity in gene expression profiles between cells sharing the same lineage. The query-able lineage information encoded in the ontology will make it easy to explore this further. The Drosophila anatomy ontology also encodes a growing set of query-able formal assertions of neurotransmitter for each class of neuron and direct records of known synaptic connections between neuron types. With this information, it is possible to group transcriptomes of neurons by neurotransmitter to look for patterns of gene expression which correlate with this, and to group transcriptomes of cells synapsed to these neurons to search for expression of relevant neurotransmitter receptors and associated proteins. Using the outputs of data driven classification to structure the ontological representation of antennal lobe projection neurons What outputs of NBLAST based clustering might we usefully incorporate into ontologies? It would be useful to provide a link to data that could be used for subsequent queries. The clustering algorithm used in this study identified an exemplar (most typical) neuron for each cluster. Where clusters are mapped to ontology classes, the image of a cluster exemplar can serve as an exemplar for the class, serving a role similar to that of a type specimen in taxonomy. This can be used both as a visual reference for the morphology and location of the neuron type, and as a substrate for future queries with NBLAST or any other search tool that can use image data. The exemplar approach has already been used by VFB to define the boundaries of brain regions via links to image data. It may also prove useful for the outputs of other clustering methods; for example, a link from a cell-type class to an exemplar transcriptomic profile might provide a substrate for SCMAP queries to identify clusters corresponding to the same or similar neuron types in other clustering experiments. Figure 3, panel a shows the axiomatization of a uniglomerular projection neuron class (DL2d adPN) along with a formal link to an exemplar neuron (VGlut-F-400462) illustrated in panel b. Future challenges The examples given here are well axiomatised, but the degree of effort put into axiomatising will, of course, depend on use cases and resources in individual projects. Much annotation of classifications from unsupervised clusterings is likely to be simpler and more general, particularly when less well studied tissues are being characterized. Given the huge scale of major efforts to automatically characterize and classify cell types, annotation efforts will need to be efficient and flexible. The same will apply to efforts to make use of the outputs of unsupervised clusterings to extend and refine ontology terms. For example, efficient mapping of markers to cell types would require semi-automated pipelines that can run as soon as mappings are generated. It should be possible to use machine learning methods to determine the most informative set of markers to use in classification of each cluster in the context of a single clustering analysis. Patterns of axiomatization Equivalence axioms are now widely used within biomedical ontologies as a means of automating classification both within ontologies and of individuals.
The success of this effort depends on devising equivalent class axioms with the minimal commitment necessary for correct classification and using standard design patterns. With this approach, it is possible for ontology editors to keep track of the basic properties and patterns needed to drive classification. The rise of complete profiles of cell types poses some dilemmas for this approach. If, as seems likely, there are multiple sets of criteria that can be used to distinguish cell types, should this be reflected in the use of multiple equivalence axioms? To what extent should we record additional properties of classes as simple subclassing axioms? The combination of equivalence and subclassing (restriction) axioms generates hidden General Class Inclusion axioms, logically associating sets of properties with each other in a way that can be hard to keep track of. [Fig. 3 caption, panels b-c: the exemplar (most typical) neuron from the cluster in panel a (see [29]) is shown in yellow. It arborizes in the antennal lobe (AL; red), the calyx of the adult mushroom body (MB calyx; purple) and the lateral horn (LH; blue). Image generated in VFB 2.0 alpha (unpublished). Panel c: OWL axiomatization defining 'adult antennal lobe projection neuron DL2 adPN', to which the cluster in panel a was manually mapped. A minimal-commitment equivalent class axiom defines the class by lineage and innervated glomerulus. Innervation of the MB calyx and LH are recorded in subclass axioms. The axiom in blue links this class to the exemplar of the cluster, providing a standard reference for morphology and a substrate for future NBLAST queries of co-registered neurons.] Thomas Gruber's 'principle of minimal commitment' [26] is particularly relevant to this discussion. This principle suggests that: "An ontology should require the minimal ontological commitment sufficient to support the intended knowledge sharing activities. A shared ontology need only describe a vocabulary for talking about a domain whereas a knowledge base may include the knowledge needed to solve a problem or answer arbitrary queries about a domain." The examples in this paper illustrate how knowledge embedded in ontologies can enrich querying of datasets that provide 'omics profiles of cell types. But we need to avoid bloating ontologies with information that allows 'arbitrary queries about a domain', especially where such queries could better be served via queries of annotated data. For example, while it may be useful to include qualitative assertions about marker gene expression in ontologies, arbitrary queries for cell types by gene expression should involve direct queries of transcriptomic data. Devising strategies to keep this balance sustainable will be one of the major challenges for the future development of cell ontologies. Linking ontologies to data-driven queries Where ontology annotation provides broad contextual information about an individual cell-type identified by unsupervised clustering, it serves to narrow down the input data to a data-driven query for similar cell types. This is important because data-driven querying can be very compute-intensive [3,10], making scaling across a growing dataset potentially limiting. Where more precise annotation of cell-type is possible, linking cell-types to data that can be used in data-driven queries can help users find potential matches and is potentially a source of automated annotation. Conclusions Annotation with ontology terms can play an important role in making data driven classifications searchable and query-able.
This role requires attention to both the lexical and formal aspects of ontology development. Extensive synonym collection is necessary to maximize findability. Formalization is needed to support multiple inheritance classification, querying, and automated classification of individuals from annotation. Successful formalization requires the development of clear, well documented design patterns in which equivalent class axioms are kept minimal, with clear aims in mind for use. By supporting general assertions about cell-types and their properties, ontologies and the application of standard design patterns to annotation can support the description of single cell data at multiple levels of precision, depending on available data. This can be used to specify the context in which marker genes uniquely identify a cell type, or to provide lists of candidate cell-types for mapping to a single cell or predicted cell type from data-driven classification. The relevance and usefulness of annotation with ontologies can be increased by suitable strategies for linking ontology terms to data useful for data-driven queries for cell types. Funding This work and publication were funded by a Wellcome Trust grant to the Virtual Fly Brain consortium: WT105023MA. Availability of data and materials The ontology edits described here were incorporated in the Gene Ontology (available from http://purl.obolibrary.org/obo/go/extensions/go-plus.owl) and the Cell Ontology (available at http://purl.obolibrary.org/obo/cl.obo). Efforts to link Drosophila neuron classes to exemplar neurons are ongoing as part of the Virtual Fly Brain project. About this supplement This article has been published as part of BMC Bioinformatics Volume 18 Supplement 17, 2017: Proceedings from the 2017 International Conference on Biomedical Ontology (ICBO 2017). The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-18-supplement-17. Author's contributions David Osumi-Sutherland wrote this paper and carried out the work described in the results section. The images used in Fig. 2 panels A and B were generated as part of the Virtual Fly Brain project (acknowledged below). Ethics approval and consent to participate Not applicable. Consent for publication Third party image data displayed in this publication is available under an open license. Competing interests The author declares that he has no competing interests.
2017-12-22T03:59:53.775Z
2017-12-01T00:00:00.000
{ "year": 2017, "sha1": "f7f7c9881c5b4dd7f69e07ccbebd4ebe89d5c4de", "oa_license": "CCBY", "oa_url": "https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/s12859-017-1980-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f7f7c9881c5b4dd7f69e07ccbebd4ebe89d5c4de", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
148245222
pes2o/s2orc
v3-fos-license
Are non-financial (CSR) reports trustworthy? A study of the extent to which non-financial reports reflect the media's perception of the company's behaviour This study examines the strategies companies have adopted in their CSR or non-financial reporting when responding to media criticism related to poor CSR performance. Seven companies operating internationally which have been criticized for irresponsible behavior (like environmental spills, child labor, poor working conditions, corruption, etc.) are identified. The Wilson response model, "Philosophy of Social Responsiveness," which suggests four distinct corporate responses to criticism (Reaction, Defense, Accommodation and Proaction), is applied. These four responses occupy a continuum with 'low response' on one end and 'encompassing response' on the other end. The findings reveal that, in contrast to the Wilson model, which proposes various degrees of response engagement, companies adopted an either/or response strategy (0-1). They either ignore the criticism (0) or, if they recognize the criticism (1), they respond in all four of the categories suggested by Wilson. Six of the companies chose the 1 approach. The remaining company chose the 0 response, ignoring the criticism. The 0 response strategy is not presented as an option in the Wilson model, but it is clearly an alternative that companies can take into consideration when evaluating and choosing strategies for non-financial reporting. Introduction Corporate social responsibility (CSR) is receiving increased attention. Today, companies are expected to take on responsibilities beyond regulatory compliance and posting profits (Brammer and Pavelin 2004, Samuel and Ioanna 2007). How companies engage with the environment, human rights, ethics, corruption, employee rights, donations, volunteer work, contributions to the community and relationships with suppliers are typically viewed as components of CSR. There are many different definitions of CSR, but a frequently used definition is that of the European Union (EU): a "concept whereby companies integrate social and environmental concern in their business operations and in the interaction with their stakeholders on a voluntary basis." (European Commission 2001) CSR has received increased interest in the media. Fig. 1 shows this growth by tracking the use of "Corporate social responsibility" in the media from 1989-2012. It is evident that interest in CSR grew dramatically at the turn of the century. The media coverage represented here includes both positive and negative coverage, though the coverage has mostly been negative. Alongside this increased media interest in CSR, we have over the past 30 years seen a sharp growth in so-called non-financial reporting. While fewer than 50 companies provided non-financial reports in 1992, nearly 5,000 companies did so in 2010. Most of the largest companies in the world (around 90% of the FT 500) report on CSR. In some cases the reports are hundreds of pages long. This paper will investigate the extent to which companies react to negative media coverage, and how they react. The remaining pages are organized as follows. I will start by providing an overview of prior research in this field. Following this, I will present the theoretical perspective applied in the paper and explain the methodology and data collection. The seven cases investigated will then be presented, and the findings conveyed in a common framework. Finally I will discuss the findings and the conclusions I have drawn.
Literature review Given the increases in media attention and the volume of CSR reporting, it is not surprising that considerable research has been directed toward understanding the CSR phenomenon. A number of theories and approaches have been applied to address the issue, but the results have varied and have been largely inconsistent. The study "What Motivates Managers to Pursue Corporate [social] Responsibility?" (Ditlev-Simonsen and Midttun 2011) compares the different theoretical approaches to understanding the CSR phenomenon. What we know for certain is that, in addition to the increases in media attention and corporate reporting in the field of CSR, new and CSR reporting initiatives, both voluntary and mandatory, have emerged. An introduction to these will help to shed light on the corporate framework for CSR reporting. There have been a number of new voluntary international initiatives related to nonfinancial reporting. The most significant of these are the UN Global Compact (UNGC) (www.unglobalcompact.org), launched by Secretary General Kofi Annan in 1999, and the Global Reporting Initiative (GRI) (www.globalreporting.org), launched in 2002. The UNGC outlines 10 principles of behavior in the fields of human rights, labor, environment and anticorruption, while the GRI provides more than 80 indicators for financial, environmental and social reporting. Other initiatives related to non-financial reporting include the UN Principles for Responsible Investment, the OECD guidelines for multinational corporations and the Carbon Disclosure Project. In addition to voluntary initiatives, increasingly stringent requirements have been imposed from a government point of view with respect to non-financial reporting. In Sweden, for example, state-owned companies have been required to adhere to the Global Reporting Initiative (GRI) format since 2007. In the UK, according to the Companies Act of 2006, companies listed on the stock exchange are required to include information about environmental matters, employees, and social and community issues in their annual reports. In Norway, companies are required to report on non-financial matters related to the environment and social issues including gender equality, discrimination and employment. In "The Consequences of Mandatory Corporate Sustainability Reporting," Ioannou and Serafeims provide, through a country-level analysis, a good overview of the development of CSR regulations related to international reporting since 1998 (Ioannou and Serafeim 2012). Their study concludes that, in some related areas, mandatory corporate sustainability reporting has improved corporate performance. Other studies have found, though that an increase in CSR reporting does not necessarily improve the responsibility performance of the company. Fry, for example, found an inverse relationship between volume of reporting and CSR performance: the more the company reported, the poorer its CSR performance (Fry and Hock 1976). The study "From Corporate Social Responsibility Awareness to Action?" shows that a focus on and increase in CSR reporting do not necessarily increase the company's CSR performance (Ditlev-Simonsen 2010). So why publish non-financial reports that go beyond what is legally required? Again, many approaches have been tried to answer this question. Applying organizational sense making to CSR does to some extent capture this variety of theoretical approaches to CSR. 
The sense making approach recognizes that managers have, choose and face alternative paths to CSR -and that the path the manager chooses impacts his or her CSR outcome. Here, identity orientation, legitimacy, justification, and transparency are some of the possible reasons managers might choose the paths that they do (Basu and Palazzo 2008). One important reason, in addition to the facts that it is expected and that it signals openness and presents the company as a responsible actor, is that investments in companies are increasingly tied to ethical investment criteria, known as Socially Responsible Investment (SRI). Investors expect companies in which they invest to provide documentation confirming that the companies comply with ethical requirements. "Nearly one out of every eight dollar under professional management in the United States today [] is involved in sustainable and responsible investing." (US SIF 2012) Between 2007 and 2010 social investing had a growth rate of over 13 percent. What does this suggest about claims made in non-financial reports? Companies are not required to have their non-financial reports verified, and technically they can write anything they want in the reports. Intentions about proper behavior does not necessarily imply proper behavior. While regular annual reports generally must be verified by an auditor, this is not the case with non-financial reports (although there are a few companies that voluntarily have the reports verified). Naturally, businesses do not want to write anything negative about their activities. Therefore they focus on presenting themselves in a positive light. To what extent, then, can community (and investors) rely on what they read in these reports? A review of various non-financial reports found very little mention of the dilemmas that businesses face, and very little mention of CSR-related issues for which companies have been criticized in the media. From an academic perspective, two relevant studies investigating the media impact on corporate CSR reporting have been consulted. One of these investigated H&M and Nike CSR disclosures from 1987-2005 and found that the more negative media coverage the company received, the more positively the companies reported their own CSR performance (Islam and Deegan 2010), The other reviewed the relationship between media coverage related to environmental issues and annual report disclosure in nine energy and/or resource-intensive industries from 1981-1994 (Brown and Deegan 1998). Both of these studies were based on legitimacy theory and media-related theory, and both found that media attention on a CSR-related topic was significantly associated with increased corporate disclosure on the same topic. In this study, we will approach the topic in differently inasmuch as we will focus on specific events or scandals where particular companies were criticized for irresponsible behavior-i.e. not reporting using a timeline format but rather using a point-in-time approach. Seven companies which are operating internationally and have been criticized for irresponsible behavior (violation of human rights, pollution, poor working conditions, etc.) will be investigated. We will explore the extent to which the issues for which these companies have been criticized have been represented and reflected in financial and non-financial reports representing the year of the most issue hits in media. 
From a theoretical perspective, the study will place the sense making and legitimacy approach to corporate disclosure in the framework of the five steps presented in Wilson & Carroll's "Philosophy of the Social Responsiveness" (Carroll 1979). The focus, therefore, will be on the company's social responsiveness (CSR2), which is not necessarily, as addressed previously, the same as the company's degree of responsibility (CSR1). Accordingly, we will not investigate the extent to which the "media scandals" have led to actual changes in responsibility in the companies. Rather, we will study how the companies have responded to the criticism through disclosure using CSR or non-financial reporting. The study has both academic and practical implications. On the academic side, it will test Wilson & Carroll's "Philosophy of the Social Responsiveness" model. At the same time, it will be useful for businesses (to evaluate different strategies and establish benchmarks) and authorities (to assess the extent to which non-financial reporting has any value as long as the reports are not required to be verified). Response Theory Various methods are available for sorting and categorizing the ways in which businesses communicate. In his article "A Three-Dimensional Conceptual Model of Corporate Performance" Carroll gives an overview that describes types of corporate performance ranging from "do nothing" to "do much" (Carroll 1979). Carroll refers to Ian Wilson's "Philosophy of Social Responsiveness" model, originally presented by Wilson in 1975 in the chapter "What one company is doing about today's demands on business" in the book Changing business-society interrelationship (Wilson 1975). Here, he claims that "questions of social responsibility are, therefore, no longer peripheral, but central to decisions about corporate planning and performance" (ibid page 25). (Given that this is very much in line with today's views of managers in leading companies, one wonders whether Wilson was ahead of his time in 1975, or if little has happened in the CSR field since then.) Wilson concludes that corporate social responsibility is, in effect, "essentially and primarily a matter of 'social responsiveness'" (ibid page 25). The importance of social responsiveness in the CSR setting is a key element of this study, with a focus on CSR-related responses to media criticism in non-financial reporting. Wilson's model describes the continuity of response, and therefore fits well in this study as a means to assess companies' responsiveness, in their CSR reporting, to criticism. Other studies have also used this model to evaluate corporate social behavior (Clarkson 1995), and one of the most popular books for teaching CSR and ethics at the university level today, "Business Ethics" by Crane and Matten, uses the model when describing CSR and strategy: corporate social responsiveness (Crane and Matten 2007). In his study of Nike's responsiveness to critics in a given period of time, Zadek (2004) found evidence of a similar, five-stage transformation process. This frequent application of the model is a relevant argument for testing it in this study. The model is based on four types of responses to criticism. As neither Wilson nor Carroll elaborates on the four strategies, a brief description of how they apply to responses to media criticism in corporate non-financial reporting is suggested in parentheses. 1. Reaction (the company reflects the media criticism in its non-financial report) 2.
Defense (the company defends itself against the criticism in the non-financial report) 3. Accommodation (the company acknowledges the criticism and reports that it will improve its behavior) 4. Proaction (the company acknowledges the criticism and sets out to improve its behavior beyond what is expected) Figure 2 illustrates the degree of responsiveness. In this study, we categorized the seven companies studied based on their response strategy according to this model. We will provide practical examples of this categorization, and then consider whether Carroll and Wilson's model covers the relevant alternative response options available to companies through their non-financial reporting. Methodology and data collection In a study such as this, it would be optimal to consider a multitude of companies and compare the responses in various industries relative to nationality and size. This would be an extensive undertaking, and since there is currently relatively little knowledge in this area it may be more appropriate to start by concentrating on a few companies to test Carroll and Wilson's model. This is why we chose the case study method described below. Case study method The case study method can be useful when conducting qualitative research. There are a variety of approaches in this method and a variety of ways to categorize the different types of cases. Case study research is a good approach to testing, revising and building theories. Such theory-building research can lead to new insights (Eisenhardt 1989), which is the purpose of this study. As the purpose of this study is to examine how companies respond to media criticism, it is natural to use multiple cases and apply a so-called comparative case approach. Since we will examine the question of whether Carroll and Wilson's model covers the alternative corporate responses to media criticism related to CSR, the study may be described as presenting evolving theories (Andersen 2003). This type of research design is based on a particular theory or concept, which is then developed or fine-tuned during the study. This can be accomplished by addressing how a theory is applied to a particular area and testing whether this also applies to the cases under study. Also, this approach can help to clarify and deepen an existing theory. Moving from one to several cases allows us to generalize about a particular question based on the findings. Selection of cases The goal of this study is to assess how companies respond to media criticism of CSR in their non-financial reports. The first criterion in selecting companies, therefore, was that they had been subject to CSR-related criticism. As many companies have been criticized for unethical behavior, the selection of an "appropriate" subset of such companies was a challenge. To this end, companies in the bank and investment sector were consulted. In January 2012 the Norwegian Financial Services Association held a meeting with stakeholders (representing leading international financial institutions based in Norway: DnBNor, Storebrand and KLP) to solicit feedback about possible case studies. Five of the companies suggested were used in the study: Statoil, Intex, Lundin, Ericsson and Telenor. Two more companies - Vale and Alstom - were then added to represent a more international collection of case studies. Table 1 includes a list of the companies as well as sector, country of origin, the issue the company has been criticized for, the non-financial reporting form, and company size (number of employees and sales).
The seven case companies in this study are located in different countries: Norway, Sweden, Brazil and France. These countries have varying regulations pertaining to mandatory non-financial reporting (Ioannou and Serafeim 2012). There are contradictory findings concerning the extent to which such mandatory regulations related to CSR reporting actually change corporate behavior (Ioannou and Serafeim 2012, Ditlev-Simonsen 2012, Ditlev-Simonsen 2010). The "scandals" and media criticisms surrounding the case companies go beyond the non-financial reporting regulations, and I have therefore chosen not to address issues related to country-specific reporting regulations. I have, however, included in the table whether and when the companies have signed up to the voluntary initiative UN Global Compact (UNGC). Again, it is not clear to what extent such support actually changes a company's responsibility behavior. Unlike local regulations, though, the UNGC initiative is international. Thus it is easier to compare from company to company than it is with national regulations. Whereas the previously presented Brown & Deegan study investigated the response of resource-intensive B2B industries to negative media coverage, and Islam & Deegan investigated similar responses among companies that sell to end-users (H&M and Nike), the present study includes both resource-intensive B2B industries (Statoil, Intex, Lundin, Vale and Alstom) and companies that sell to end users (Ericsson and Telenor). Furthermore, also as previously mentioned, the study will identify the point in time at which the company was most criticized in the media and review its disclosure response to the criticism. Evaluation method A review of the seven companies was conducted based on the following procedure: 5. In many cases the "issue" continued for several years. To limit the study to the year the "issue" received the most media coverage, we used Factiva to provide an overview of annual media coverage. The company name, location and subject matter of the criticism were used as search terms. For example, the Statoil search used the terms "Statoil" and "Canada" and "oil sands". We selected and focused on the year the issue received the most "hits" on Factiva. Source: Factiva.com 6. When the year in which the issue had received the most hits on Factiva was identified, the financial and/or non-financial report for this year was investigated. Source: the company's own reporting. 7. How the company addressed the criticisms in its reporting and how this related to Wilson and Carroll's model was studied. We developed a database for this containing electronic copies of pages that describe how the company has dealt with the issue criticized. This report is available from the author. The above database is used to categorize responses according to the "Philosophy of Social Responsiveness" model by Wilson and Carroll and to document this through examples. Findings In this section, we first describe what the company is criticized for and how it has responded to this criticism in its non-financial reporting. Thereafter we categorize their response in accordance with the Wilson and Carroll model in the spectrum between "do nothing" and "do much." It is important to note, however, that in this analysis we apply the companies' self-descriptions. It is debatable whether such self-descriptions are true; one can expect an unwarranted amount of self-praise.
At the same time, it is also possible that companies are doing more than what they describe, that they are more socially responsible than may be gleaned from reading annual and non-financial reports. This study therefore does not consider the degree of "truth" of what is reported in the annual financial and non-financial reports, but only what the company has written, that is, how the company presents itself. It is important to note that the companies may have changed their reporting and response strategies in the wake of the criticisms and the responses described, so that reporting after and prior to the year with the most media hits may be based on a different reporting strategy. This will not be covered here, because the scope of the study would otherwise be too complex and answer a different research question. Statoil Criticized for negative environmental impacts of oil sand extraction. A search for "Statoil" and "Canada" and "oil sands" in Factiva yielded the most hits in 2007, with 433 hits. A search of the company's non-financial report "Going North - Sustainable Development 2007" yielded hits for "oil sands" on seven pages, but only four of these noted something relevant about the oil sands-related criticism. The report confirms that there has been considerable debate and criticism about the production and refinement of oil sands - "Our acquisition of a large oil sand deposit further west in Canada has been the subject of much debate and criticism, both in Norway and internationally" - and Statoil defends the company's response and work in this manner: "We have started a comprehensive project in which we will study all possible options for reducing or offsetting carbon dioxide emissions" (p 15). In addition, the company has taken the initiative to ensure that their operations are as environmentally friendly as possible: "Extensive environmental monitoring is used to evaluate relevant impacts of discharge of emissions, both through legally required surveys and through other initiatives such as the global scientific and environmental ROV partnership using existing industrial technology (Serpent)" and "We have also continued to pursue an extensive portfolio of R&D projects for tailoring such response to Arctic regions" (p 28). Statoil is thus open to the criticism it has received for its investment in the oil sands, though it defends its actions and argues that it has behaved properly. With respect to Carroll and Wilson's "Philosophy of Social Responsiveness" model, this reflects the reaction, defense and accommodation strategies. Moreover, the company argues that it has gone beyond the statutory requirements and notes specific examples as referred to above. This reflects a proaction strategy as well. Intex Criticized for its nickel production destroying ecosystems and water supplies in the Philippines. A search for "Intex Resources" and "Mindoro" and "environment" in Factiva yielded the most hits in 2009, with 115 hits. Intex has no separate non-financial report, but a search in the annual report for 2009 yielded hits on 14 pages, three of which were relevant to the Mindoro criticism case (the remaining matches did not address the environmental or social impacts of the project). In this non-financial section of the annual report, the company is very open about the criticism it has received. "In 2009, the company has faced opposition from anti-mining groups in the Philippines, Norway and internationally.
This culminated with a 90-day suspension of the environmental permission ECC one month after the project had received the permission" (the report is in Norwegian and has been translated). Also, "Opposition to the project has also led to the group Future in Our Hands submitting a complaint to the Norwegian OECD Contact Point (NCP), a process that is ongoing. The Board considers this complaint unfounded" (page 5). The company defends itself: "The company has a comprehensive environmental program" (Page 7-2). Also, "Intex Resources wants to help develop sustainable communities. The program for good community activities for Mindoro Nickel includes five key areas: education and scholarships, health, water and sanitation, agriculture and livelihood, initiative for capacity development and support for the local infrastructure" (page 7-3). The report continues: "In the Mindoro Nickel project the company policy is to employ people from the local population with equal pay for equal work. The company has always had this policy, and was apparently the first company to introduce such a policy on the island of Mindoro" (page 7-4). The various parties' negative views on Intex's behavior are clearly presented as the company defends its actions and argues that it has behaved properly. With respect to Carroll and Wilson's "Philosophy of Social Responsiveness" model, this reflects the reaction, defense and accommodation strategies. Moreover, the company maintains that it has gone beyond the statutory requirements and provides specific examples of this. This reflects a proaction strategy. Lundin Petroleum Criticized for possible involvement in war crimes in Sudan. A search for "Lundin" and "Sudan" and "war" in Factiva yielded the most hits in 2001, which was 147 hits. Lundin's 2001 annual report included 19 pages addressing Sudan. Seven of those pages addressed the criticism leveled at the company. In the report, the company recognizes that its engagement in Sudan "has also raised ethical issues, due to the ongoing conflict in that country. The question being asked is whether oil fuels the war or sets the conditions for peace by providing the country with the necessary means to lift itself out of poverty. We believe the latter" (page 2-3). In addition, the company describes its efforts to help the local population: "To try to enhance the well-being of this community and raise its living standards, Lundin Petroleum has initiated a Community Development and Humanitarian Assistance Program (CDHAP). After consulting with local leaders and development experts, it devised the following projects aiming at meeting some of the inhabitants' basic needs" (page 14-15). These initiatives include infrastructure development, water supply, health treatment of 6000 patients, education for over 500 pupils, capacity building and humanitarian assistance. Through CDHAP, "Lundin Petroleum remains committed to finding ways to help the local community achieve long-term economic self-sufficiency." (page 14-15) Lundin recognizes having being criticized for its operations in Sudan, defends its operations and describes the ways in which the company has exceeded what is legally required related to social behavior, by initiating the CDHAP. With respect to Carroll and Wilson's "Philosophy of Social Responsiveness," this reflects the reaction, defense and accommodation strategies. 
Moreover, the company claims to have gone beyond the statutory requirements and mentions specific examples of how it has implemented voluntary measures to help the local community. This reflects a proaction strategy. Ericsson Criticized for hazardous working conditions as well as child labor at a sub-supplier in Bangladesh. A search for "Ericsson" and "Bangladesh" and "child" in Factiva yielded the most hits in 2008, with 10 hits. In Ericsson's non-financial report entitled "Ericsson corporate responsibility and sustainability report 2008," searches for "Bangladesh" yielded hits on six pages (a search for "child labor" yielded 0 hits). Five of the pages contained information that was relevant to the criticism. The company recognizes early in the report that it has done something wrong: "Our commitment to the UN Global Compact and human rights includes reinforcing human rights along the supply chain. We became aware that some of our suppliers in Bangladesh were not meeting our high social and environmental standards. This experience served to sharpen top management focus on this issue, and strengthened our approach to monitoring and engaging our supply chain on improvements" (page 3). Headings such as "Engaging stakeholders" (page 11), "Learning from Bangladesh" (page 14) and "New approach, changed mindset" (page 15) show that the company acknowledges its mistakes and is making changes. According to the report, Ericsson is now operating more "appropriately" through "Mitigating risk through audits and training" (page 15-1). Ericsson is thus open to the criticism it received for poor working conditions at its suppliers; it apologized and promised to address the issue, thereby aiming to avoid similar problems in the future. With respect to Carroll and Wilson's "Philosophy of Social Responsiveness" model, this reflects the reaction, defense and accommodation strategies. Moreover, the company says that it has gone beyond the statutory requirements and mentions specific examples of auditing and training, for example "Ericsson is a founding member of GeSI, the Global e-Sustainability Initiative. A multi-stakeholder organization. Its aim is to promote sustainability within our sector's sphere…" (p. 15-1). This reflects a proaction strategy. Telenor Criticized for hazardous working conditions as well as child labor at a sub-supplier in Bangladesh. A search for "Telenor" and "Bangladesh" and "child" in Factiva yielded the most hits in 2008, with 37 hits. Telenor's 2008 annual report offered limited coverage of CSR. A search for "Bangladesh" in this report yielded results on 13 pages. Four of these pages dealt with the issue of criticism in Bangladesh. The company is open and responsive to the criticism: "In April 2008, Telenor became aware of unacceptable working conditions at several suppliers to its subsidiary Grameenphone in Bangladesh. In response, Telenor initiated a group-wide project to review and improve health, safety, security and environmental standards across the supply chain" (Page 2). "Telenor has also initiated awareness building programs with suppliers in order to increase awareness of HSSE challenges" (p. 6). "We further strengthened our process for monitoring compliance through both announced and unannounced supplier visits" (p. 7). Like Ericsson, Telenor is open to the criticism pertaining to the poor working conditions at its suppliers, apologizing and promising to address the issue, thus aiming to avoid similar problems in the future.
With respect to Carroll and Wilson's "Philosophy of Social Responsiveness" model, this reflects the reaction, defense and accommodation strategies. Moreover, the company holds that it has exceeded the statutory requirements and mentions specific examples of awareness building and of extensive control. This reflects the proaction strategy. Vale Criticized for inhuman conditions and reckless exploitation of nature in connection with hydropower plants in the Amazon. A search for "Vale" and "Belo Monte" and "environment" in Factiva yielded the most hits in 2011, with 20 hits. A search for "Vale" and "Belo Monte" and "human rights" yielded nine hits the same year. Vale offered a separate non-financial report, the 2011 Sustainability Report, and a search for "Belo Monte" in this report yielded hits on four pages. The company writes about its focus on sustainability, but also describes the criticism it has received and acknowledges the need for improvement in this area: "Vale needs to assume its role as a major player and be committed to supporting best practices to ensure that Belo Monte is a sustainable project. What today is a cost can become, with excellent management, a positive return." This is a quote from Sergio Besserman, a professor of economics and ecology (page 8). "Vale is aware that the project [Belo Monte] has caused adverse reactions with regard to its social and environmental impacts, and the well-being of the indigenous communities in the region, during the construction and operational stages. Vale believes that the project will leave a positive legacy for the region. Vale is acting proactively to implement best practices, particularly concerning issues related to sustainability" (page 74). The company documents its focus on and recognition for sustainability, among other things through examples like "At the start of 2012 Vale was awarded the Sustainable Biofuels Award by World Biofuels Markets" (page 74). Vale adds to the project by "Strengthening of the project's public image and reputation as a result of proactive action to achieve continuous improvement in the quality of environmental attributes of the ecosystems in the region" (page 76). Vale is thus open to criticism of its unsustainable behaviour in the Amazon. Even if the response is not comprehensive, the company still promises improvement. With respect to Carroll and Wilson's "Philosophy of Social Responsiveness" model, this reflects the reaction, defense and accommodation strategies. Moreover, the company says that it has gone beyond the statutory requirements by proactive actions and mentions specific examples of this. This reflects a proaction strategy. Alstom Criticized for several cases of gross corruption over several years in different countries. A search for "Alstom" and "corruption" in Factiva yielded the most hits in 2011, with 199 hits. A search for "corruption" in Alstom's separate non-financial report, Sustainable Development and Social Responsibility Report 2010/11, yielded hits on two pages. However, none of the results is related to allegations of corruption. Instead, the report describes the company's good behavior, for example "All employees are free to trigger a confidential alert if they suspect a violation of the rules with respect to securities, accounting, competition or corruption prevention" (page 26). Searching for "bribe," for more detailed information related to the criticism the company received, did not yield any hits.
Alstom does not acknowledge the criticism leveled at the company with respect to the alleged corruption. With respect to Carroll and Wilson's "Philosophy of Social Responsiveness" model, Alstom cannot be placed in the model because it did not exhibit a reaction to the criticism by recognizing it. The company's response strategy in the annual report is to ignore the criticism issue by not addressing it. Moreover, the company claims to be a frontrunner in anti-corruption work: "The Group is a member of the United Nations Global Compact Working Group on Principle 10 and of Brazil's Ethos Institute, where it is a signatory of the anti-corruption" (page 26). The importance of this strategy will be analyzed further in the Discussion section of the article. Discussion and conclusion Based on a study of these seven companies that have been criticized for irresponsible behavior, it appears that distinguishing between the different levels of response, as suggested by Wilson and Carroll's model, is not the most useful approach to understand and categorize corporate responses to negative media coverage in financial and non-financial reports. The most appropriate way to categorize the responses is not to place them on a spectrum from "little response" to "much response", but rather to note whether the companies chose to respond at all. Six of the seven companies (Statoil, Intex, Lundin, Ericsson, Telenor and Vale) did respond to the criticism. They acknowledged the criticism (react), immediately went on to explain why they had done what they had, and to some extent defend and excuse their actions (defense), and then explained what they had done to improve (accommodation) and how they had gone beyond what was required to improve the situation and prevent similar misconduct in the future (proaction). The extent of the review (the number of pages allocated) varied between the companies, though this can be tied to the size of the report (financial and non-financial) in which it appeared. The most important aspect of these strategies is not how many pages are used to respond to the criticism, but rather whether the criticism is acknowledged at all. One of the seven companies, Alstom, chose a different response than did the companies noted above. This company chose not to acknowledge the criticism in its non-financial report. Alstom does not mention the fact that it has been criticized, but rather focuses on the positive aspects of its work related to social responsibility. To some extent, this supports Brown & Deegan's and Islam & Deegan's findings that negative media coverage results in positive non-financial disclosure from the company (Islam and Deegan 2010, Brown and Deegan 1998). However, contrary to these studies, which followed the companies over time, the present study only investigated the company's response in the year of most negative coverage. Alstom reacted differently than the other six companies in that the company did not recognize or address the CSR performance criticism. Still, it can be argued that this supports the sense making model described by Basu & Palazzo: Alstom's management might not identify itself as a socially responsible company, but rather as an investor-responsible company - as long as the negative media coverage did not impact its share price, it did not need to respond (Basu and Palazzo 2008). The same is true for legitimacy: if the key stakeholders are the shareholders, focusing on other stakeholders (like society in general or critical NGOs) might not seem as relevant.
To answer these questions, further studies are needed to follow up Alstom's management strategy related to the media criticism. It might well be, though, that the company did not have a specific strategy related to the criticism; it may have simply ignored it. Previous studies have shown that CSR engagement and disclosure can be very person-related (Ditlev-Simonsen 2010, 2009). It is very possible that in the next year, or in the previous year, the company's CSR disclosure was different and the criticism was recognized and responded to. From a corporate perspective, it is interesting to note that it is possible to ignore the criticism in the annual report and proceed as if it never occurred. There are several possible reasons a company might choose such a strategy. Not least, it is "easier" - or more convenient. The company will not have to engage in the area and will avoid negative consequences of admitting to inappropriate behavior. At least in the short term, it does not appear as if the company has "lost" anything in choosing this strategy. From an academic perspective, it is interesting to note that Carroll and Wilson's Philosophy of Social Responsiveness model is not as relevant in categorizing the strategies corporations use to meet the societal criticism to which they are exposed. It is generally the case that, after the company has acknowledged the criticism (react), it proceeds immediately to the defense, accommodation and proaction stages. The degree and focus of each of these is related to the nature of the criticism, and of the behavior that led to the criticism. For Telenor and Ericsson, for whom working conditions at suppliers were below the quality the companies wanted, it was acceptable to admit they had made a mistake and to claim that it will not happen again. Statoil, Intex, Lundin and Vale, however, must focus more on defending their operations, as these companies plan to continue with the operations in question. All of these responses, however, could support Basu & Palazzo's cognitive identity and legitimacy approach of sense making, with a linguistic balance of the justification and transparency approach, which might be linked to management strategy or to the person in charge of developing the non-financial disclosure (Basu and Palazzo 2008, Ditlev-Simonsen 2010). Even when faced with very different criticism, the companies that responded did so immediately, adopting all four strategies proposed by Carroll and Wilson. These findings support the argument that if a company reacts to criticism, it immediately has to go all the way, applying a combination of the four strategies (react, defense, accommodation and proaction) simultaneously. There is, alternatively, a different approach: a null response, that is, not acknowledging the criticism in non-financial reporting. Thus, from a theoretical perspective, this study has followed the Process of Building Theory from Case Study Research (Eisenhardt 1989), contributing to extending the Carroll and Wilson model to reflect a new dimension with an either/or response: either ignore the criticism or fulfill the four strategies simultaneously. This theory extension also supports the sense making theory and extends the variety of responses that can make sense for companies. Further research could help to shed light on why companies might choose this strategy and what the effect of this might be, compared to those companies that chose to react and respond to criticism in their non-financial reporting.
This study suggests that if a company decides to recognize the criticism, it has to go "all the way" (react, defense, accommodation and proaction), but it is also possible to entirely ignore the criticism and not mention it at all. It is also possible, like Alstom, to portray itself as deeply engaged in an area where it has been criticized without recognizing the criticism. Maybe, then, another way of categorizing responses to media criticism is to reflect how "deep" into regret and remorse the company goes. Unlike Ericsson and Telenor, which can "afford" to be humble because the poor working conditions at their suppliers were perceived as an "error", other companies such as Statoil, Intex, Lundin and Vale want to continue performing the operations they have been criticized for, and therefore have to apply a different response strategy, one not as "deep" in regret and remorse. I want to extend my gratitude to research assistant Elisabeth Støve, who conducted the Factiva and corporate report searches. The study is partly supported by a grant from Fondet til fremme av bank- og finansstudier (The Norwegian foundation in support of bank and finance studies).
2018-10-23T07:44:07.999Z
2014-06-30T00:00:00.000
{ "year": 2014, "sha1": "8bc98679039acd14b77fbde8c75506d09208a4c9", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.22164/isea.v8i2.85", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "8bc98679039acd14b77fbde8c75506d09208a4c9", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Sociology" ] }
262966898
pes2o/s2orc
v3-fos-license
Environmental health in space. physical limitations of Earth to include the outer reaches of space. Over the past 40 years, beginning with the former Soviet astronaut Yuri Gagarin, we have been exploring this previously unknown and inaccessible realm. Although the number of people who have so far experienced the environment of space is limited (some 300 astronauts), the lay public is expected to join the ranks of these welltrained and well-educated space travelers. This has already begun with the inclusion of Dennis Tito on board a Russian rocket to the international space station (ISS). In the near future, it is expected that the lay public will be able to travel and stay in space for extended periods. In any case, many medical capabilities are needed for those who travel and stay at a significant distance from Earth, such as an extended sojourn in the ISS or participation in interplanetary missions—a more distant probability. Environmental health in space may yet become a field of study in its own right. One of the main avenues of research to be undertaken at the ISS will be the elucidation of biomedical risks and hazards relating to space habitation. Space medicine—the ability to deliver high quality health care in space—is in its earliest stages of development; it is hampered by limited flight opportunities, few clinical incidents, and competition for resources among various disciplines (1). As a result, few publications exist in this field as compared to other areas of medicine (2), and it is uncertain whether all potential problems that may arise from long-term space habitation have been anticipated and tabulated (3). Space medicine, therefore, will play an important role in biomedical research in space in the 21st century, as well as in the prevention and treatment of medical problems that will arise in a space environment. The space environment encompasses unique characteristics, forcing scientists to investigate a variety of subjects by using this interesting environment—an environment that is not easily reproduced on Earth. From a cosmic point of view, we have to consider the prevention of space pollution produced by human activity, such as solid wastes and trace contaminants. This “space garbage” includes breakaway parts of rockets, spent satellites, paint flecks, and other hardware. These pieces of trash travel in orbit at high speed (~30,000 km/hr), posing a potential hazard to spacecraft and astronauts. In space there are varying primary cosmic rays. These cosmic rays are continuously penetrating Earth’s magnetosphere. However, we are usually protected from galactic cosmic radiation and solar particle radiation by a double radiation shielding, namely, the atmosphere and the magnetosphere. Therefore, the trapped particle radiation confined by the magnetosphere in the Van Allen belt is the major source of exposure in the low-Earth orbit where the ISS is located. This is especially the case in the South Atlantic Anomaly, where the Van Allen belt is shifted to a low-Earth orbit. The resulting increase in solar activity might lead to a 10–100-fold increase in radiation originating from solar flares (4). In addition, this increased solar activity influences the distribution and intensity of the geomagnetic field through an increase in plasma jet to the earth, causing a rise in galactic cosmic radiation and trapped particle radiation as well as solar particle radiation. 
We must consider periodic and accidental solar activities when contemplating cosmic radiation in a space environment. In addition to existing cosmic rays, there is a possibility of the production of secondary rays, which could result from the interaction of primary cosmic rays and the structural materials of the spacecraft or space station. We also have to consider single-particle effects produced by heavy ions (high Z and energy particles), although there are several conflicting reports in this regard. [See Nelson et al. (5) for an affirmative view, and Krebs et al. (6) for one that disputes these effects.] Potentially, as well, there might be a synergistic action between radiation and microgravity (7). In any case, we must make every effort to reduce the strength and quantity of potentially hazardous radiation. Other physiologic problems of weightlessness are motion sickness, a fluid shift to the upper part of the body due to a loss of hydrostatic pressure, and decreased physical fitness. A prolonged stay in space results in a decrease in blood volume and red blood cell mass, muscle atrophy, a loss of bone mass, and autonomic system disturbance causing orthostatic intolerance. These symptoms are not extremely severe in terms of being life threatening; however, they should be taken into account for the efficient and safe operation of the spacecraft or space station. Exposure to microgravity produces a number of physiologic changes of metabolic and environmental origins that increase the potential for renal stone formation. Although we do not have adequate information as to the changes in immune function caused by being in space, there are reductions in the quantity and reactivity of T lymphocytes, the activity of helper cells and natural killer cells, and the synthetic activity of the principal lymphokines (8), and a decrease in interferon production. The pathogenicity of microorganisms is altered, and some microorganisms have shown a resistance to antibiotics after long flights (8). Immune suppression could impair both physical and mental performance by increasing susceptibility to opportunistic microorganisms. Appropriate onboard exercise is believed to be an effective countermeasure against a decrease in immune function (8). In the 21st century, we expect members of pediatric, geriatric, and obstetric populations, as well as astronauts, to travel and stay under the challenging conditions of a space environment. These populations may prove to be more susceptible to the potential hazards of a space environment than those who are selected as astronauts, partly because of astronauts' greater capability in physical and mental fitness and because of their specialized training for occupational missions. In the long run, there is much territory to be covered toward achieving safe, comfortable travel and long-term habitation under the extreme conditions of space, but these are challenges that, when eventually surpassed, will potentially be of great benefit to the human race. I thank Nault Doreen for help in writing the English manuscript. Department of Preventive Medicine and Public Health Tokyo Medical University Tokyo, Japan E-mail: KYP02504@nifty.ne.jp
2018-04-03T02:57:53.691Z
2001-07-01T00:00:00.000
{ "year": 2001, "sha1": "1c35dbffc167d83c35d30f889bad38a3d11b95a5", "oa_license": "pd", "oa_url": "https://doi.org/10.1289/ehp.109-a300", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7ccca7e60d8b6ce45a1f6b5f625f2b28c81c388e", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science", "Medicine" ] }
49480175
pes2o/s2orc
v3-fos-license
Information and communication technology and climate change adaptation: Evidence from selected mining companies in South Africa The mining sector is a significant contributor to the gross domestic product of many global economies. Given the increasing trends in climate-induced disasters and the growing desire to find lasting solutions, information and communication technology (ICT) has been introduced into the climate change adaptation mix. Climate change-induced extreme weather events such as flooding, drought, excessive fog, and cyclones have compounded the environmental challenges faced by the mining sector. This article presents the adoption of ICT innovation as part of the adaptation strategies towards reducing the mining sector’s vulnerability and exposure to climate change disaster risks. Document analysis and systematic literature review were adopted as the methodology. Findings from the study reflect how ICT intervention orchestrated changes in communication patterns which are tailored towards the reduction in climate change vulnerability and exposure. The research concludes with a proposition that ICT intervention must be part of the bigger and ongoing climate change adaptation agenda in the mining sector. them, Mirza (2003) and Kunkel, Pielke and Changnon (1999), have modelled the weather variability over a period of time and noted that the climate is changing and this change is persistent. The changing climate is partly responsible for the risks outlined earlier. Physical risks take the form of power (electricity) blackouts, displacement of people, landslides, and flooding as well as overall financial risks (Hojers, Dreborg & Engstrom 2011). The changing climate further leads to job losses in the economy and at times political and social unrest and instability (Okereke & McDaniels 2012). In most cases rebuilding damaged structures eats deep into budgets, including the national fiscal budget (Appiah & Osman 2014;Hilty, Lohmann & Huang 2011;Hojers et al. 2011). Disasters disrupt mining activities as they may lead to both temporary and permanent closures to operations. Fog, mist, and floods, for example, may delay mine product transportation. Heavy rains can result in coal stockpiles getting wet, which means they cannot be transported for power generation (Sapa 2014). Pits may get flooded and equipment drowned as well as getting damaged. Overall, the impact of CC has negative economic, social, and environmental consequences, particularly if left unchecked. Therefore, this is where new innovative ways that will facilitate daily activities through information and communication technology (ICT) comes in, particularly in the mining sector. Governments and policymakers are working to manage these consequences (risks) as highlighted in Nhamo (2014). Furthermore, in the suggestions, Nhamo mentions public awareness and efficiency in national communication among stakeholders as key strategies for CC adaptation. Getting the public to be aware and more informed about CC is precedence to adapting to CC. Against this backdrop, businesses need to adopt new communication technologies or restructure their existing communication technology to enable them to build the capacity for efficient communication. This article interrogates the role of ICT innovation as part of the mining sector CC adaptation strategies towards reducing physical risks of mining operations. Evidence from selected mining companies in South Africa is used to support the arguments. 
The authors propose that ICT innovation is part of the value additions in the mines for capacity development and public awareness campaign required for CC adaptation. Furthermore, ICTs are seen as vital for negotiation and national communication of CC adaptation strategies. The subject is explored through the lenses of ICT readiness, ICT usage, and existence of indigenous knowledge systems in the mines. The authors' proposal is informed by Bunker and Smith (2009) who argued that ICT adoption should be part of the capacity-building strategies in managing DR, and in our case CC-induced DR. The research findings could inform the mining sector on the potential contribution of ICT innovation towards CC adaptation, and physical risk reduction. In the context of this study, the ICT components considered are not limited to the use of computers (desktop and laptops). Rather the indicators include mobile phones, fixed telephone lines, internet access, fixed broadband subscription, personal computers, and personal digital assistants (PDAs). Therefore where there is the presence of these indicators and they are used, it is assumed that there is ICT compliance (The International Telecommunication Union 2007). This article has four main sections that include: a literature survey on ICT readiness in the CC adaptation context, the methodology, the findings, and a summary of the article. Literature survey This section presents a literature survey of ICT applications in CC adaptation and physical DR reduction. The literature is explored extensively around ICT readiness and communication technology innovation in the mining sector. The literature further presents the mining value chain, reflecting the stages in which mining products such as coal, gold, diamond, platinum, and iron ore undergo before final entrance into the market. The mining value chain is presented to prepare the context for ICT application towards CC adaptation and physical risk reduction at different stages of the value chain. However, as a precursor to the discussion of ICT readiness in this article, ICT is defined in the context of this article as the computer based communication equipment and telecommunication facilities that enable digital exchange of information. Going by this definition, computers, mobile phones, PDAs, and environmental impact assessment systems are included. The readiness theory suggests that readiness is more dynamic and complex than just preparedness to undertake specific tasks. Drawing an inference from Nhamo (2013), readiness is a concept that should not be confused with ephemeral preparedness. A case of ephemeral preparedness is where businesses may decide to stash computers at different offices without installing some software that is meant for specific tasks or employing people who have the requisite skills. In such a situation, such an organisation may seem to be ICT compliant at face value but in the real assessment they are not. Therefore, such organisations can only be seen as having the intention to undertake such task but in the real sense are not ready for it. According to Nhamo (2013), readiness has wider parameters and indicators that give assurances that a clear structure and platform is well constituted for individual actions. ICT readiness is part of a more extensive technology-readiness in the mining sector. The broader technology-readiness spectrum emphasises the technologies that enable safety measures, environmental protection, and cost-effectiveness of operations (Melville 2012). 
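The compliance rule described in this section, where an organisation counts as ICT compliant only when the ITU indicators are both present and in use, can be written down as a small check. The sketch below is a minimal illustration under that reading; the indicator names follow the text, while the data structure, the strict all-indicators rule, and the example values are assumptions.

```python
# Minimal sketch of the ICT-compliance rule described above: an organisation is
# treated as ICT compliant only if every ITU indicator is both present and used.
# Indicator names follow the text; the input format and strict "all indicators"
# rule are assumptions for illustration.

ITU_INDICATORS = [
    "mobile_phones",
    "fixed_telephone_lines",
    "internet_access",
    "fixed_broadband_subscription",
    "personal_computers",
    "personal_digital_assistants",
]

def is_ict_compliant(indicators: dict) -> bool:
    """indicators maps each indicator name to {'present': bool, 'used': bool}."""
    return all(
        indicators.get(name, {}).get("present", False)
        and indicators.get(name, {}).get("used", False)
        for name in ITU_INDICATORS
    )

# A hypothetical mine where hardware is present but unused -- what the readiness
# discussion calls ephemeral preparedness rather than genuine readiness.
example = {name: {"present": True, "used": False} for name in ITU_INDICATORS}
print(is_ict_compliant(example))  # False
```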
Modern design in mining technologies is linked to laser technologies where computers form the basis of activities. ICT usage is the actual deployment of skilled staff and integration of the ICT facilities into the daily operations of the adopting organisation. Indigenous and local knowledge systems (ILKSs) are the information base for a society, which facilitates communication and decisionmaking. The ILKSs are dynamic and are continually influenced by internal creativity and experimentation as well as by contact with external systems (Flavier, de Jesus & Navarro 1995). This brings ICT into the deep of general technology-readiness strategy in the mining sector. The ultimate goal of ICT innovation in the mining sector is to facilitate communication and its value addition is for safety of operations as these become climate resilient and compatible. When ICT innovation gives an organisation the opportunity to build efficient communication capacity, there is every likelihood that information will circulate at the right time. When information circulates, not only is awareness raised, but it also prepares the minds of people towards eventualities, and such preparation of mind is part of the resilience mechanism. Heat waves are also traceable to both CC and mining activities. Heat waves, according to the Financial Times of 18 January 2011 resulted in the deaths of mining workers in Chile (Roeth & Wokeck 2011). In the examples given herein, stakeholder communication mechanisms are important. Likewise an efficient weather information system is advantageous not to stop the DR from happening, but to help the businesses affected to cope or build resilience. Early warning systems are inevitable in having the mining sector prepare for such eventualities when they take place. Therefore, devising CC adaptation strategies such as 'increased public awareness' and 'efficient communication mechanisms' seem to be the way forward because physical risks associated with extreme weather events are more frequent and higher in magnitude in recent times (Nhamo 2014). The next section highlights mining value chain and CC adaptation in the mining sector. It will be an incomplete story to discuss ICT applications in the mining sector without briefly giving the background of the mining value chain. The generic mining value chain embraces the following key stages: exploration, design, extraction, processing, and closure (Vorster 2001). Figure 1 presents the mining value chain. When tracing the role of ICT in CC adaptation and CCinduced physical risks reduction in the mining sector, it is paramount therefore that one understands the characteristics of such risks across the mining value chain. For example, flooding can impact negatively across the value chain whereas droughts can have a negative impact during processing. This is so because drought initiates water stress and mining companies spend more in sourcing water used both for washing and as the coolant during processing. Fog and mist may have a negative effect on delivery and transportation through destruction of roads to and from mining locations and facilities. Hailstorms may result in electricity blackouts which will severely affect extraction and processing. At every stage of the mining value chain, effective and efficient communication is important either as part of early warning system structure for DR reduction or as a value addition in the mines. 
Ospina and Heeks (2011:7) rightly put it; 'efficiency of communication among stakeholders and business networking in the mines' helps them to build resilience. That is to say, ICT innovation can be harnessed for (1) early warning systems for flooding resulting in timely evacuations, (2) sharing knowledge of adaptation among staff, (3) awareness raising of climate-related risks, (4) coordinating disaster recovery information, (5) supporting consultation and participation in developing adaptation policies, (6) providing training in flood and risks management, (7) providing data to aid adaptation decision-making and (8) gathering and analysing information for vulnerability assessments (Ospina & Heeks 2010;Shabajee et al. 2014). A summary in terms of how the mining value chain is affected by the impacts of climate DR is presented in Table 1. The impacts are categorised on a scale low impact (L), medium Impact (M) and high Impact (H). Different disaster-orchestrated physical CC risks have different levels of impact along the stages of the mining value chain. Such impacts can be L, M, or H depending on the magnitude and frequency of occurrence of the disaster. The impacts among others include air pollution, unnecessary competition with other small businesses, loss of revenue in the event of disasters and accidents, death, environmental degradation, displacement of communities to make way for mining activities, deforestation, conflicts, and social vices that are associated with mining communities (Appiah & Osman 2014;Hiltey et al. 2006;Hojers et al. 2011). Various efforts are made to address these impacts. For example, advanced fire information systems (AFIS) is a piece of technology adopted in the Ghana mining sector to checkmate negative impacts of wildfires (AFIS 2015). In some countries, the effort to tame the negative impacts of mining has led to initiating environmental policies geared towards minimising mining's environmental impact. Therefore, considering the impact of CC in every stage of the mining value chain (exploration, extraction, processing, transporting, and marketing), value protection and addition is required at every stage. For instance, during exploration there are a lot of negotiations and the technology that will facilitate communication among stakeholders is required. Likewise, at the extraction stage, safety information is hugely required, and research and development (R&D) is needed in order to avert physical risks. Similarly, at the transportation stage, there is need to localise weather information in order to re-route distribution channels. At the marketing stage, the mining sector requires market research information on both e-business and global markets to keep abreast of what is happening in other countries. There is now sufficient evidence that managers of national economies are promulgating CC policies and adaptation strategies to reduce CC-induced DR (Roeth & Wokeck 2011). Such advances by the stakeholders are reflected in Hojers et al. (2011) who noted that reasonable effort has been channelled towards institutionalising early warning systems in disaster prone sectors like mining. However, in several instances, the strategies are either still rooted in ILKSs, not timely, or even undermined by laissez-faire attitudes and less goodwill of the stakeholders. Considering the Intergovernmental Panel on Climate Change (IPCC) report of 2012 and the Kyoto protocol, information dissemination is essential in both CC adaptation and response to DR caused by CC. 
However, in less developed countries, the cost of putting ICT structure in place has always been the bane to good efforts. Drawing from policy positions mentioned in the previous section of this article, clear policies addressing information communication strategies in achieving CC adaptation are nearly non-existent. However, a number of strategies are outlined by Florini (2011) as key climate adaptation strategies in the mining sector. They include the need to: (1) continue developing and improving early warning systems in respect of extreme weather events, (2) facilitate increased uptake of seasonal climate forecast among key stakeholders, (3) maintain and update The Risk And Vulnerability Atlas, (4) investigate and implement plans to use the media and ICT for information sharing, (5) promote R&D in order to explore risk reduction, (6) collaborate with social groups such as community and nongovernmental organisations for awareness and achieving technology transfer and (7) strengthen both formal and informal education with respect to CC, DR reduction, and CC adaptation. Against such backdrop industrial CC adaptation strategies are a reflection of the mining sector CC adaptation policies. Scholars of high repute, Wastell and White (2013) and Pithouse et al. (2012), have nicknamed ICT intervention 'change levers'. This is partly because of an efficacy of information sharing on the platform of some ICT components particularly mobile phones. Such effectiveness in information dissemination is required most in the event of a disaster to enable people to evacuate or take necessary precautions. In the mining sector ICT being a change lever, can enhance early warning systems thereby reducing potential risks that at times cause economic losses. However, some practitioners, on the other hand, do not see ICT intervention in a sector as the almighty formula for the sustainability status mentioned earlier. Rather they have accused ICT's innovation of being an unfriendly august visitor because of havocs done to the environment by giant ICT facility producing companies in a bid to produce ICT gadgets that people use on a daily basis (Aleke, Wainwright & Green 2011). ICT facility producing companies are equally accused of pollution and illicit dumping of waste products, thereby contributing to contamination of the environment that is already under threat of CC-induced disasters (OECD 2009). These parallel arguments have helped us to focus this discussion with respect to identifying risk reduction potentials of ICT intervention in the mining sector. The evidence is pointing out the need to develop further research projects on the correlation between ICT intervention, CC adaptation and DR reduction in other sectors of the economy. The next section discusses the methodology adopted for the study. Materials and methods The article poses the following research question: to what extent is ICT incorporated in addressing CC adaptation by selected South African mining houses? In response to the research question, the primary objective of the study was spelt out as: to determine measures undertaken by selected South African mining houses in incorporating ICT in addressing and enhancing CC adaptation at different stages of the mining value chain. The study gathered data from 10 selected South Africa mining houses through using their annual reports, individual company's CC policies, carbon disclosure reports, and other relevant documents. 
The mining houses selected cut across the five top mining subsectors in South Africa that include coal, gold, platinum, diamond, and iron ore. Table 2 presents the mining houses selected based on five principal products in the mining industry and the documents from each company as well as the province where the corporation is operating in South Africa. A selection of two mines from a designated area was purposive to ensure proper representation and also to take cognisance of the availability of reports and policy statements online. Online availability of relevant documents was part of the selection criteria. Document analysis and systematic literature appraisal were adopted to unravel institutional ICT readiness for DR reduction and CC adaptation in the South African mining sector. This comes against the backdrop of renewed interest in DR reduction as well as shifting emphasis from CC mitigation to CC adaptation and transition to green economy in the mining sector through clean technologies. Document analysis and systematic literature appraisal have been used extensively in qualitative research (Bowen 2009;Center et al. 2012). Systematic literature appraisal opens up critical discourse of materials studied (Center et al. 2012). Similarly, Bowen (2009) maintains that document analysis is well situated in grounded theory more especially from a constructivist point of view. Further in this direction, Maree (2012) maintains that systematic literature survey approach and analyses of publicly available documents are typified, critical, and integrative when using mainly inductive reasoning. Suffice it to say that some advantages bequeath use of document analysis -namely cost-effectiveness of sourcing the materials because of availability in the public domain. Given that not all available online information is authentic, every effort was put into retrieving documents from official selected mining house websites. However, the methodology is not immune to weaknesses, as highlighted by Nhamo (2014). Among the weaknesses pointed out are insufficient details of information and bias of selection. This is partly because there is no laid down protocol for selecting documents to be analysed. These weaknesses of document analysis were ameliorated in this particular study by supplementing CC policy statements with annual reports and Carbon Disclosure Project (CDP) request information reports. In total 16 documents were analysed comprising nine annual reports, two environmental performance standards, two CC policy statements, and two CDP information reports. This was to ascertain whether ICT intervention is part of their DR and CC adaptation strategies. Using documents as a data gathering technique focuses on all types of written communications that may shed light on the phenomenon that is being investigated (Arts, Fischer & Van der Wal 2012). The documents reviewed were validated to determine whether they followed the protocol of King III 1 reporting template which requires corporate organisations to include environmental, corporate governance, and social responsibility in their annual reports. The emphasis of the articles is to provide the reflections of such documents about ICT application as part of DR reduction and CC adaptation strategies in the mining sector in South Africa. 
Presentation of results and discussion of findings Following the set research question and objective, the documents were analysed to establish whether ICT interventions reflected in the policy documents or the annual reports as part of CC adaptation strategies or DR reduction approach in the mines. A keyword search using 'ICT' and 'DR' formed part of the analysis (Figure 2). Going through the documents to find out whether ICT intervention is part of CC adaptation strategies or part of risk reduction strategies in South African mines, we identified a couple of other existing strategies already in place that are not necessarily linked to ICT intervention. These include: redesigning and modernisation of mining pits and mining to checkmate excess of floods, adjustment of transportation channels to counter floods and hailstorms, and the adoption of energy efficiency plans and swift shift to the alternative renewable energy mix. In 60% of the documents analysed it appears that ICT intervention did not quite reflect as part of CC adaptation strategies. In the Anglo-America Thermal 2014 annual report and Thabazimbi Iron Ore 2014 annual report, there was no mention of ICT intervention at all, whether as part of CC adaptation or DR reduction strategy. This revelation from our analysis raises further concern as to what exactly are the roles of ICT in the South African mining sector. Wainwright and Waring (2007) opined that ICT intervention in corporate organisations follows two streams of needs, namely: front-office applications and back-office applications. Front-office application talks of ICT readiness in respect of an exchange of information between the group and the outside society. Typical examples of front-office ICT applications include electronic mail communication with stakeholders, electronic advertorials, social media, and electronic billboards. Back-office ICT applications include payroll service systems, financial bookkeeping systems, and database management systems. Suffice it to say that ICT can be useful in many fabrics of the organisational value chain, especially in networking with other businesses. However, the objective of this study is within the scope that is focused on ICT intervention and DR reduction. The absence of ICT intervention in the CC policy documents of many mining companies raises a suspicion of ICT intervention being skewed towards back-office applications instead of being part of CC adaptation strategy or DR reduction mechanism. Even though ICT intervention was not built into the mainstream of international CC adaptation policies (IPCC 2012; UNFCCC 2007), the CC adaptation policy of Anglo-Ashanti mine, one of the documents analysed, highlights the options available to individual organisations to localise their CC adaptation strategies and also DR reduction framework.
Footnote 1: King III reporting requires integrated representation of a company's performance in terms of both financial and other value-relevant information. Companies are meant to provide greater context for performance data in their integrated annual or quarterly report. They should clarify how value-relevant information fits into their operations or business goals. The King III report of every company should also contain details of the social responsibilities of such companies to the hosting communities as well as the companies' financial statements.
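The keyword search using 'ICT' and 'DR' mentioned above can be illustrated with a short sketch. This is not the authors' actual procedure; the folder name, file format, and the extra spelled-out term variants are assumptions made for illustration.

```python
# Minimal sketch of a keyword search across the analysed documents, assuming the
# 16 reports have been converted to plain-text files beforehand. The acronyms are
# matched case-sensitively; the spelled-out variants are assumed additions.
import re
from pathlib import Path
from collections import Counter

SEARCH_TERMS = {
    "ICT": [(r"\bICT\b", 0), (r"information and communication technolog\w*", re.IGNORECASE)],
    "DR": [(r"\bDR\b", 0), (r"disaster risk\w*", re.IGNORECASE)],
}

def keyword_hits(text: str) -> Counter:
    """Count occurrences of each keyword group in one document."""
    hits = Counter()
    for label, patterns in SEARCH_TERMS.items():
        hits[label] = sum(len(re.findall(pat, text, flags)) for pat, flags in patterns)
    return hits

# Hypothetical folder holding the extracted text of the annual reports,
# CC policy statements, and CDP responses.
for path in sorted(Path("mining_reports_text").glob("*.txt")):
    counts = keyword_hits(path.read_text(encoding="utf-8", errors="ignore"))
    print(f"{path.name}: ICT={counts['ICT']}, DR={counts['DR']}")
```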
Against this backdrop, individual mining companies in South Africa have the option to integrate ICT intervention in their CC adaptation and DR reduction blueprint (master plan). Apart from this, the Department of Environmental Affairs (DEA 2011) insisted that businesses in South Africa should target 'smart environment' in every one of their business operations. Smart environment according to the DEA (2011) master plan is where risk factors to both people and business are reduced to the barest minimum and safety is guaranteed. Smart environment also encompasses different kinds of smart devices continuously working to make inhabitants' lives more comfortable. The dual emphasis on CC adaptation and reduction in physical risks is reflected in the Goldfield (2014) annual report that proposes building community resilience to CC through network formation and environmental sustainability as a prerequisite for a smart society. An extract from the document under analysis is presented in Box 1: "The most important intervention in Gold Fields' long term (more than 3 years from now) strategy, influenced by climate change has been the formal incorporation of climate change considerations into the process of developing new mining operations. This has been supported by the development of guidelines to support the integration of mitigation and adaptation-related issues into asset design. This will start from supporting mining hosting communities to build resilience. We will support networks that will make information reach the right people. Again Gold Fields' development teams are required to calculate the new asset's carbon footprint over its lifetime and to identify energy efficiency and renewable energy opportunities early in the development process. Furthermore, Gold Fields has set a target that all new mining projects must at least have 20% of their energy sourced from alternative sources of energy". Sustainability cuts across several other documents analysed. The statement in Box 1 portrays clear awareness of the impact of CC and its associated DR in the mining business. It also points out commitments towards CC adaptation and DR reduction. Likewise, in the Anglo-American Thermal Environmental Performance Standard 2014 annual report, the Anglo-American Inyosi Mine Environmental Scoping Report 2014, and the Venetia Diamond 2014 annual report, sustainability was reflected to cover broad subjects which include water management and social sustainability. The particular interest of this article is vested in the social sustainability that covers indicators such as reduced harm to people, awareness generation, improvement in education, improvement in ICT, and social inclusion. Social inclusion on its own is an interesting theory because it borders on gender, care for the elderly, vulnerable children, physically challenged human beings, and the economically vulnerable in society. The reflection here goes back to the research question, exploring the paramount role of ICT intervention in achieving these sustainability indicators. The next section highlights the vulnerability of the mining sector to CC impact and DR. CC documents of the majority of mining companies studied openly acknowledged that CC is a significant threat to mining activities, and adaptation strategies require urgent implementation. CC policies of Anglo-America Thermal (2014) explicitly state the need to increase the financial allocation to CC adaptation and energy use.
Further in the findings, we identified that the mining sector is not only vulnerable to CC physical risks. The mining sector is equally vulnerable to the adverse impact of its waste product, especially the mine sewage. Therefore there is need to raise awareness of such impact as part of value protection in the mining sector. Bunker and Smith (2009) equally acknowledged the challenging situation does not translate to a solution until a practical step is adopted towards creating a solution. Against this backdrop, ICT intervention can play a huge role in constituting early warning systems at the exploration and mining stages where floods and hailstorms can be problematic in the mining value chain. Geographical information systems (GIS) and remote sensing technologies can generate valuable information required during negotiation for operating licences and community liaisons. Indeed, the vulnerability of the mining sector to CC and its associated DR in a natural resource and agricultural dependent country like South Africa goes beyond the boundaries of the stages of the value chain. Neither does it stop at the economic loss statistics shown in the company's annual balance sheet. The multiplier effects of these financial losses, as a result of disasters, are reflected more on the livelihood of the people within the mining communities. Gathering information on fog and mist and putting it into a mine's communications strategy and/or GIS may add value as a CC adaptation strategy. This could be linked to a live driver update through social media platforms like WhatsApp. Driving across the mining hub of South Africa, thus, the Mpumalanga Province roads are dressed with fog signs that the mining sector can utilise to build GIS and communications strategies for CC adaptation. Such a GIS and communications strategy could be enriched by searching historical data on the number of days in which such events occur annually from the official weather services departments. Figures 3a and b show captions in terms of road signs warning of fog and heavy fog in Mpumalanga Province of South Africa. The Lonmin Platinum 2014 annual report contains a section that speaks directly to corporate social responsibilities of the mines to the communities where women, youths, elderly, and children are most affected by CC-induced disasters and risks associated with anthropogenic mining activities. Part of the social responsibilities captured here include CC awareness programmes, gender balanced formal education, bursaries and royalties, improvement in ICT, and local content recognition in employment. DR reduction emerges as a guided principle of CC change policies and actions. Admittedly, not only that the mining companies surveyed want to reduce environmentally associated risks, but also wish to keep every other risk such as financial risks involved in doing business down. The enabling pillars for DR reduction were identified from the documents analysed. These include, but are not limited to increase in budget (finance), knowledge management, capacity building, technology transfer, and infrastructure upgrade, as well as data management. The data management entails archiving and easy retrieval of climate data for CC adaptation purposes. The latter is achieved through installation of appropriate computer hardware and software. It is therefore not surprising when Kramers et al. (2014) suggested that mining industries can leverage on ICT when addressing impacts of CC. BOX 2: Anglo American plc addressing climate risk. 
Some of our operations are located in areas exposed to natural catastrophes such as earthquake/extreme weather conditions. The impact of climate change may intensify the severity of weather events. The nature of our operations exposes us to potential failure of mining pit slopes and tailings dam walls, fire, explosion and breakdown of critical machinery, with long lead times for replacement. Specialist consultants are engaged to analyse such event risks on a rotational basis and provide recommendations for management action in order to prevent or limit the effects of such a loss. Contingency plans are developed to respond to significant events and restore normal levels of business activity. Anglo American purchases insurance to protect itself against the financial consequences of an event, subject to availability and cost. Conclusion Part of circumspective evidence of ICT readiness in an organisation or an economic sector that works on a platform of environment is when the system analysts are preoccupied with available environmental simulation and modelling that enables them to localise CC adaptation strategies and develop rapid response frameworks for risk aversion or reduction. Mirroring our findings, one would comfortably argue that ICT intervention in the mining sector has two areas of projection for risk reduction. Firstly is a just-in-time red alert warning, although efficient telecommunication facilities are also needed to be in place for maximum output. Secondly, in reducing health or economic risk one would suggest that mines will transform their indigenous knowledge into digital electronic platforms such as electronic signposts, advertorials, pictures, and flyers for faster and easier information sharing. This venture will enable proactive discovery of what would have led DR to both worker and the company. We identified the need for system improvement when we matched ICT intervention, CC adaptation strategies, and DR reduction at varying stages of the mining value chain. Starting from exploration to marketing as we went through the documents, the need was evident. In clear terms, adoption of ICT innovation in the mining sector will be appreciated if it will reduce disaster physical risks, reduce the cost of operation, and trigger profitability. Adoption of ICT innovation will also be appreciated in the mining sector if its value addition prospect will facilitate environmental and social sustainability. At the marketing stage of mining value chain value addition is necessary to increase revenue generation of the mines and possibly enhance attraction to investors. At the processing stage, system improvement is required to ensure resource conservation, reduction in pollution, promotion of the use of alternative energy, and innovative technologies. System improvement here sets a good precedence for green transition and not only CC adaptation or DR reduction. This article contributes to the overall CC adaptation policy framework by creating a scenario for understanding of the dimension of ICTs in CC adaptation and DR reduction. It has captured and presented the effectiveness and role of ICTs in reducing DR at different stages of the mining value chain. It has brought to bay, the contributions of ICTs towards achieving economic, environmental, and social viabilities in the mining sector because ICTs can support public awareness of CC through social media platforms, thereby paving the way for resilience building. 
This article has opened up areas of further research on the relevance of ICTs in other mining sectors by proposing ICTs adoption for both human capital development and business process re-engineering, especially if the adoption of such ICTs innovation will serve as both value protection and value addition mechanism.
2018-06-30T00:40:02.210Z
2016-04-29T00:00:00.000
{ "year": 2016, "sha1": "e97ce70b0fcfe82b3aff7096ed3ce63344f36692", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4102/jamba.v8i3.250", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e97ce70b0fcfe82b3aff7096ed3ce63344f36692", "s2fieldsofstudy": [ "Environmental Science", "Business", "Engineering" ], "extfieldsofstudy": [ "Business", "Medicine" ] }
270932839
pes2o/s2orc
v3-fos-license
Detecting Fraudulent Financial Reporting The Construction Sector In Indonesia: Fraud Triangle The category of fraud that has the largest number of losses is financial statement fraud. Apart from being financially detrimental to a company, fraudulent financial reports can also threaten the company's sustainability. Examining the influence of financial targets, financial stability, external pressure, industry nature and rationalization on financial report fraud (F-Score) in construction sector companies in Indonesia is the aim of this research. This research is quantitative, with secondary data types. The number of samples used was 83 from the total construction sector companies. The research results show that financial stability has a positive effect, while financial targets, external pressure, nature of industry and rationalization have a negative and insignificant effect on fraudulent financial reporting. INTRODUCTION Fraudulent financial reporting is a category of fraud with the largest number of losses compared to asset misappropriation or corruption. ACFE said losses arising from fraudulent financial reporting reached $776,000/case (Association of Certified Fraud Examiners, 2024). Financial statement fraud is a deliberate action by someone to manipulate financial report information and data (Setyono et al., 2023). Kassem & Omoteso (2023); Cheliatsidou et al., (2023); and Mandal & Amilan (2023) state that fraudulent financial reporting can have a negative impact on investor confidence, the quality and reliability of financial reports and can damage the global economic system. Globally, fraudulent financial reporting cases often occur in construction sector companies, with the number of cases reaching 73 cases (ACFE, 2024). According to Li et al., (2023) this case does not only occur in Indonesia, but other countries also experience the same thing. The case in the construction sector in Indonesia occurred in PT. Waskita Karya and PT. Wijaya Karya in 2023, which are suspected of having manipulated financial reports by not recording bills from third parties since 2016, which resulted in the companies' debt burden appearing to decrease, but what actually happened was that the companies were unstable. Apart from that, the construction sector has an important role in the success of national strategic projects such as IKN, transportation infrastructure, development of industrial areas and the latest issue related to the Tapera program. This program requires a large budget and has the potential for fraud. Cressey (1953) explained that there are three factors that cause fraud, namely pressure, opportunity and rationalization (fraud triangle theory). These three factors are the basis for the current fraud theories. This shows that the factors that cause fraud in the fraud triangle are still relevant to the current development of fraud motives. Yusrianti et al., (2020) said that the fraud triangle theory has been used as a basis by many previous researchers and used as a reference in audit standard statements, including No. 99 issued by the American Institute of Certified Public Accountants (AICPA). In Indonesia, the fraud triangle theory has been internalized in Public Accountant Professional Standards (SPAP) Number 70 to assess fraud risk factors in the audit process. Rahman & Jie (2024) said that these three factors can be used to predict potential fraud in a company as stated in the International Auditing and Assurance Standards Board in 2009.
Opportunity factors arise due to the implementation of an ineffective internal control system (Alfarago & Mabrur, 2022).The opportunity factor in this research is represented by nature of industry.Tarjo et al., (2021) said that nature of industry is the ideal condition in a company.According to Khamainy et al., (2022); Yadiati et al., (2023); Yusrianti et al., (2020) the nature of the industry has a positive influence.However Aulia Haqq & Budiwitjaksono (2020); Wilantari & Ariyanto (2023) argue that nature of industry has a negative influence.The rationalization factor can be interpreted as a justification for the fraudulent actions committed (Alfarago & Mabrur, 2022).According to Achmad et al., (2022); Yarana (2023) rationalization has a positive influence.However Putri & Januarti (2023); Sholikatun & Makaryanawati (2023) argue that rationalization has a negative influence.Empirically, research on detecting financial statement fraud has been carried out before, but the objects used are not the same. Previous researchers tended to use manufacturing, banking, mining, state-owned companies, health and LQ 45 companies (Achmad et al., 2023;Sari et al., 2024;Setyono et al., 2023;Imtikhani & Sukirman, 2021;Sholikatun & Makaryanawati, 2023;Yanti et al., 2023;Medlar & Umar, 2023;Sudrajat et al., 2023).As for the construction sector, there is no such thing yet, Even though currently construction sector companies in Indonesia have an important role in supporting the government's development programs.This research uses the construction sector as the research object and sample.The aim of this research is to examine the influence of pressure (financial target factors, financial stability, external pressure), opportunity (industry nature), and rationalization on fraudulent financial reporting (F-Score) in construction sector companies in Indonesia.It is hoped that this research can contribute to the development of the fraud triangle theory in detecting fraudulent financial reporting. LITERATURE REVIEW Fraud Triangle Theory White collar crime is the main basis in fraud theory (Tuanakotta, 2010).Cressey (1953) believes that fraud can be caused by three things.Advantage of this theory is because this theory is used as a reference in global audit standards (Yusrianti et al., 2020).Rahman & Jie (2024) said that the three factors in the fraud triangle have been used to predict the potential fraud in China.The large risks arising from fraud require management to mitigate and investigate the causal factors.Detecting and identifying causal factors is important to minimize risks for the company.The factors in the fraud triangle are interrelated.This can be described when someone is under pressure because they are in debt to a third party and then they find out that internal control has not been effective, after which they rationalize or justify that what they did was not a violation, so fraud will occur (Rahman & Jie, 2024).The implementation of the Fraud Triangle can be understood and seen from the variables in this research. Fraudulent Financial Reporting Fraudulent financial reporting is a deliberate action by someone to manipulate financial report information and data to gain personal gain and harm others (Putri & Januarti, 2023).According to SAS No. 99 financial reporting fraud, namely planned errors to deceive users of financial reports (Tarjo et al., 2021).The impact of losses arising from fraudulent financial reporting is the largest from 2018 to 2024 (ACFE, 2018;2020;2022;2024). 
Company management carries out financial manipulation due, among other things, to an assessment of the performance of the entity/institution/company based on financial reports.So this encourages company management to do everything possible, including carrying out fraudulent financial reporting so that the financial reports presented can attract the attention of investors (Nurhakim & Harto, 2023).There are various motives for fraudulent financial reporting, including manipulating data, falsifying evidence or changing information in company annual reports (Achmad et al., 2022).Fraudulent financial reporting has become a real threat to business people, investors and other users of financial reports, because the impact can threaten the reputation and sustainability of the company (Alfarago & Mabrur, 2022).Apart from that, according to Zenzerović & Šajrih (2023) fraudulent financial reporting in companies also affects public trust in investment interest in companies.Naldo & Widuri (2023) say that in organizational structures, positions that have great authority and responsibility, such as company executives, shareholders, company management, have the potential to violate existing regulations in the company.This is reinforced by the results of the ACFE survey which states that the parties who often violate company policies and laws are company managers (ACFE, 2022). Hypothesis Development Financial targets against fraudulent financial reporting Financial target can be interpreted as profit determined by the company.Achieving targets in a company is the responsibility of company management.Financial targets will become pressure when the principal is unrealistic in determining targets so that so that management has the intention to commit fraud in order to achieve the targets expected by the principal.According to Achmad et al., (2023); Naldo & Widuri (2023) financial targets are measured using return on assets (ROA).The use of ROA is intended to find out how effective management is in gaining profits from the company's resources or wealth.A high ROA value can lead to fraudulent financial reporting, because company management will try in every way to exceed the target in order to obtain incentives from this achievement.This is consistent and in line with research conducted by Demetriades & Owusu-Agyei (2022); Naldo & Widuri (2023); Sudrajat et al., (2023); Tarjo et al., (2021); Yarana (2023) which states that the financial target has a positive hypothetical direction.H1= Financial targets have a positive effect on the possibility of fraudulent financial reporting Financial stability against fraudulent financial reporting Financial stability is a description of a company's stable financial condition (Kusumawati et al, 2021).This statement is in accordance with SAS No. 
99.For investors, financial stability in a company is their basis for deciding to invest.This is done because companies that have stable financial conditions can provide large profits for investors.However, if the economic conditions in a country are unstable, this can affect the company's financial stability.Therefore, this can put pressure and stimulate management to take fraudulent or manipulative steps in order to provide financial stability in accordance with investor expectations.According to Achmad et al., (2023); Khamainy et al., (2022) financial stability is measured using total assets if there is a significant increase in the number of company assets, this indicates the potential for fraudulent financial reporting.According to Aulia Haqq & Budiwitjaksono (2020); Medlar & Umar (2023); Sari et al., (2024); Wijaya & Witjaksono (2023); Yadiati et al., (2023) financial stability has a positive effect on fraudulent financial reporting.H2= Financial stability has a positive effect on the possibility of fraudulent financial reporting External pressure against fraudulent financial reporting External pressure occurs when a company cannot meet the expectations of external parties (Achmad et al., 2023;Sholikatun & Makaryanawati, 2023).Pressure from external parties can be illustrated when a company needs capital to expand the market, increase the amount of production and purchase equipment and other things.This of course requires large funds/budget.If finances are insufficient, the company needs funds from creditors to meet the budget requirements.Through these funds, it is hoped that large profits can be generated for the company.However, when reality does not match what was planned, this creates pressure for the company, whether it is pressure to fulfill debt obligations as well as pressure from high expectations from third parties.According to Sholikatun & Makaryanawati (2023) external pressure is measured using leverage with the assumption that if the company's debt ratio is high then the potential for the company not being able to fulfill its obligations is also high.So it can also be interpreted that the possibility of the company committing fraud to fulfill the obligations and expectations of external parties is also high.According to Achmad et al., (2022); Agusputri & Sofie, (2019); Kusumawati et al., (2021); Tarjo et al., (2021) external pressure positive effect on fraudulent financial reporting.H3= External pressure has a positive effect on the possibility of fraudulent financial reporting. 
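The three pressure-side proxies described above, ROA for financial targets, growth in total assets for financial stability, and the debt ratio for external pressure, reduce to simple ratios of annual-report line items. The sketch below is a minimal illustration; the column names, the year-over-year form of the asset-change ratio, and the sample figures are assumptions rather than the authors' exact formulas.

```python
# Minimal sketch of the pressure proxies described above, computed per
# company-year from annual report line items. Column names and the exact
# ratio forms are assumptions consistent with the text's descriptions.
import pandas as pd

def pressure_proxies(df: pd.DataFrame) -> pd.DataFrame:
    """df needs columns: company, year, net_income, total_assets, total_liabilities."""
    df = df.sort_values(["company", "year"]).copy()
    # Financial target: return on assets (ROA).
    df["roa"] = df["net_income"] / df["total_assets"]
    # Financial stability: year-over-year change in total assets (assumed ratio form).
    prev_assets = df.groupby("company")["total_assets"].shift(1)
    df["achange"] = (df["total_assets"] - prev_assets) / prev_assets
    # External pressure: leverage (debt ratio).
    df["leverage"] = df["total_liabilities"] / df["total_assets"]
    return df

# Hypothetical two-year example for one construction company.
sample = pd.DataFrame({
    "company": ["PT X", "PT X"],
    "year": [2022, 2023],
    "net_income": [120.0, 90.0],
    "total_assets": [1500.0, 1800.0],
    "total_liabilities": [900.0, 1250.0],
})
print(pressure_proxies(sample)[["year", "roa", "achange", "leverage"]])
```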
Nature of industry and fraudulent financial reporting The nature of industry is used in this research as a proxy for the opportunity factor. Sholikatun & Makaryanawati (2023) argue that ideal conditions are a reflection of the nature of the industry. Such ideal conditions exist, for example, when company management can reduce the amount of trade receivables so that cash flow in the company increases (Tarjo et al., 2021). In addition, ideal company conditions are a factor that investors consider when investing, so the company will take whatever steps are needed to create the conditions desired by investors. In the fraud triangle theory, opportunities arise because companies do not implement effective internal controls. In this research, the opportunity arises because company management has discretion in estimating the amount of uncollectible receivables. Khamainy et al. (2022) use accounts receivable to measure the nature of industry. According to Khamainy et al. (2022); Yadiati et al. (2023); and Yusrianti et al. (2020), the nature of industry shows a positive influence. H4 = The nature of industry has a positive effect on the possibility of fraudulent financial reporting. Rationalization and fraudulent financial reporting Rationalization is an attempt to justify a violation that has been committed (Kalovya, 2023). In the fraud triangle theory, rationalization relates to the fraud perpetrator's efforts to convince himself that what he is doing is normal and not an act against the law. Rationalization of fraudulent financial reporting occurs when company management adopts policies related to the application of the accrual principle, through which management can manipulate company profits (Sholikatun & Makaryanawati, 2023). Therefore, rationalization is calculated using the ratio of total accruals to total assets (Achmad, Hapsari, et al., 2022; Ghaisani et al., 2022). This ratio is based on one of the assessment indicators contained in the Beneish M-score. In the opinion of Achmad et al. (2022) and Yarana (2023), rationalization has a positive effect on fraudulent financial reporting. Based on this, the hypothesis developed in this research is: H5 = Rationalization has a positive effect on the possibility of fraudulent financial statements. RESEARCH METHODS This research is quantitative research using secondary data in the form of annual reports from construction sector companies in Indonesia for 2020-2023. Data were obtained from the Indonesian stock exchange website, the companies' official websites, or Bloomberg. The research uses a saturated sample, or census. The analysis method is logistic regression, because the dependent variable is a dummy variable: if a company has an F-score value > 1, the company is indicated as having the potential to carry out fraudulent financial reporting, whereas if the value is < 1, the company is not indicated to have carried out fraudulent financial reporting. The regression equation used is as follows: Results Sample Determination Based on the census sampling, the number of samples used was 83. Eleven samples could not be used because the financial reports were not available, could not be found, or had not yet been published by the company. Sample details for each year of observation can be seen in Table 2.
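As a concrete illustration of the estimation approach just described, the sketch below fits a logistic regression in which the dependent variable is a dummy equal to 1 when the F-score exceeds 1 and the regressors are the five fraud proxies discussed above. The variable names and the statsmodels implementation are illustrative assumptions, not the authors' exact specification.

import pandas as pd
import statsmodels.api as sm

def fit_fraud_logit(df: pd.DataFrame):
    """Logistic regression of an F-score-based fraud dummy on the five proxies.

    Assumes df contains the columns f_score, roa, achange, leverage,
    receivable and tata (illustrative names).
    """
    # Dependent dummy: 1 if the F-score indicates potential fraud (> 1), else 0.
    y = (df["f_score"] > 1).astype(int)

    # Independent variables: pressure, opportunity and rationalization proxies.
    X = sm.add_constant(df[["roa", "achange", "leverage", "receivable", "tata"]])

    return sm.Logit(y, X).fit(disp=False)

# Usage with a hypothetical data frame `panel`:
# result = fit_fraud_logit(panel)
# print(result.summary())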
The descriptive statistics in Table 3 show that the average values of the independent variables are low; the smallest minimum value is -4.58, the highest maximum value is 14.25, and the largest standard deviation is 1.75. Based on the classification matrix of the regression model, 20 of the 83 samples were not indicated by the F-score to have committed financial statement fraud; of these, 13 (65%) were correctly predicted as having no indication of fraud, while 7 were predicted as indicated fraud. Meanwhile, 63 samples were indicated by the F-score to have committed fraud; of these, 60 (95.2%) were predicted as indicated fraud, while the other 3 were predicted as not indicated. Results of the log likelihood, Hosmer-Lemeshow test, R Square and Omnibus Tests of Model Coefficients The overall model fit test showed that the -2 log likelihood at block 0 was 91.855 and at block 1 was 64.335; this decrease indicates that the model is fit. The Hosmer-Lemeshow test obtained a significance value of 0.595, so the model can be said to fit, that is, there is no significant difference between the model and the data used in this study. The Nagelkerke R Square value was 0.551, which means that the independent variables are able to explain 55.1% of the variation in the dependent variable, while the rest is explained by other variables outside this research. The simultaneous test has a Chi-square value of 38.179 with a significance of 0.000, meaning that the independent variables jointly affect fraudulent financial reporting. Financial targets and fraudulent financial reporting The financial target factor has a significance value of 0.80 (> 0.05) with a coefficient of -0.17. These results indicate that financial targets, proxied by profitability, have a negative direction, so the hypothesis is rejected. ROA in this study is low, indicating that the targets set by the companies are low; likewise, the F-score results show that the potential for fraud is low. These results support the fraud triangle theory: when management is not under pressure, the likelihood of fraud is small, and conversely, when management feels pressured, fraud is much more likely to occur. Another possible explanation is that the companies implement effective and efficient policies that reduce the costs incurred; in addition, competent human resources are a determining factor in achieving financial targets, so the targets do not become a pressure for the company. These results support the research of Demetriades & Owusu-Agyei (2022); Naldo & Widuri (2023); Tarjo et al. (2021); and Yarana (2023). However, this result contradicts the findings of Khamainy et al. (2022); Setyono et al. (2023); and Sholikun & Makaryanawati (2023), which state that financial targets have a negative effect.
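As a worked note on the fit statistics reported above, the Nagelkerke pseudo-R² can be recomputed from the null and fitted log likelihoods of a logistic regression. The sketch below is generic and assumes a fitted statsmodels result such as the one from the earlier sketch; it is not output taken from this study.

import numpy as np

def nagelkerke_r2(llnull: float, llf: float, nobs: int) -> float:
    """Nagelkerke pseudo-R^2 from the null and fitted log likelihoods."""
    cox_snell = 1.0 - np.exp(2.0 * (llnull - llf) / nobs)
    return cox_snell / (1.0 - np.exp(2.0 * llnull / nobs))

# With a fitted statsmodels Logit result `res` (see the earlier sketch):
# print(nagelkerke_r2(res.llnull, res.llf, res.nobs))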
Financial stability and fraudulent financial reporting The financial stability factor obtained a significance value of 0.03 (< 0.05) with a coefficient of 2.42, so the proposed hypothesis is accepted. The research results also show that total asset values are low, so the potential for fraud is low. These findings correspond to the fraud triangle: financial stability becomes a pressure when management wants to always display stable financial conditions even though economic conditions are not good, and management therefore manipulates the financial reports presented. Aulia Haqq & Budiwitjaksono (2020); Medlar & Umar (2023); Sari et al. (2024); Wijaya & Witjaksono (2023); and Yusrianti et al. (2020) likewise report that financial stability has a positive and significant effect. However, according to Achmad et al. (2023); Khamainy et al. (2022); Naldo & Widuri (2023); Putri & Januarti (2023); and Setyono et al. (2023), financial stability has a negative influence. External pressure and fraudulent financial reporting The external pressure factor has a significance value of 0.000 (< 0.05), while the coefficient is -11.22. These results indicate that external pressure has a negative effect on fraudulent financial reporting, so the hypothesis is rejected. The debt ratio in this study is low, so the potential for fraudulent financial reporting is also low; in this respect the results contradict the fraud triangle theory. Because the debt ratio in the sample companies is low, creditors do not hesitate to extend debt to them, since the risk of default is small. In addition, the companies in this research tend to optimize financing from internal sources to meet their needs, so the debt ratio remains stable. The low debt held by a company in turn enhances its reputation and the trust of capital owners. These findings are in line with research conducted by Imtikhani & Sukirman (2021); Khamainy et al. (2022); and Naldo & Widuri (2023), which states that external pressure has a negative effect. However, Kusumawati et al. (2021); Tarjo et al. (2021); and Yadiati et al. (2023) found that external pressure had a positive effect. Nature of industry and fraudulent financial reporting The regression test for the nature of industry obtained a significance value of 0.70 with a coefficient of 0.14. These results show that the nature of industry does not have a significant effect on fraudulent financial reporting, so the proposed hypothesis is rejected. An increase in the amount of trade receivables does not, by itself, indicate that the company has committed fraud in preparing its financial reports; the integrity of company management is the foundation for running the company's business. This finding is not in line with the fraud triangle. Furthermore, the companies' stable condition indicates good corporate governance and reliable risk management, so there is very little chance of company management committing fraudulent financial reporting. The results of this study are consistent with Setyono et al. (2023) and Agusputri & Sofie (2019), which state that the nature of industry has a negative effect, and they contradict Khamainy et al. (2022); Tarjo et al. (2021); Yadiati et al. (2023); and Yusrianti et al. (2020), who report that the nature of industry has a positive influence.
Rationalization and fraudulent financial reporting The regression test on the rationalization factor shows a significance value of 0.15 (> 0.05) with a coefficient of -2.00. Based on these results, rationalization has a negative direction and does not have a significant influence on the dependent variable, so the proposed hypothesis is rejected. This can be interpreted as meaning that management's authority to set policies for implementing the accrual principle does not, by itself, encourage company management to cheat in preparing financial reports. This is because company management exhibits good professionalism, prioritizing good output by complying with the company's existing policies and regulations. The results of this study are not consistent with the fraud triangle theory. They do, however, support Sholikatun & Makaryanawati (2023) and Situngkir & Triyanto (2020), who state that rationalization does not have a significant effect, while the findings of Achmad et al. (2022) and Ghaisani et al. (2022) are not in line with, or contradict, the findings of this research. CONCLUSION Based on the hypothesis testing above, it can be concluded that the factor with a positive and significant influence on fraudulent financial reporting is financial stability; management wants to always display stable financial conditions even though economic conditions are not good, so management manipulates the financial reports. Financial targets do not have a significant effect because the companies implement effective and efficient policies that reduce the costs incurred; in addition, the companies have competent human resources, making it easier to achieve targets. External pressure does not indicate fraudulent financial reporting when the company's debt ratio is low, so the company does not receive pressure from external parties, in other words it does not experience difficulty paying its debts. The nature of industry does not have a significant influence because company management has high integrity and loyalty to the company, so management will not take actions that could harm it. Likewise, rationalization has a negative and insignificant effect on fraudulent financial reporting because the companies have implemented professionalism in running their business, so the potential for fraud to occur is low. SUGGESTION 1. Practical suggestions: To minimize the occurrence of fraudulent financial reports in construction sector companies, it is necessary to watch for significant increases in total assets. Even though ROA, leverage, and receivables were not found to affect fraudulent financial statements, companies should still remain aware of them. 2. Theoretical suggestions: The limitation of this research is its small sample. Further research should increase the sample size, for example by including the construction sector in other countries, so that the sample used is more comprehensive.
2024-07-04T15:28:25.153Z
2024-06-30T00:00:00.000
{ "year": 2024, "sha1": "2421ee86520e497ab2dab30172522eab9110e288", "oa_license": null, "oa_url": "https://doi.org/10.32534/jpk.v11i2.5894", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "4f44ecca7871ebb23681c561b2b2a9202db560bb", "s2fieldsofstudy": [ "Business", "Economics" ], "extfieldsofstudy": [] }
245496401
pes2o/s2orc
v3-fos-license
Comparison of the Fracture Resistance of Hyflex EDM and WaveOne Gold Rotary System Instruments in Abrupt Apical Curvature Objective: This study aimed to compare the cyclic fatigue resistance of Hyflex EDM and WaveOne Gold rotary instruments in simulated abrupt apical curvature at body temperature. Methods and Materials: A total of 10 Hyflex EDM OneFile (25/04-08) and 10 WaveOne Gold Primary (25/07) instruments free of visible defects were selected after inspection. Instruments were tested in an artificial canal having an angle of curvature of 90° and a radius of curvature of 2 mm. The instruments were rotated at speed and torque values recommended by manufacturers at 37°C until fracture occurred. Time required for fracture (TF) was recorded with a 1/100 s digital chronometer. The TF and fractured fragment length data were statistically analyzed using t-test with significance threshold of 5%. Results: The mean TF value was 98.40 ± 19.93 seconds for WaveOne Gold group and 183.50 ± 35.33 seconds for Hyflex EDM group. Hyflex EDM OneFile instruments fractured in a longer time period than WaveOne Gold instruments and the difference between them was statistically significant (p<0.05). Conclusion: Hyflex EDM OneFile instruments, which are produced with electric discharge machining, were found to be more resistant to cyclic fatigue than WaveOne Gold instruments. INTRODUCTION The introduction of nickel-titanium (NiTi) alloys enabled more favourable and safer root canal preparation due to the superior flexibility and mechanical strength of NiTi compared to stainless-steel hand instruments. 1 However, separation of NiTi instruments without a clinically detectable warning is an ongoing problem for endodontic practice. 2 Disposable use of the NiTi instruments is suggested for prevention of this problem and also cross contamination, however preparation of a molar tooth means that instrument would be used to prepare 3 to 4 root canals. Therefore, evaluation of mechanical properties of motor driven NiTi instruments is important. Separation of NiTi instruments occurs via torsional failure, cyclic fatigue and their combination during clinical use. Torsional failure results when the instrument tip is screwed into the canal and its shank continuous to rotate producing a torque value exceeding the plastic limit of the material. 3 Failure due to cyclic fatigue occurs by continuous stresses of tension and compression in the area of maximum root canal curvature. 4 Cyclic fatigue resistance of a NiTi instrument depends on numerous factors related to design, kinematics and alloy of the instrument including taper, manufacturing process, cross-sectional shape and type of rotation.
5,6 Recent studies emphasized that cyclic fatigue testing of an instrument should be conducted under body temperature to simulate clinical use as possible since the instruments show different behaviours under different conditions depending on their phase transformation temperatures. 7,8 WaveOne Gold (Dentsply Sirona, Ballaigues, Switzerland), which was introduced in 2015, is an upgrade of WaveOne system by maintaining its reciprocating movement while changing the size and cross-sectional shape of the instruments. Moreover, the NiTi alloy underwent thermal treatment after machining, which also contributed to instrument's higher mechanical properties compared to its predecessor. 9 WaveOne Gold has a primary instrument with 0.25 mm apical tip diameter and .07 taper. 10 Hyflex EDM (Coltene/Whaledent, Altstätten, Switzerland) instruments are produced from controlled memory (CM) wires manufactured by electrical discharge machining (EDM) to improve mechanical properties. 11 The instrument also shows variable crosssectional shapes changing from triangular to rectangular from shaft to tip to optimize flexibility, cyclic and torsional resistance. 9 Hyflex EDM OneFile has a tip diameter of 0.25 mm and variable taper from .08 at apical 4 mm decreasing to .04 along the remaining part. 12 Root canal anatomy presents challenges for endodontic treatment not only by presenting complex configurations but also showing abrupt curvatures in both mesiodistal and buccolingual directions. 13 The frequency of abrupt curvatures could be underdiagnosed due to their buccolingual position, angulation of X-ray and superimposition of different canals in preoperative periapical radiographs. Preparation of root canals showing abrupt curvatures might exert great stresses on root canal instruments leading to distortion and failure. 14 The present study aimed to compare cyclic fatigue resistances of WaveOne Gold Primary and Hyflex EDM OneFile in a simulated abrupt apical curvature at body temperature. The null hypothesis was that there would be no difference between the instruments regarding their cyclic fatigue resistances. MATERIALS AND METHODS A priori sample size calculation was performed by the selection of t-test family (difference between two independent means) using the effect size calculated from a previous study (3.68) by G*Power 3.1 (Heinrich Heine University, Dusseldorf, Germany) with 0.05 type I error and 0.99 power and resulted that the sample size should be a minimum of 8 instruments. 14 Therefore, 10 WaveOne Gold Primary (25/07) and 10 Hyflex EDM OneFile (25/0408) free of visible defects were selected after inspection under magnification using a stereomicroscope (Nikon SMZ 745T, Tokyo, Japan). A stainless-steel artificial canal with an inner diameter of 1.5 mm showing an angle of curvature 90° and radius of curvature of 2 mm was immersed in a water bath, which temperature was maintained at 37°C by a submersible heater and thermostats (Figure 1). WaveOne Gold instruments were operated with VDW Silver endodontic motor (VDW, Munich) using the pre-set "WaveOne ALL" mode, whereas Hyflex EDM OneFile instruments were operated with the device in continuous rotation at 2.5 Ncm torque and 500 rpm speed parameters. All instruments were used until fracture, which was detected both audibly and visually. Time required for fracture (TF) was measured by 1/100 seconds chronometer and fractured fragment length was measured by a digital calliper with 10 -2 accuracy. 
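The a priori sample-size calculation described at the start of this section can be reproduced, at least approximately, without G*Power; the sketch below uses the power solver in statsmodels for a two-independent-samples t-test. The numeric inputs (effect size 3.68, alpha 0.05, power 0.99) come from the text above, but treating them exactly as G*Power 3.1 does is an assumption, so the result should be checked against the original calculation rather than taken as definitive.

import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=3.68,        # Cohen's d taken from the earlier study
    alpha=0.05,              # type I error
    power=0.99,              # desired statistical power
    ratio=1.0,               # equal group sizes
    alternative="two-sided",
)
print("instruments required per group:", math.ceil(n_per_group))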
Number of cycles to failure (NCF) values were calculated by multiplying the TF value with recommended speed of each instrument (500 rpm for Hyflex EDM OneFile and 350 rpm for WaveOne Gold) and dividing the result by 60. Fractured surfaces of two instruments were visualized with scanning electron microscope (SEM; JEOL JSM-7001F; JEOL, Tokyo, Japan), and photomicrographs of the fractured surfaces were obtained at 200-220x and 3000x magnifications (Figure 2). Since Shapiro-Wilk test revealed that the data showed normal distribution (P > 0.05), Student t-test was used to analyse TF and fragment length data with 5% significance threshold using SPSS (IBM, SPSS Inc., Chicago, IL, USA). TF data was also analysed using Weibull reliability estimation for the calculation of 99% survival probability as described in a previous paper. 14 Table 1 presents descriptive statistics of TF, NCF and fragment length with Weibull analysis data. Hyflex EDM OneFile instruments showed significantly greater cyclic fatigue resistance than WaveOne Gold Primary instruments (p<0.05). Weibull analysis indicated that Hyflex EDM OneFile instruments also exhibited higher reliability and Weibull modulus than WaveOne Gold Primary. DISCUSSION Single file systems have been advocated for preparation of suitable root canals with minimum number of instruments however in an average molar this means preparation of 4 root canals, which might show different curvatures and configurations. An abrupt apical curvature might exert greater stresses on instruments designed for single file systems since a single instrument encounters stresses while enlarging apical, middle and coronal third of root canals. The present study aimed to compare cyclic fatigue resistances of Hyflex EDM OneFile and WaveOne Gold Primary instruments in static cyclic fatigue model using artificial block with an angle of curvature 90° and radius of curvature of 2 mm. The null hypothesis was rejected due to the higher cyclic fatigue resistance of Hyflex EDM OneFile instruments. Alloy properties have been reported to influence the cyclic fatigue resistance of NiTi instruments primarily. 14 Hyflex EDM instruments are manufactured from CM wire that underwent a unique treatment termed electric discharge machining that improves the cutting ability and fatigue resistance, whereas WaveOne Gold instruments are manufactured from Gold wire that received a thermomechanical treatment after machining. 9 Postmachining thermomechanical treatment of both CM and Gold wires resulted in a mixed phase which consists mainly of martensite providing increased flexibility and cyclic fatigue resistance. 15,16 Hyflex EDM instruments have been compared with Gold wire and reported to show superior cyclic fatigue resistance of Hyflex EDM instruments. 17 Another study, which compared the effect of different working lengths on the cyclic fatigue resistance of Hyflex EDM and WaveOne Gold instrument in an abrupt curvature model reported superiority of Hyflex EDM irrespective of working length. 18 In the present study Hyflex EDM OneFile instruments showed significantly greater fracture resistance and higher reliability than WaveOne Gold when tested in an abrupt curvature. The findings of the present study are in accordance with the findings of the previous ones. 
17,18 On the other hand, another previous study reported similarity between the cyclic fatigue resistance values of Hyflex EDM and WaveOne Gold when tested at 90° curvature and also superiority of Hyflex EDM over WaveOne Gold when the curvature angle was 45°. 19 The differences of the findings might be attributed to the differences in the evaluation methods such as different radius of curvature values and ambient temperature. The designs of Hyflex EDM OneFile and WaveOne Gold Primary instruments are different apart from their tip diameters. Hyflex EDM OneFile instruments show variable taper between .04 and .08 and different cross sectional shape along their cutting surface changing from triangular near to shaft to quadrangular near the tip, whereas WaveOne Gold instruments have a constant .07 taper and parallelogram cross sectional shape along the cutting surface. 10,20 Instrument taper, cross sectional shape and area have been regarded to effect cyclic fatigue resistance of NiTi instruments. 21,22 However it would be difficult to attribute superior cyclic fatigue resistance of Hyflex EDM OneFile to a single variable since the tested instruments showed differences regarding their design and metallurgy. This inability to eliminate the effect of different variables constitutes a major drawback of most laboratory studies. 4 Movement kinematic has also been considered to influence cyclic fatigue resistance of NiTi instruments. 23,24 Reciprocating movement has been developed to prevent screw in effect to improve torsional resistance of instruments, and then it was reported to increase cyclic fatigue resistance also. 23,25 However, in the present study Hyflex EDM OneFile instrument operated with continuous rotation movement showed significantly greater cyclic fatigue resistance than WaveOne Gold instruments operated with reciprocating motion. Time required for fracture (TF) values are reported to be more clinically relevant and informative for clinicians to express the performance of instruments rather than number of cycles to failure (NCF) values when a reciprocating instrument is compared with an instrument operates with continuous rotation. 26,27 NCF values have been associated with the mechanical properties of the instruments, therefore in the present study NCF values were also calculated. 28 The results of the NCF analysis indicated similarity with TF findings. The Weibull analysis presents the capacity of tested instruments in extreme-value distribution and the lower values that might be clinically important for operators. 29 Hyflex EDM OneFile instruments showed a higher Weibull modulus, which is an indicative of a higher reliability of the material. 30 The lengths of fractured fragments were similar between groups, which indicated the correct position of the instruments within artificial blocks and formation of similar stresses during testing. 31 CONCLUSIONS The comparison of two post-machining thermally treated NiTi instruments exhibited higher cyclic fatigue resistance of Hyflex EDM OneFile, which are produced with EDM than WaveOne Gold Primary instruments.
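For readers who wish to retrace the style of calculation used in this comparison, the sketch below illustrates, under stated assumptions, the conversion from time to fracture to number of cycles to failure, a two-sample t-test on TF, and a Weibull-based estimate of the time at which 99% of instruments are expected to survive. It uses a two-parameter Weibull fit (location fixed at zero) in scipy, which may differ from the exact reliability-estimation procedure cited above, and the TF arrays are hypothetical values for illustration only.

import numpy as np
from scipy import stats

def cycles_to_failure(tf_seconds: np.ndarray, rpm: float) -> np.ndarray:
    """NCF = time to fracture (s) x rotational speed (rpm) / 60."""
    return tf_seconds * rpm / 60.0

def weibull_99_survival_time(tf_seconds: np.ndarray) -> float:
    """Time at which an estimated 99% of instruments are still intact."""
    shape, _, scale = stats.weibull_min.fit(tf_seconds, floc=0)
    # 99% survival corresponds to the 1st percentile of the failure-time distribution.
    return float(stats.weibull_min.ppf(0.01, shape, loc=0, scale=scale))

# Hypothetical TF data (seconds), for illustration only.
tf_hyflex = np.array([150.0, 170.0, 185.0, 200.0, 230.0])
tf_waveone = np.array([80.0, 90.0, 100.0, 110.0, 120.0])

print(cycles_to_failure(tf_hyflex, rpm=500).round(0))   # Hyflex EDM at 500 rpm
print(cycles_to_failure(tf_waveone, rpm=350).round(0))  # WaveOne Gold at 350 rpm
print(stats.ttest_ind(tf_hyflex, tf_waveone))           # two-sample t-test on TF
print(weibull_99_survival_time(tf_hyflex))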
2021-12-27T16:02:42.440Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "5bf08c6154a9afd39e5135d42315cc36ff8d29be", "oa_license": null, "oa_url": "https://doi.org/10.5505/eudfd.2021.24471", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "b00ff47737d6ea2e88ee9fcefa664b1cb8d05345", "s2fieldsofstudy": [ "Materials Science", "Medicine" ], "extfieldsofstudy": [] }
210972915
pes2o/s2orc
v3-fos-license
Doctors’ contributions to primary care in outpatient clinics in depopulated areas within Hokkaido Objective: To examine how doctors who work in outpatient clinics in depopulated areas in Hokkaido contribute to the provision of primary care to residents. Methods: The study adopted a qualitative research design. Six doctors, all of whom were men and in charge of medical clinics located in depopulated areas in Hokkaido, participated in a semi-structured interview. The interviews were recorded using a digital voice recorder. The data were transcribed and classified into codes, subcategories, and categories, and analyzed. Results: A qualitative analysis yielded the following five superordinate categories: (1) clinical praxis in accordance with residents’ lifestyles and life stages; (2) innovative care provision based on residents’ conditions; (3) provision of routine care in partnership with other healthcare providers and associated stakeholders; (4) beliefs and feelings of pride associated with working as doctors in clinics in depopulated areas; and (5) difficulties in guaranteeing reliable and continuous operation of clinics in depopulated areas. Conclusion: This study successfully identified the specific contributions of doctors working in outpatient clinics in depopulated areas to primary care, as well as the related challenges that they face. Moving forward, researchers should continue to examine how the issues faced by clinics in depopulated areas can be addressed using regional medical care plans. Introduction Depopulated areas have been defined as regions in which a rapid decline in the number of residents has altered the social foundation of the community, thereby impeding local efforts to maintain preexistent living standards and productivity levels 1) . Cities, towns, and villages that the Japanese government designated as depopulated areas (kaso chiiki) occupy approximately half of the country's land. Official figures for the Hokkaido Prefecture, which is located in Northern Japan, suggest that 149 of the prefecture's 179 municipalities (i.e., >80%) meet the criteria for this status 2) . In addition to the dwindling numbers of residents and in-dustrial decline, depopulated areas often face a shortage of medical resources. As a result, the government has introduced so-called remote clinics (hekichi shinryōjo) in certain areas based on their medical administrative status, population, and accessibility, with the objective of guaranteeing the provision of high-quality healthcare services to community residents 3) . Hokkaido has the largest number of remote clinics in the country 4) (i.e., 92) 3) . With the passage of the Act for Securing Comprehensive Medical and Long-term Care in the Community in 2014, all prefectures in Japan are now mandated to prepare so-called regional medical plans (chiiki iryō kōsō) 5) . This act divides the country into 341 planning districts based on secondary medical districts and provides guidelines that can be used to estimate the number of beds that are needed to provide medical services across the four stages of care: highly acute, acute, recovery, and chronic stages 6) . However, amidst this progress in regional medical plans, ensuring that residents of rugged and mountainous regions that have undergone significant depopulation and aging have access to reliable transportation to visit healthcare centers remains a critical challenge 7) . 
Saijo and colleagues showed that the risk of experiencing a fatal stroke increases as the distance from one's nearest primary care facility increases 8) . Regional medical plans are expected to guarantee the provision of healthcare services to residents of mountain villages, outlying islands, and other depopulated areas where private practices may struggle to maintain profitability 6) . By implementing measures that facilitate better communication of health information to residents, the Ministry of Health, Labor and Welfare aims to provide the same quality of healthcare that is provided in urban settings 9,10) . However, Hokkaido's healthcare system and the remote clinics that it subsumes have been found to have many problems such as the shortage and uneven deployment of doctors 11) and work overload 12) . Serious shortages of doctors, population decline, a low birth rate, aging, and other defining characteristics of depopulated areas underscore many more issues that are related to the practice of primary care in such areas 13) . Guaranteeing rural healthcare has also been a point of discussion among members of the Investigative Committee on Physician Work Reform 14) . Nonetheless, the day-to-day realities of the practice of primary care, as perceived by individual doctors who work in clinics in depopulated areas, and the issues that they consider to be important remain unexplored. As Japanese prefectures strengthen and promote their regional care plans, it is vital to examine how doctors contribute to primary care in remote clinics and other facilities that are located in depopulated areas as well as the issues that they face. The objective of the present study was to examine how doctors contribute to primary care at outpatient clinics in depopulated areas in Hokkaido using a qualitative research methodology. Our findings are expected to contribute to future advances in regional care plans and promote primary care practice in depopulated areas in Hokkaido. Definitions of key terms Primary care is defined as the collective medical and welfare-related capabilities of a region to respond to any kind of health issue or disease within its population comprehensively, continuously, and holistically. A clinic in a depopulated area is defined as a remote clinic or a public medical clinic that lacks hospitalization facilities (i.e., is limited to outpatient care) and is located in a designated depopulated area. Study design Our study employed a qualitative research design. Qualitative research methods facilitate the discovery of new aspects of a phenomenon or the development of a new theory based on empirical data rather than the testing of an a priori hypothesis 15) . They can also be used to analyze specific examples or cases based on their temporal and geographical characteristics 16) . Expert interviews are particularly useful in this regard because researchers can access the collective wisdom of a variety of professionals, each of whom possesses unique practical knowledge that has been garnered by working in a specific professional domain for many years, and conceptually reorganize their wisdom into new theories that are related to the research topic 17) . We adopted a qualitative research methodology to examine how specific practices and beliefs among doctors who work in clinics in depopulated areas in Hokkaido contribute to primary care in such communities.
Participants Our target population comprised physicians in charge of public medical clinics without hospitalization facilities (i.e., capable of providing only outpatient care) that were managed by small municipalities in Hokkaido. Fifty-one doctors who met this criterion and were serving as the directors of 51 clinics open to the public 18) were invited to participate in the study. Specifically, a sealed envelope, which contained a document that requested their participation and a consent form, was mailed to them. The study sample consisted of six doctors who responded to the invitations by submitting signed consent forms. Data collection Each participant was visited at his workplace. They participated in a semi-structured interview that was conducted with the aid of an interview guide that the researchers had developed based on the conventions of past studies 11,12) . Questions regarding (a) the characteristics of the communities that they served, visiting patients, and staff who were involved in primary care, (b) current deployment and availability of medical resources, and (c) their own beliefs about their contributions to primary care were posed. Interviews were conducted in a private room to protect their privacy, and their responses were recorded using a digital voice recorder. The durations of the interviews ranged from 24 to 36 minutes (M = 29.7 ± 5.0 minutes). Data were collected from October 2016 to November 2017. Analysis The audio recordings were transcribed into textual data, which were then analyzed in accordance with standard methodological conventions for qualitative research 19) . First, the researchers carefully read through each participant's transcript, identified content (and surrounding text for context) that appeared to be related to the current state of primary care in clinics in depopulated areas in Hokkaido, and assigned primary codes to these texts to gain an overall understanding of the data. Next, these primary codes were used to generate secondary codes; care was taken to ensure that semantic content was not lost during this process. Secondary codes were in turn used to generate a final set of codes that had a high level of abstraction. These final codes were scrutinized for generality and specificity, merged into a final dataset that represented information that had been obtained from all the participants, and further generalized into categories and subcategories. To improve the credibility of the analytic process and results, the researchers visited all the participants a second time so that they could recheck the results 20) . Specifically, they were asked to identify any discrepancies between the excerpts and data and what they had said during the interview and verify whether their intended meaning had been correctly captured. All members of the research team participated in the analytic process until no new categories could be generated. Additionally, the suitability of all decisions were scrutinized until a consensus was reached. Ethical considerations This study was conducted with approval of the Ethics Committee of Sapporo City University (date of approval: August 29, 2016). Participants were provided with documents that informed them about the study objective and methods and their right to withdraw from the study at any time. They were also assured that their anonymity would be protected. The same information was also provided verbally.
Written consent was obtained from all participants. Results All the six doctors were men (M age = 56.2 ± 12.3 years; 30-39 years: n=1, 50-59 years: n=3, and >60 years: n=2). Their mean years of experience in the current position was 8.2 ± 5.0 years (<5 years: n=1, 5-9 years: n=2, and 10-14 years: n=3). Four of them worked in designated rural clinics. Table 1 presents a list of specific physician practices and beliefs that were found to contribute to primary care in outpatient clinics in depopulated areas in Hokkaido. These practices and beliefs were grouped into 5 superordinate categories and 18 subcategories. Our analysis revealed that codes pertaining to the praxis of physicians who were working in clinics in depopulated areas in Hokkaido could be classified into five categories. In the following sections, categories are presented in boldface, subcategories are presented within quotation marks, and codes are italicized. Clinical praxis in accordance with residents' lifestyles and life stages This category was constituted by four subcategories. The doctors lived in the same community in which they practiced; this positionality made them sympathetic to the geographical challenges faced by the community residents. Accordingly, doctors' comments were consistently indicative of their "sympathy toward community residents because of the geographical difficulties associated with continuing to live in these areas". This subcategory subsumed codes such as the following: some residents who are older adults leave the area once they are no longer able to handle the difficulties that result from heavy snowfall and mobile catering plays a major role in the community's food supply. Such experiences had made the doctors acutely aware of the role that a clinic plays in a depopulated area. More specifically, the doctors articulated that they had an "obligation to see patients of all ages who may present with a wide variety of diseases". Moreover, doctors were also "mindful of the health issues that are unique to depopulated areas and are associated with residents' lifestyles and life stages". This finding typified by codes such as seeing adult and older patients for issues unique to rural communities (e.g., salt overconsumption). To fulfill these roles, doctors had adopted innovative approaches in their clinics and sometimes coordinated their provision of care with government agencies to enhance healthcare services and facilitate the diagnosis and treatment of common diseases. More specifically, such a tendency was captured by responses that pertained to doctor's efforts to ensure "enhanced healthcare provision to facilitate the diagnosis and treatment of frequently encountered diseases". This subcategory subsumed codes such as improved ability to handle many diseases by purchasing medical equipment at one's own expense, enlisting the help of city halls, etc. Innovative care provision based on residents' conditions This category was constituted by four subcategories. In addition to ensuring the clinics had the equipment and medicines that were needed to treat frequently encountered diseases, doctors were also conscientious about "endof-life care provision based on the severity of the patients' conditions". They articulated that their mission was to help patients who were in need of end-of-life care as well as patients with dementia who continue to live in their communities during their last days. 
The doctors' efforts in this regard are exemplified by codes such as visiting patients at their homes or in other hospitals (i.e., if they were frail or bedridden) to care for them (e.g., provide home oxygen therapy). Moreover, as the following subcategory explicates, doctors had also created innovative methods to help people with dementia by coordinating care provision with social resources in their communities: "care innovations for people with dementia that were developed in coordination with social resources in the community". This subcategory subsumed codes such as a) dementia care that is determined on a case-by-case basis, b) enlisting the ability of residents to help one another, and c) using long-term care facilities that are warranted by a patient's condition. Doctors spoke about (a) how their clinics were organized, (b) how they met the healthcare needs of a community that was spread across a wide physical area, and (c) how they had modified their practices in accordance with limited workforces and resources. Moreover, doctors were also responsible for ensuring that "the organization and functions of the clinic were modified to meet the community's medical needs". This phenomenon was captured by codes such as healthcare has evolved in accordance with the community's medical needs and arrangements have been made to transport patients by helicopter to other centers in the event of an emergency. The subcategory, "modified practices as a result of limited workforce and resources", subsumed codes such as clinic patients utilize day rehabilitation services at a nearby geriatric health center and running the clinic with limited professional staff and equipment. Routine care in partnership with other healthcare providers and associated stakeholders This category comprised two subcategories. Doctors spoke about their "coordination with secondary and core hospitals in situations that had required urgent care or specialist expertise". Related subcategories included codes such as refer interested patients to hospitals with specialists in neighboring towns or cities for some conditions and provide comprehensive care by coordinating with other medical centers in the region through partnerships with several professionals and secondary hospitals. In addition, the doctors drew attention to the "prevention and awareness activities that they had conducted in association with government health authorities". Specifically, doctors had collaborated with government health authorities to conduct prevention and educational activities, such as need-based seminars, which they had provided to communities, at their behest, to promote resident health.
These subcategories consisted of codes such as believing that the influenza prevention programs that were conducted in association with government health authorities were effective and conducting need-based seminars among community residents at the behest of government health authorities. Beliefs and feelings of pride that are associated with working as doctors in clinics in depopulated areas This category was constituted by four subcategories. The doctors' routine practices were rooted in a "belief in the need to scrupulously examine every patient who visits the clinic, irrespective of his or her age or disease". This subcategory subsumed codes such as ascribing the utmost importance to the task of interviewing and examining every patient who visits the clinic and regarding it as a fundamental rule. Doctors were "proud of their abilities as general practitioners, which were grounded in practical experience", and this subcategory subsumed codes such as able to examine and treat patients, despite a lack of diagnostic imaging equipment. In addition, the subcategory, "desire to be a familiar and trusted doctor", made it apparent that doctors valued building close and trusting relationships with their patients. This sentiment was captured by codes such as wanting to be a doctor whom locals trust because they can diagnose and treat a wide variety of diseases. In addition, doctors expressed an "intention to train the next generation of doctors who will be responsible for providing primary care". This subcategory subsumed codes such as believing that educational support is necessary to not only train new staff so that they are capable of providing primary care but also encourage them to return to and practice in depopulated areas. Difficulties in guaranteeing reliable and continuous operation of clinics in depopulated areas This category comprised four subcategories. The doctors spoke about the "burden of having to shoulder extensive responsibilities, including dealing with medical device manufacturers, all by themselves". This subcategory subsumed codes such as suspicious of medical device manufacturers who are too quick in recommending new and expensive models. In addition, the subcategory, "perceptions of personal inadequacy, when one is unable to provide residents with sufficient care as a result of the inherent characteristics of depopulated areas", indicated that doctors experienced a sense of inadequacy. The inherent characteristics of depopulated areas, to which the doctors were referring in this context, were captured by codes such as too few patients seek care at the clinic because of a complex web of factors (e.g., remote location, limited working hours) and previously proposed expansion of the public transportation system because the existing bus network was insufficient for use by patients who needed to visit the clinic. Younger doctors also spoke about their "difficulties in managing and running the clinic while also trying to lead a normal family life". This subcategory subsumed codes such as young doctors are forced to live away from their families because other distant regions offer better educational opportunities and currently trying to work at clinics that are at a reasonable distance from one's home because continuing to commute long distances every day is stressful and difficult. Citing these reasons, the doctors also underscored their "difficulties in recruiting doctors and the consistent demand for medical care at the clinic". 
This subcategory subsumed codes such as the survival of a clinic is threatened by the outflow of patients outside the area and major problems in ensuring that clinics, medical centers, and secondary hospitals have enough doctors on staff. Discussion Our analysis revealed that responses pertaining to the praxis of physicians who were working in clinics in depopulated areas in Hokkaido could be classified into five categories. One of these categories was related to issues that they had encountered in the provision of medical care. First, we describe the characteristics of the four categories that were related to doctors' work and subsequently discuss them with reference to the key principles of primary care; finally, we discuss the issues that are currently faced by clinics in depopulated areas in Hokkaido. Physician praxis in clinics in depopulated areas The doctors who were surveyed in the present research were sympathetic to the daily difficulties that were faced by residents of depopulated areas, and these feelings were rooted in their own statuses as members of the community. Takeda and her colleagues have noted that any discussion on healthcare provision in depopulated areas must fundamentally examine residents' lifestyles 21) . Our findings suggest that these doctors were cognizant of the relationship between residents' lifestyles and health problems in their provision of treatment and care. This behavior was abstracted into a superordinate category: clinical praxis in accordance with residents' lifestyles and life stages. Previous studies have shown that specific diseases, including certain orthopedic diseases 22) , predominantly affect depopulated areas and that there is an association between frailty and pneumonia treatment among older adults 23) . These populations also require adequate treatment for a variety of other illnesses such as psychiatric diseases 24) , diabetes 25) , and stroke 26) . Further, the responses of doctors who were surveyed in the present study revealed that there was a need to diagnose and treat a variety of illnesses and injuries, particularly lifestyle diseases. The participating doctors' praxis was also captured by another category: innovative care provision based on residents' conditions. This category pertained to the following: (a) supplying their clinics with certain equipment and medicines to facilitate the treatment of frequently encountered diseases, (b) being conscientious about helping end-of-life patients and patients with dementia so that they can continue living in their communities during their final days, and (c) utilizing community-based social resources to create innovative ways of providing healthcare. In addition, our findings revealed that doctors endeavored to refer interested patients to hospitals with specialists that were located in neighboring towns or cities for the treatment of certain diseases. Takeda and colleagues have noted that discussions on healthcare provision in depopulated areas must pay due attention to human and physical medical resources in neighboring medical care zones 21) . The details of this practice were captured by another category: routine care in partnership with other healthcare providers and associated stakeholders.
Finally, our abstraction of the superordinate category, beliefs and feelings of pride that are associated with working as doctors in clinics in depopulated areas, revealed several psychological and emotional factors that help physicians fulfill their duties in depopulated areas. Horikoshi et al. reported that being hospitalized in an institution that is far away from one's home is a major burden to both inpatients and their families 27) . Other researchers have found that community-based clinics make it easier for healthcare workers to provide holistic care based on the inherent characteristics of and familial relationships within the community 28) . Moreover, the doctors who participated in our study aimed to provide comprehensive care because they believed that it was important to scrupulously examine every patient who walks through their door as an individual and understand the lifestyles and backgrounds of the residents who had been living in depopulated areas. Physician praxis in relation to the principles of primary care In this section, we discuss the broader significance of the four aforementioned categories within the framework of the five key principles of primary care that have been proposed by the Japan Primary Care Association: comprehensiveness, continuity, coordination, accessibility, and accountability 13) . The category, clinical praxis in accordance with residents' lifestyles and life stages, corresponds to the primary care principle of comprehensiveness, which refers to "medical care from childhood to old age, from prevention to treatment and rehabilitation, with an emphasis on holistic care" 13) . As school doctors, physicians manage the health of infants, young children, and students, and as comprehensive care providers, they conduct prevention and awareness activities that are related to health maintenance and promotion among adults and older adults; further, they diagnose and treat a variety of diseases. The category, innovative care provision based on residents' conditions, corresponds to the primary care principle of continuity, which refers to "continuous care befitting patients' condition, from cradle to grave, in sickness and in health" 13) . The doctors who participated in this study adopted ingenious approaches to provide their patients who had been living in depopulated areas with continuous care, despite limited medical resources. In addition, they had taken efforts to customize the provision of end-of-life care based on the severity of a patient's condition; in this manner, they embodied the principle that physicians must provide continuous care "from the cradle to the grave. The category, routine care in partnership with other healthcare providers and associated stakeholders, corresponds to the primary care principle of coordination, which refers to "close relationships with specialists, coordination with team members and residents, and utilizing socialmedical resources" 13) . The doctors who participated in the present study had partnered with support and core hospitals, when their patients had required emergency or specialist treatment. Finally, the category, beliefs and feelings of pride that are associated with working as doctors in clinics in depopulated areas, corresponds to the primary care principle of accessibility, which refers to "geographical, financial, temporal, and mental proximity" 13) . The ideals of accessibility are also embedded within all the other categories that were identified in this study. 
Doctors build trusting relationships with patients across repeated visits, and they commonly become a familiar face within the community. The forging of such trusting relationships is motivated by their need to scrupulously examine each patient who visits them, irrespective of their age or presenting problem. In addition, this category also resonates with the principle of accountability, which refers to the internal audit and lifelong learning processes that doctors must undergo as well as their responsibility to provide their patients with complete explanations about their treatments. The doctors who participated in this study wanted community members to trust them because they are equipped to diagnose and treat a wide variety of diseases. In addition, they expressed a wish to train junior physicians in what they have mastered through their primary care praxis and nurture the next generation of physicians. Issues facing clinics in depopulated areas in Hokkaido One practical issue that affects doctors' ability to provide primary care in depopulated areas was captured by the category, difficulties in guaranteeing reliable and continuous operation of clinics in depopulated areas. The doctors had experienced a sense of personal inadequacy when they were unable to provide sufficient care to patients as a result of depopulation-related factors. Further, younger physicians found it difficult to continue to run their clinical practice while also trying to lead a normal family life. Ozone and colleagues 29) have reported that, while community residents feel reassured by the availability of a physician in their local clinic, they are selective about the care behaviors that they consider to be necessary based on their circumstances. Our findings support this notion; specifically, residents of depopulated areas perceive such clinics to be indispensable but continue to visit other medical facilities within the region depending on their circumstances. This is a serious problem that can adversely affect the steady influx of patients in clinics in depopulated areas. One pertinent contributing factor is the widespread use of the internet, which has made it easier for individuals to make their own decisions about where and how to seek treatment. Moreover, high rates of private car ownership, a robust high-speed intercity bus system, and extensive highway networks make it more convenient for patients to receive medical care from institutions that are located far away from home. These investments in industrial and social infrastructure and changing conventions suggest that living in a depopulated area today means something different than what it did years ago. Residents of depopulated areas have more treatment options at present than they did in the past, and this allows them to pursue treatment independently without having to visit their local clinic. Future work is needed to determine the patterns that underlie the treatment-seeking behaviors and healthcare needs of these individuals, and researchers should continue to discover ways of partnering with nearby support and specialist hospitals based on their wishes. However, one could also argue that these clinics allow healthcare workers to better customize and refine primary care (e.g., visiting and caring for patients who are confined to their homes). The results of the Global Burden of Diseases, Injuries, and Risk Factors Study indicated that nonlethal illnesses and injuries that are associated with aging have increased the world's disease burden 30) . 
These conditions are already pressing issues in depopulated areas. What types of healthcare are we obligated to provide in depopulated areas? This issue should not be solely addressed by doctors who provide such care; instead, they must address it in collaboration with community residents and government authorities. We hope that our findings serve as a useful reference that shapes the future of rural medicine. Study limitations One of the limitations of this study is the small size of the sample that was used. In addition, our sample did not include doctors who work in clinics with hospitalization facilities (i.e., capable of providing inpatient treatment) and private practitioners. Further, our study focused exclusively on depopulated areas in Hokkaido. However, Japan is a large and diverse country with distinct regional characteristics. Its borders stretch across thousands of kilometers that span from the north to the south, it is bounded by the ocean on all sides, and it has many outlying islands. Therefore, future research studies should broaden their definitions of the target demographics to examine the specific ways in which healthcare workers contribute to primary care in clinics in other depopulated areas in Japan. Conclusion Our findings delineate the specific ways in which physicians contribute to primary care in outpatient clinics in depopulated areas in Hokkaido. The doctors who participated in this study had innovatively adapted their care practices to meet the unique needs of their patients, and they provided routine care in partnership with other healthcare providers and associated stakeholders. Beliefs and feelings of pride that are associated with working as doctors in clinics in depopulated areas reinforced these behaviors. On the other hand, the findings also underscored physicians' difficulties in ensuring the stable operation of clinics and the need to focus on issues such as maintaining a sufficient supply of patients in local clinics and training the next generation of physicians. Taken together, our findings delineate doctors' contributions to primary care in clinics in depopulated areas.
A Power Analysis for Model-X Knockoffs with $\ell_{p}$-Regularized Statistics

Variable selection properties of procedures utilizing penalized-likelihood estimates are a central topic in the study of high dimensional linear regression problems. Existing literature emphasizes the quality of ranking of the variables by such procedures as reflected in the receiver operating characteristic curve or in prediction performance. Specifically, recent works have harnessed modern theory of approximate message-passing (AMP) to obtain, in a particular setting, exact asymptotic predictions of the type I-type II error tradeoff for selection procedures that rely on $\ell_{p}$-regularized estimators. In practice, effective ranking by itself is often not sufficient because some calibration for Type I error is required. In this work we study theoretically the power of selection procedures that similarly rank the features by the size of an $\ell_{p}$-regularized estimator, but further use Model-X knockoffs to control the false discovery rate in the realistic situation where no prior information about the signal is available. In analyzing the power of the resulting procedure, we extend existing results in AMP theory to handle the pairing between original variables and their knockoffs. This is used to derive exact asymptotic predictions for power. We apply the general results to compare the power of the knockoffs versions of Lasso and thresholded-Lasso selection, and demonstrate that in the i.i.d. covariate setting under consideration, tuning by cross-validation on the augmented design matrix is nearly optimal. We further demonstrate how the techniques allow us to also analyze the Type S error, and a corresponding notion of power, when selections are supplemented with a decision on the sign of the coefficient.

Introduction

Suppose that we observe a matrix X ∈ R^{n×p} of the measurements of p predictor variables on each of n subjects, and a response vector Y ∈ R^n, and assume the linear model
Y = Xβ + ξ, (1.1)
where ξ ∈ R^n is a noise vector and β_j = 0 for most j = 1, ..., p. For example, in genetics X_{ij} might encode the state (presence or absence) of a specific genetic variant j for individual i, and Y_i measures a quantitative trait of interest. Typical cases entail the number p of genetic variants in the millions but, for all we know about this kind of problem, only a small number of them may have significant explanatory power. Finding mutations which are in that sense important among the p candidates is key to investigating the causal mechanism regulating the trait. Following recent literature [2, 9, for example], here we treat the problem formally as a multiple hypothesis testing problem with respect to the model (1.1), where the null hypotheses to be tested are H_{0j}: β_j = 0, j ∈ H ≡ {1, ..., p}. Denote by H_0 ≡ {j : β_j = 0} the (unknown) subset of nulls, and denote by S ≡ H \ H_0 the subset of nonnulls. In general, a multiple testing procedure uses the data to output an estimate Ŝ ⊆ {1, ..., p} of S. For any such procedure we define the false discovery proportion and the true positive proportion as
FDP ≡ |Ŝ ∩ H_0| / |Ŝ| and TPP ≡ |Ŝ ∩ S| / |S|,
respectively, with the convention 0/0 ≡ 0. A good testing procedure is one for which TPP is large and FDP is small, meaning that the test is able to separate nonnulls from nulls. We will later be concerned with the concrete problem of controlling the false discovery rate, FDR ≡ E[FDP], below a prespecified level, and we say that a test is valid at level q if FDR ≤ q for all β.
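For concreteness, the following minimal sketch (ours, not code from the paper) computes the empirical FDP and TPP of a given selection, using the 0/0 ≡ 0 convention stated above.

```python
import numpy as np

def fdp_tpp(selected, beta):
    """Empirical FDP and TPP of a selection, with the 0/0 = 0 convention.

    selected: boolean array of length p, True where H_0j is rejected.
    beta:     true coefficient vector; beta_j == 0 marks a null.
    """
    selected = np.asarray(selected, dtype=bool)
    nonnull = np.asarray(beta) != 0
    n_sel = selected.sum()
    n_sig = nonnull.sum()
    fdp = (selected & ~nonnull).sum() / n_sel if n_sel > 0 else 0.0
    tpp = (selected & nonnull).sum() / n_sig if n_sig > 0 else 0.0
    return fdp, tpp
```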
Note that, per definition, any variable selection procedure qualifies as a testing procedure and vice versa, and we will use the two terms interchangeably. Selecting variables by thresholding regularized estimators With a growing interest in high-dimensional (large p) settings, considerable attention has been given over the past two decades to variable selection procedures relying on the Lasso program, The Lasso is appealing because it is relatively easy to solve and at the same time the solution to (1.2) tends to be sparse. Thus, for any λ > 0, if β(λ) denotes the solution to (1.2), variable selection is readily elicited by associating with β(λ) the subset S ≡ {j : β j (λ) = 0}, (1.3) which will be referred to as Lasso selection for the rest of this paper. Many works have studied the properties of Lasso selection, mostly establishing conditions on X and β for selection consistency, P( S = S) → 1, e.g. [14,27,29,20,7,12]. Such conditions turn out to be generally very stringent even in the noiseless case, σ 2 = 0; in other words, the fundamental phenomenon is not a matter of insufficient signal-to-noise ratio. While the conditions for (1.3) to recover a superset of the true support S, also referred to as screening, are considerably less restrictive, it tends to select too many null variables (see, e.g., [26,7,28]). This rather discouraging fact has motivated practitioners and theoreticians alike to consider as an alternative the procedure that takes into account the magnitude of the estimate by setting S ≡ {j : | β j (λ)| > t}, (1.4) for some threshold t > 0 [28,26,19,8,16], to which we refer from now on as thresholded-Lasso selection. Even more generally, one may consider, as in [22], replacing the Lasso estimator β(λ) in (1.4) with some bridge estimator, where b γ γ ≡ |β j | γ , and we use the symbol γ from here on instead of the more standard notation p (as in the title) because p is already taken (denotes the number of columns in X). The optimization problem (1.5) retains computational convenience because it is still convex, and at the same time produces a richer family of thresholded-bridge selection procedures, S ≡ {j : | β j (γ, λ)| > t}, (1.6) for some γ ≥ 1 and a threshold t > 0. In [22] the above selection procedure is referred to as a twostage variable selection technique, separating the ranking by the absolute value of the regularized regression estimator, and the thresholding at t > 0. In principle, the parametric curve that associates the expectations of FDP and TPP with every λ > 0 for (1.3), and with every t > 0 for (1.6), could be used to measure the quality of ranking for Lasso selection or for thresholded-bridge selection with different choices of γ > 1. For fixed n and p, however, there are no tractable forms for β j (γ, λ) in general (the special case γ = 2 is an exception), and the expected FDP and TPP are also intractable functions of β and σ 2 . Remarkably, in a certain asymptotic regime and under some further modelling assumptions, it is possible to calculate the limits of FDP and TPP for (1.3) at any fixed λ > 0, and for (1.6) at any fixed γ ≥ 1 and t > 0. More specifically, in a special case where X has i.i.d. Gaussian entries, p and n grow comparably, and the sparsity is linear, |S| ≈ p, [6] leveraged major advances from [3,4] to first obtain exact asymptotic predictions of FDP and TPP for Lasso selection. In [15] a fundamental quantitative tradeoff between FDP and TPP for Lasso, valid uniformly in λ, was presented by extending the aforementioned results. 
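To illustrate the distinction between Lasso selection (1.3) and thresholded-Lasso selection (1.4), here is a small simulation sketch under hypothetical parameter values; sklearn's `alpha` is taken to equal λ/n, assuming the standard residual-sum-of-squares scaling of the Lasso objective in (1.2).

```python
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(0)
n, p, k, sigma = 500, 1000, 50, 1.0           # hypothetical dimensions and noise level
beta = np.zeros(p)
beta[:k] = 3.0
X = rng.normal(scale=1.0 / np.sqrt(n), size=(n, p))
y = X @ beta + sigma * rng.normal(size=n)

# Path of Lasso solutions over a grid of penalties (sklearn's alpha = lambda / n).
alphas, coefs, _ = lasso_path(X, y, n_alphas=50)
b = coefs[:, 25]                               # estimates at one penalty level on the path

# Lasso selection (1.3): all active coordinates.
sel_lasso = b != 0
# Thresholded-Lasso selection (1.4): active coordinates with magnitude above t.
t = 0.5
sel_thresh = np.abs(b) > t

for name, sel in [("lasso", sel_lasso), ("thresholded", sel_thresh)]:
    fdp = np.sum(sel & (beta == 0)) / max(np.sum(sel), 1)
    tpp = np.sum(sel & (beta != 0)) / k
    print(name, round(fdp, 3), round(tpp, 3))
```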
Recently, [22] obtained predictions of FDP and TPP for thresholded-bridge selection with any γ ≥ 1, which covers in particular thresholded-Lasso selection. The main purpose in [22] is to analyze the power corresponding to different choices of γ ≥ 1 in (1.6), and compare them in different regimes of the signal. In particular, while the results of [15] imply that Lasso cannot achieve exact support recovery in this asymptotic setting, [22] show that using thresholded-Lasso can improve dramatically the separation between null and nonnulls if λ is chosen appropriately. This provides rigorous confirmation for the advantages of thresholded-Lasso, which have long been noticed by practitioners. Also, the analysis in [22] reinforces the results of [16], which imply that in the same asymptotic setting, thresholded-Lasso indeed achieves exact support recovery if the signal-to-noise ratio is high and the limiting signal sparsity is below the transition curve of [11]. A "vertical" look at the Lasso path Before proceeding to describe the main focus of this paper, we take a moment to reflect on the basic differences between Lasso selection and thresholded-Lasso selection that account for the potential power increase reported in, e.g., [22]. At first glance, the two selection rules might not appear that different, because (1.3) is just (1.4) with t = 0. There is, however, a fundamental difference between Lasso and thresholded-Lasso. To illustrate this, we simulated data from the model with n = 100, p = 200, σ = 1, and the coefficients are all zero except for β 1 = . . . = β 20 = 10. Figure 1 tracks the absolute value of the Lasso estimates β j (λ) as a function of λ, for null coefficients and for nonnull coefficients. In Lasso selection variables are collected in the order they become active, β j (λ) = 0, as λ decreases; pictorially, this corresponds to looking at the selection path "horizontally" along the λ axis. We can see that false discoveries occur early on the Lasso path (as studied and confirmed in [15]). Consequently, (1.3) cannot keep FDP small unless λ is chosen large, which inevitably affects the power: in this example the maximum TPP for (1.3) subject to FDP≤ 0.1 is 0.45. Nevertheless, it is also evident from the figure that the estimates corresponding to true signals maintain significantly larger size than most of the estimates for nulls, as λ decreases. This suggests that better separation between null and nonnulls can be achieved by looking further down the path (smaller λ) and ordering the variables according to the magnitude of the corresponding estimates; pictorially, this corresponds to looking at the selection path "vertically", as represented by the dashed line at λ = 1.05. The additional flexibility in varying the threshold allows (1.4) to take advantage of this: basically, λ can be chosen freely, while setting t appropriately large will ensure small FDP (by killing small estimates corresponding to null coefficients). The potential advantage is demonstrated in the figure by the broken line, indicating the 10-fold cross-validation estimate of λ. At this value of λ, for example, thresholded-Lasso has TPP equal to 0.95 when t is selected such that FDP ≤ 0.1. Calibration for Type I error The works of [15] and [22] are important because they facilitate a sharp theoretical comparison between Lasso selection and the thresholding selection procedures (1.6). 
In practice, however, the implications are limited: the analysis in these works will yield the achievable asymptotic FDP for a prescribed asymptotic TPP level at any given λ for (1.3), and at any given t for (1.4), provided that σ and the empirical distribution of the true coefficients β j are known. In reality, such a priori knowledge about the signal and the noise level is rarely available, and the FDP needs to be estimated instead. This motivated [24] to study a knockoffs-augmented setup and obtain an operable counterpart to the "oracle" FDP-TPP curve of [15] for Lasso selection. By "operable" we mean that the power predictions of [24] apply to a procedure that provably controls the FDR for fixed n, p without any knowledge about β or σ. Seeking to increase power while maintaining type I error control, in the present article we obtain an operable analog to the FDP and TPP predictions of [22] for the thresholding selection procedures (1.6), with special attention given to thresholded-Lasso selection. As in [24], we employ knockoffs to allow for FDR calibration, observing that the augmented setup can still be studied within the same AMP framework. However, there is a crucial point of departure between our work and [24] also in the type of knockoffs used: while the construction of [24], reviewed briefly in Section 2.3.1 and referred to as "counting" knockoffs in the sequel, is valid only when the entries of X are i.i.d., here we use the more general prescription of Model-X knockoffs from [9]. The counting knockoffs scheme studied in [24] is something that the analyst would only implement if it were known that the covariates were i.i.d., as it controls FDR only in this limited setting. In contrast, the model-X knockoffs procedure is something that is widely used across a broad range of regimes, and has valid FDR control far beyond the i.i.d. design setting. While our power analysis for this method is, at present, restricted to the i.i.d. setting, the results in the current paper are far more useful since the analysis accommodates a much more general and broadly used kncokoff scheme (and the power analysis can hopefully be extended beyond the i.i.d. setting in future work). To further justify studying Model-X knockoffs for i.i.d. covariates, it is important to emphasize that-perhaps not obviously so-the i.i.d. setting is very different from the orthogonal setting: for example, the discussions in [5, Section 3.2.1] and in [15,Section 3] regarding Lasso, imply that due to shrinkage, even small sample correlations between the realized columns of X generate additional "noise" as an artifact, which increases the variance of the estimates. This makes the analysis quite different, and more involved, as compared to the orthogonal X case. Specifically, the level of this noise increases with the ratio p/n and depends non-monotonically on the tuning parameter λ, see also Figures 3 and 6 below. That explains, informally, why choosing an appropriate value of λ (i.e., a value that makes the power large) is far from trivial already in the i.i.d. covariate case under consideration here. Regarding the comparison with [24], besides the main difference mentioned above in the type of knockoffs, it is worth emphasizing that by implementing Model-X knockoffs, we also obviate the problem of estimating the proportion of nonnulls (i.e., the sparsity), which was a nontrivial issue to handle with counting knockoffs and involved an extra tuning parameter. 
Table 1 indicates where our work fits in the context of existing literature analyzing variable selection with bridge-penalized statistics in the AMP framework. Our contribution As implied in the previous subsection, the thrust of this article is to develop mathematical tools enabling exact power analysis of Model-X knockoffs procedures, and to study consequences of the resulting analysis. We summarize below our main results. I. An extension of AMP theory. In the Model-X knockoffs framework the statistic used for ranking the variables will involve both β j (γ, λ) and its knockoff counterpart. Therefore, we need to study aspects of their joint distribution, rather than just the marginal distribution of β j (γ, λ) as in [24]. To accommodate this, we present a technical extension of existing AMP results, which underlies our analysis, but may be of independent interest and have broader implications. The challenge is, in essence, to extend the convergence results in [3] so they apply to functions (of the regression coefficients and their estimates) which are not symmetric with respect to all variables 1, ..., p. This basic result is formulated in Theorem 1 (Section 3), and in turn facilitates calculation of the power curves in Corollaries 3.4 and 3.4. II. Asymptotic power predictions for Model-X knockoffs. To give an example of the consequences of the theoretical analysis in the current paper, the right panel of Figure 2 shows asymptotic FDP versus TPP as predicted by the theory for Lasso (1.3) and thresholded-Lasso (1.4) in both the oracle and knockoff versions. Here the undersampling ratio δ ≡ lim(n/p) = 1, the noise level σ = 1, and β j has a mixture distribution of point mass at M = 4.3 with probability = 0.1, and at zero with probability 0.9. In the figure broken lines represent the oracle procedures, and solid lines correspond to knockoff procedures. For thresholded-Lasso curves are shown for both Model-X knockoffs (as proposed in the current paper, and depicted in solid black in the figure) and counting knockoffs with r = p fake columns (solid red). For Lasso selection, predictions with Model-X knockoffs are actually harder to obtain, because (2.4) is not as useful an approximation when W -statistics are considered, so only the curve for counting knockoffs is shown (solid grey). Importantly, the "oracle" version of thresholded-Lasso is implemented here with the optimal value for λ, see the discussion in Section 4. For the Model-X knockoffs version of thresholded-Lasso, the value of λ used here is the limit of the (10-fold) cross-validation estimate, denoted later by λ cv . Comparing first the two oracles, it is clear that thresholded-Lasso has a significantly better tradeoff curve: for example, FDP is about 25% by the time Lasso detects 80% of the signals, whereas thresholded-Lasso is able to detect about 90% of the signal with the same FDP. Turning to the knockoff procedures, it can be seen that counting knockoffs performs slightly better than Model-X; the reason is that counting knockoffs use the lasso coefficient size itself instead of the difference W j , but this is a small price to pay for Model-X in return for a much more general method. More importantly, both knockoff versions for thresholded-Lasso perform substantially better than the knockoffs version of Lasso, in fact much better than the oracle version for Lasso, and even the universal lower bound of [15] on FDP (see the next paragraph). For example, knockoffs still attains TPP of about 80% with FDP just above 10%. III. 
Thresholded-Lasso selection breaks through the power-FDR tradeoff diagram of [15] also when knockoffs are used for calibration. In [15] a power-FDR tradeoff diagram is provided for Lasso selection, which specifies the upper limit on the asymptotic power subject to maintaining FDR ≤ q ∈ (0, 1); this diagram depends on the undersampling ratio δ = n/p and the sparsity ε, but holds independently of the magnitude of the nonzero regression coefficients. A consequence of the results in [15] is that, when the sparsity ε > 0, Lasso selection will fail to exactly recover the true model. In a recent article, [16] proved that the aforementioned tradeoff diagram does not apply to thresholded-Lasso selection. More specifically, it is shown that for any value of the tuning parameter λ, proper thresholding of the Lasso coefficient estimates identifies the true model as long as the signal is strong enough, and provided ε < ρ(δ), where ρ(·) is the famous Donoho-Tanner phase transition curve. However, the appropriate threshold depends on unknown parameters such as the sparsity ε and the signal magnitude, hence the practical significance of the results in [16] is limited. In Theorem 2 we give a more quantitative result for the setting considered in the current paper, proving that a Model-X knockoffs analog of the thresholded-Lasso procedure still breaks through the tradeoff diagram of [15]. Thus, for any q ∈ (0, 1), Model-X knockoffs equipped with the Lasso coefficient-difference [9, LCD hereafter] statistic achieves power arbitrarily close to 1 if the signal is strong enough and the sparsity ε is below the transition curve corresponding to the augmented design. IV. Optimal λ is well approximated by cross-validation. For a fixed q, the performance in terms of achievable TPP of the oracle thresholding selection procedure (1.6) that has (asymptotic) FDP level q in general depends strongly on λ. For thresholded-Lasso (γ = 1) this is demonstrated in [22], where a characterization is also given for the value of λ that asymptotically maximizes TPP for a prescribed FDP level. When incorporating knockoffs, the analysis is more subtle because we operate with the difference in the estimate size between a variable and its knockoff counterpart, instead of the estimates themselves. While the dependence of the exact optimal λ on the unknown parameters of the problem is fairly complicated, we demonstrate that, at least in the case of i.i.d. X, the optimal λ can be well estimated by cross-validation on the augmented design. To allow incorporating this into our asymptotic predictions, in Section 4 we derive the formula for the limiting value of λ chosen by cross-validation on the augmented design.

Setup and review

2.1 Setup

Adopting the basic setting from [15], our working hypothesis entails the linear model (1.1) with σ² fixed and unknown, and we consider an asymptotic regime where n, p → ∞ such that n/p → δ > 0. We assume that the matrix X has i.i.d. N(0, 1/n) entries, so that the columns are approximately normalized. The components β_j of the coefficient vector β are assumed to be i.i.d. copies of a mixture random variable Π, equal to Π* with probability ε and to 0 with probability 1 − ε, (2.1) where ε ∈ (0, 1) is a constant and E[Π²] < ∞. Here P(Π* ≠ 0) = 1, so that P(Π ≠ 0) = ε ∈ (0, 1). With some abuse of notation, we use Π, Π* to refer to either the random variable or its distribution, but the meaning should be clear from the context. Other than having a mass at zero, Π is completely unknown, which is to say that ε and Π* are unknown. Finally, X, β, and ξ are all independent of each other.
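As a reference point for the asymptotic results that follow, the working model of Section 2.1 can be simulated as in the sketch below; the function name and the two-point choice of Π* (a point mass at M, as used in the paper's figures) are illustrative.

```python
import numpy as np

def simulate_instance(n, p, eps, M, sigma, rng):
    """Draw one instance of the working model of Section 2.1.

    X has i.i.d. N(0, 1/n) entries; beta_j are i.i.d. draws from the mixture
    that equals M with probability eps and 0 with probability 1 - eps (a
    point-mass signal prior Pi*); the noise is N(0, sigma^2 I).
    """
    X = rng.normal(scale=1.0 / np.sqrt(n), size=(n, p))
    beta = np.where(rng.random(p) < eps, M, 0.0)
    y = X @ beta + sigma * rng.normal(size=n)
    return X, beta, y

rng = np.random.default_rng(1)
X, beta, y = simulate_instance(n=1000, p=1000, eps=0.1, M=4.0, sigma=1.0, rng=rng)
```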
Many selection rules first use the observed data to order the p variables, that is, for some function g, an "importance" statistic is computed, where larger (say) values of T j presumably indicate stronger evidence against the null hypothesis that β j = 0. We assume that g has the natural symmetry property that if X is obtained from X by rearranging the columns, then g(X , y) rearranges the elements of the vector g(X, y) accordingly. 1 Given a target FDR level q, a final model can then be selected by taking where t = t(q) is a threshold that may generally depend on the observed data. For any choice of the importance statistic T (i.e., for any choice of g), we define again with the convention 0/0 = 0. In the rest of the paper we will consider importance statistics that derive from the convex program (1.5). As presented in the Introduction, the case of γ = 1 in the bridge optimization program (1.5) is of particular interest here, because in the Lasso case the estimator β(λ) itself is sparse and can be used directly for variable selection. Therefore, we generally focus on the case γ = 1 from now on, but, importantly, our new results are stated for any γ ≥ 1 to match the generality in [22]. Basic AMP predictions For the Lasso program (1.2), we start with noting that, on defining we have |{j : T j ≥ t}| ≈ |{j : β j (t) = 0}|, because only variables that drop out from the Lasso path-that is, for which β j (λ 0 ) = 0 but β j (λ 1 ) = 0 for λ 1 < λ 0 -can contribute to the difference between the quantities; see discussion in [24]. Therefore, we treat the comparison between (1.3) and (1.4) as essentially a comparison between two procedures of the form (2.2), where T j is given by (2.4) for Lasso, and by for thresholded-Lasso. In anticipation of Section 3, we call (2.4) the Lasso-max statistic, and we call (2.5) the Lasso-coefficient statistic. Remarkably, under the working hypothesis, exact asymptotic predictions of FDP and TPP can be obtained for both Lasso and thresholded-Lasso. Stated informally, Theorem 1 in [3] asserts that under our modeling assumptions, in the limit as n, p → ∞ we can "marginally" treat and we use a dot above the "∼" symbol to indicate that this holds only in that restricted sense. Above, η θ (x) ≡ sgn(x) · (|x| − θ) + is the soft-thresholding operator (acting coordinate-wise); Z ∼ N (0, 1) and independent of β; and τ > 0, α > max{α 0 , 0} is the unique solution to Furthermore, α 0 is the unique root of the equation (1+t 2 )Φ(−t)−tφ(t) = δ/2. This result underlies the analysis in [15], where it is formally shown (Lemma A.1) that with α, τ and Z as described above. For a general importance statistic T , define where the limits are in probability. We use special notation for the limiting FDP and TPP corresponding to the Lasso-max and to the Lasso-coefficient statistics: for the choice of T j in (2.4) we write fdp LM (t) and tpp LM (t), and for the choice of T j in (2.5) we write fdp LC (t; λ) and tpp LC (t; λ). In a more recent work, [22] observed that the implications of [3] can, with the necessary adaptations, be used to analyze TPP and FDP also for selection rules of the form (1.6). In particular, for thresholded-Lasso, Lemma 2.2 in [22] asserts that (2.10) It then follows that where (α, τ ) are determined by λ through (2.7). Hence, the asymptotic TPP and FDP in (2.11) depend on the value of λ at which the Lasso estimates are computed. 
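The displayed state-evolution equations referred to above as (2.6) and (2.7) did not survive extraction. In the AMP literature for the Lasso under this scaling, the pair (α, τ) is characterized by τ² = σ² + δ⁻¹E[(η_{ατ}(Π + τZ) − Π)²] together with the calibration λ = ατ(1 − δ⁻¹P(|Π + τZ| > ατ)); assuming this is the intended parametrization, the sketch below solves the system numerically for a two-point prior. The naive fixed-point iteration is only expected to converge for α above the bound α₀ mentioned in the text.

```python
import numpy as np
from scipy.stats import norm

def eta(x, theta):
    """Soft-thresholding operator eta_theta(x) = sgn(x) * (|x| - theta)_+ ."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def state_evolution(alpha, delta, eps, M, sigma, n_iter=200):
    """Solve tau^2 = sigma^2 + (1/delta) * E[(eta_{alpha*tau}(Pi + tau*Z) - Pi)^2]
    by fixed-point iteration, for the prior Pi = M w.p. eps, 0 w.p. 1 - eps."""
    z = np.linspace(-8, 8, 4001)
    w = norm.pdf(z) * (z[1] - z[0])          # quadrature weights for E over Z ~ N(0, 1)
    tau = sigma + 1.0                        # arbitrary positive starting value
    for _ in range(n_iter):
        mse = (1 - eps) * np.sum(w * eta(tau * z, alpha * tau) ** 2) \
            + eps * np.sum(w * (eta(M + tau * z, alpha * tau) - M) ** 2)
        tau = np.sqrt(sigma ** 2 + mse / delta)
    return tau

def lam_of_alpha(alpha, tau, delta, eps, M):
    """lambda = alpha * tau * (1 - (1/delta) * P(|Pi + tau*Z| > alpha*tau))."""
    p_active = (1 - eps) * 2 * norm.cdf(-alpha) \
        + eps * (norm.cdf(-alpha + M / tau) + norm.cdf(-alpha - M / tau))
    return alpha * tau * (1 - p_active / delta)

# Example: map a grid of alpha values to (tau, lambda) pairs for delta=1, eps=0.1, M=4, sigma=1.
delta, eps, M, sigma = 1.0, 0.1, 4.0, 1.0
for alpha in np.linspace(0.5, 3.0, 6):
    tau = state_evolution(alpha, delta, eps, M, sigma)
    print(f"alpha={alpha:.2f}  tau={tau:.3f}  lambda={lam_of_alpha(alpha, tau, delta, eps, M):.3f}")
```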
Theorem 3.2 in [22] further identifies the asymptotically optimal value of λ, proving that for any λ > 0, By inspection, we see that an equivalent characterization of λ * is the value of λ corresponding to the minimum τ in (2.7). This characterization is useful for computing λ * as a function of , Π * , σ 2 . Comparing the curves t → (tpp(t), fdp(t)) corresponding to (2.9) and (2.11), [22] concluded that with an appropriate choice of λ, thresholded-Lasso can improve significantly over Lasso, in the sense that a target TPP level can be achieved with much smaller FDP, and as illustrated by the dotted curves in Figure 2. Model-X knockoffs for FDR control The choice of an adequate feature importance statistic is crucial for producing a good ordering of the β j 's, from the most likely to be nonnull to the least likely to be nonnull. A separate question is how to set the thresholdt in (2.2) so that the FDR is controlled at a prespecified level. Inspired by [2], [9] proposed a general method for the random-X setting, Model-X knockoffs, that utilizes artificial null variables for finite-sample control of the FDR. Assuming that the distribution of the vector X i = (X i1 , ..., X ip ) is known (but arbitrary), the basic idea is to introduce, for each of the p original variables, a fake control so that, whenever β j = 0, the importance statistic for the j-th variable is indistinguishable from that corresponding to its fake copy. This property can then be exploited by keeping track of the number of fake variables selected as an estimate for the number of false positives. Under our working assumptions, the p components X i1 , ..., X ip are i.i.d., in which case the construction of Model-X knockoffs is trivial. Thus, let X ∈ R n×p be a matrix with i.i.d. N (0, 1/n) entries drawn completely independently of X, ξ and β, so that it holds in particular that Y and X are independent conditionally on X. We refer to [X, X] ∈ R n×2p as the augmented X-matrix. Ranking of the original p features is based on contrasting the importance statistic for variable j with that for its knockoff counterpart where, crucially, all importance statistics are computed on the augmented matrix. Thus, to obtain the analog of the selection procedure in (1.6), we first compute the (2p)-vector given by instead of (1.5), and form the differences Because X is a valid matrix of Model-X knockoffs, we have from Lemma 3.3. in [9] that the signs of the W j , j ∈ H 0 , are i.i.d. coin flips (in fact, when X ij , j = 1, ..., p, are i.i.d., as considered here, this is easy to see directly from symmetry). In the knockoffs framework, variables are selected when their W j is large, that is, wheret is a data-dependent threshold. The idea is to rely on the "flip-sign" property of the W j to chooset. Concretely, applying the knockoff filter by puttinĝ ensures that the selection rule given by (2.15) controls the FDR at level q by Theorem 3.4 in [9]. To obtain the knockoffs counterpart of thresholded-Lasso selection, we specialize (2.14) to γ = 1, recovering the Lasso coefficient-difference (LCD) statistic introduced in [9], is the Lasso solution for the augmented setup, i.e., the estimator (2.13) obtained for γ = 1. The corresponding selection procedure in (2.15) will be referred to from now on as the level-q LCDknockoffs procedure. Similarly to the notation in Section 2, we write fdp LCD (t), tpp LCD (t), respectively, for fdp(t) and tpp(t) associated with the statistic (2.17). 
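As an illustration of the pipeline just described, a minimal implementation of the level-q LCD-knockoffs procedure for i.i.d. Gaussian designs might read as follows. The knockoff+ form of the threshold, with a +1 offset in the numerator of the FDP estimate, is assumed here, and sklearn's `alpha` corresponds to λ/n; this is a sketch, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lcd_knockoffs(X, y, lam, q, rng):
    """Sketch of the level-q LCD-knockoffs procedure for i.i.d. N(0, 1/n) covariates.

    Steps: (i) draw an independent i.i.d. Gaussian knockoff matrix, (ii) run the Lasso
    on the augmented design [X, X_tilde], (iii) form W_j = |b_j| - |b_{p+j}|,
    (iv) apply a knockoff+ style threshold.
    """
    n, p = X.shape
    X_ko = rng.normal(scale=1.0 / np.sqrt(n), size=(n, p))   # valid Model-X knockoffs here
    XX = np.hstack([X, X_ko])
    # sklearn minimizes (1/(2n))||y - Xb||^2 + alpha*||b||_1, so alpha = lam / n.
    fit = Lasso(alpha=lam / n, fit_intercept=False, max_iter=50000).fit(XX, y)
    b = np.abs(fit.coef_)
    W = b[:p] - b[p:]
    # knockoff+ threshold: smallest t with (1 + #{W_j <= -t}) / max(#{W_j >= t}, 1) <= q.
    t_hat = np.inf
    for t in np.sort(np.abs(W[W != 0])):
        fdp_hat = (1 + np.sum(W <= -t)) / max(np.sum(W >= t), 1)
        if fdp_hat <= q:
            t_hat = t
            break
    return np.where(W >= t_hat)[0]            # indices of selected variables
```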
Finally, let fdp LCD (t) ≡ lim FDP(t) be the limit of the (knockoffs) estimate of FDP given in (2.16) for γ = 1. Before proceeding to the main section, we recall an alternative implementation of knockoffs for the special case of i.i.d. matrices. "Counting" knockoffs for i.i.d. matrices In the special case where X i1 , ..., X ip are i.i.d., there is in fact a simpler approach to implementing a knockoff procedure, as proposed in [24]. Instead of pairing each original covariate with a designated knockoff copy (X j withX j ), we can leverage the information that the covariates are i.i.d., and therefore exchangeable, to create a single pool of knockoff variablesX 1 , . . . ,X r that act as a "control group" simultaneously for each X 1 , . . . , X p . To be concrete, for some integer r > 0, suppose we make the matrix X of dimension n × r instead of n × p, still with i.i.d. N (0, 1/n) entries as before. Then by the symmetry in the problem, the distribution of the fitted coefficient vectorβ 1 , . . . ,β p+r (conditional on β) is unchanged under any reordering of the indices in the "extended" null set, where K 0 ≡ {p + 1, ..., p + r}. This is a stronger notion of exchangeability (all null covariates are exchangeable with all knockoff variables), as compared to the pairwise exchangeability property of the general Model-X framework (where each null X j is only exchangeable with its own knockoff copyX j ). Exploiting this stronger form of exchangeability, [24] prove FDR control-for example, we could take the procedure that rejects H 0j wheneverβ j ≥t for 2 and use AMP machinery to derive the appropriate formulas for the power. In particular, power is gained from the fact that, if we choose r to be smaller than p (e.g., r = c · p for some 0 < c < 1), the variable selection accuracy of the Lasso is better since we have n observations and p + r = p(1 + c) many covariates, rather than n observations and 2p covariates as with Model-X knockoffs. However, the counting knockoffs strategy is extremely specific to the i.i.d. design setting: if the X j 's are not themselves i.i.d. (or exchangeable), then we cannot hope to construct a single control group that can be shared by a heterogeneous set of covariates. The Model-X construction, with knockoff X j designed to pair with X j , is therefore substantially more interesting to study in terms of understanding the performance of this methodology in non-i.i.d. settings. AMP predictions for knockoffs The results presented thus far are not novel. In this section we find the asymptotic FDP and TPP for the Model-X knockoffs versions of the thresholded-bridge selection rules (1.6), in particular for the level-q LCD-knockoffs procedure, and present new results. For the knockoffs procedure to control the FDR, the i.i.d. Gaussian assumption on the p coordinates of X i = (X i1 , ..., X ip ) is by no means necessary, and there is indeed no such assumption in [9]. In the current paper, on the other hand, the goal is to compare the (asymptotic) power of the "oracle" thresholded-bridge selection procedure (1.6) to that of its knockoffs version. In particular, for γ = 1 we ultimately want to compare the curves where the quantities t ∞ (q) andt ∞ (q) are defined, respectively, as the values t ∞ andt ∞ for which Of course, how the two curves in (3.1) compare on power at every given q, depends on the underlying model, including the dependence structure among the coordinates of X i . 
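For contrast, the counting-knockoffs idea can be sketched as below. The exact calibration rule of [24], including its estimate of the sparsity and any offset, is not reproduced; the conservative p/r rescaling of the fake-column count is an assumption made purely for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

def counting_knockoffs(X, y, lam, q, r, rng):
    """Illustrative sketch of 'counting' knockoffs for i.i.d. designs.

    A single pool of r fake i.i.d. columns serves as a control group for all p
    original covariates; H_0j is rejected when |b_j| >= t_hat, with t_hat
    calibrated by counting how many fake coefficients exceed the same cut-off.
    """
    n, p = X.shape
    X_fake = rng.normal(scale=1.0 / np.sqrt(n), size=(n, r))
    fit = Lasso(alpha=lam / n, fit_intercept=False, max_iter=50000).fit(np.hstack([X, X_fake]), y)
    b = np.abs(fit.coef_)
    orig, fake = b[:p], b[p:]
    t_hat = np.inf
    for t in np.sort(orig[orig > 0]):
        # estimated FDP: fakes above t, rescaled from r control columns to p candidates
        fdp_hat = (p / r) * (1 + np.sum(fake >= t)) / max(np.sum(orig >= t), 1)
        if fdp_hat <= q:
            t_hat = t
            break
    return np.where(orig >= t_hat)[0]
```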
We now proceed to obtaining power predictions for Model-X knockoffs under the asymptotic setting of Section 2.1. The main technical challenge is to validate that the theory from [4] carries over to the knockoff setup involving W -statistics. To overcome this technical challenge, we develop a "local" version of AMP theory that applies to the broad class of knockoff-calibrated selection procedures in (2.15). More specifically, as compared to (2.6), in order to analyze the knockoffs selection procedure (2.15) we need to study the triples ( β j (γ, λ), β j , β p+j (γ, λ)) rather than the pairs ( β j (γ, λ), β j ). Theorem 1 below asserts that, for our asymptotic FDP and TPP calculations, we can treat which is an extension of (2.6). Above, Z and Z are independent N (0, 1) random variables that are furthermore independent of β j , the operator η θ,γ with threshold level θ > 0 is defined as and (α , τ ) are the unique solution to the equation [25] (3.5) In the special case γ = 1, where the bridge estimator is just the Lasso estimator, the operator η θ,γ reduces to the soft-thresholding operator η θ,1 (u) = η θ (u) ≡ sgn(x) · (|u| − θ) + , and (3.5) becomes (3.6) The following theorem formalizes the notion in which (3.3) holds, and is our main theoretical result. Theorem 1. Let f be any bounded continuous function defined on R 3 . Then, we have in probability. Here (α , τ ) are the unique solution to (3.5), and Z and Z are two independent standard normal random variables, which are further independent of Π. For the special Lasso case, γ = 1, we actually have the stronger result below. Proposition 3.2. Under the assumptions of Theorem 1, the Lasso estimator β(λ) satisfies in probability, where (α , τ ) are the unique solution to (3.6). Moreover, the convergence in probability is uniform over λ in any compact set of (0, ∞). The proofs of Theorem 1 and Proposition 3.2 are deferred to Appendix A.2. For the Lasso case γ = 1, a similar result was obtained in a simultaneous and independent work by [23, see their Theorem 6 and the corresponding analysis]. There are, however, some differences. First, our Theorem 1, of which the first assertion in Proposition 3.2 is a direct consequence, applies more generally to any bridge estimator with γ ≥ 1. Second, the techniques we use in the proof are quite different, and these allow us to establish uniform convergence in λ for the Lasso case in second assertion of Proposition 3.2. The uniform convergence is essential for our results to apply when selecting λ by cross-validation, as we recommend in Section 4. Proposition 3.2 is also closely related to Corollary 1 in [3], which can be viewed as a "marginal" version of the above assertion: in the Model-X knockoffs context, [3] implies the convergence of a sum over all pairs i, j such that 1 ≤ i, j ≤ 2p, as opposed to "diagonal" pairs i, p+i for 1 ≤ i ≤ p in Proposition 3.2 above. Corollary 1 in [3] then follows by making use of its conditional (hence stronger) counterpart, Proposition 3.2. More generally, just as Corollary 1 in [3] applies to a tuple of any number of indices, Proposition 3.2 can be readily extended to multiple knockoffs (where several knockoff copies are generated for each original variable). This extension would enable a theoretical comparison similar to that presented in the current paper except with multiple knockoffs, and we leave this interesting direction for future research. 
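Since several displays in this part of the text were lost in extraction, the following sketch indicates how, for a two-point prior, the limiting FDP, TPP and knockoff FDP estimate of the LCD statistic can be evaluated by Monte Carlo directly from the Theorem 1 approximation W ≈ |η_{α'τ'}(Π + τ'Z)| − |τ'η_{α'}(Z')|, with (α', τ') the state-evolution parameters of the augmented design; the final function mimics the level-q calibration at the threshold t̂∞(q). The helper names and the threshold grid are ours.

```python
import numpy as np

def eta(x, theta):
    """Soft-thresholding: sgn(x) * (|x| - theta)_+ ."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def limiting_W_samples(alpha_p, tau_p, M, n_mc=400_000, seed=0):
    """Monte Carlo draws of the limiting W statistic, separately for nulls (Pi = 0)
    and signals (Pi = M), using the Theorem 1 triple."""
    rng = np.random.default_rng(seed)
    z, z2 = rng.normal(size=n_mc), rng.normal(size=n_mc)
    ko = np.abs(tau_p * eta(z2, alpha_p))                    # knockoff coefficient magnitude
    w_null = np.abs(eta(tau_p * z, alpha_p * tau_p)) - ko    # W given Pi = 0
    w_sig = np.abs(eta(M + tau_p * z, alpha_p * tau_p)) - ko # W given Pi = M
    return w_null, w_sig

def lcd_limits(t, w_null, w_sig, eps):
    """Limiting fdp, tpp and knockoff FDP estimate at a fixed threshold t."""
    p_null, p_sig = np.mean(w_null > t), np.mean(w_sig > t)
    den = (1 - eps) * p_null + eps * p_sig
    fdp = (1 - eps) * p_null / den if den > 0 else 0.0
    tpp = p_sig
    # the knockoff estimate counts W_j <= -t in the numerator
    num_hat = (1 - eps) * np.mean(w_null < -t) + eps * np.mean(w_sig < -t)
    fdp_hat = num_hat / den if den > 0 else np.inf
    return fdp, tpp, fdp_hat

def lcd_knockoff_power(q, alpha_p, tau_p, eps, M):
    """Asymptotic TPP of the level-q LCD-knockoffs procedure: the TPP at the smallest
    threshold where the limiting knockoff FDP estimate drops below q."""
    w_null, w_sig = limiting_W_samples(alpha_p, tau_p, M)
    for t in np.linspace(1e-3, np.quantile(np.abs(w_sig), 0.999), 500):
        fdp, tpp, fdp_hat = lcd_limits(t, w_null, w_sig, eps)
        if fdp_hat <= q:
            return tpp
    return 0.0
```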
Theorem 1 allows us to calculate the limits of TPP(t) and FDP(t) for the selection path of the W -statistic (2.14) for any γ ≥ 1, which includes the LCD statistic as a special case. Corollary 3.3. For fixed γ ≥ 1 and λ > 0, consider the variable selection procedure given by (2.15). Then the asymptotic FDP and TPP at any fixed threshold t > 0 are, respectively, , Moreover, Theorem 1 allows us to calculate the limit of the corresponding knockoffs estimate of the FDP. . (3.8) Remark 3.5. See the proofs of these two corollaries in Appendix A.2. It can be shown that the convergence is uniform in bounded t. In particular, from Corollaries 3.4 and 3.3 we can calculate tpp LCD t ∞ (q) , the asymptotic TPP achievable by the level-q LCD-knockoffs procedure: setting γ = 1, for a given q first computê t ∞ as the value of t > 0 such that fdp(t) = q, and then plug it into the second equation in (3.7) to find tpp LCD t ∞ . It is easy to verify the relationship fdp so that fdp LCD (t) overestimates fdp LCD (t), the actual asymptotic FDP. However, the difference between the two is typically very small: because the random variable |η α τ ,1 (Π + τ Z)| − |τ η α ,1 (Z )| is designed to tend to large values when Π = 0, the second term on the right hand side of (3.9) is typically much smaller than , for example it converges to zero when the magnitude of nonzero elements of β increases. In other words, using the observable random variable FDP(t) in (2.16) instead of FDP(t), does not make LCD-knockoffs overly conservative. We note that the conservativeness was a nuisance in the (alternative) "counting" knockoffs implementation in [24], where an estimate of that requires an extra (user-specified) tuning parameter, was incorporated to mitigate the effect. Here, conveniently, the use of W -statistics obviates the need to estimate . Figure 3 shows knockoffs power, tpp LCD t ∞ (q) , against "oracle" power ,tpp LC (t ∞ (q)), when the nominal FDR value q varies. We took σ = 1, and Π to be a mixture of mass 0.9 at zero and mass 0.1 at M = 4, while δ varies in the four panels. The tuning parameter λ is selected separately for each procedure: for the oracle, this is the optimal λ obtained by minimizing the value of τ ; for knockoffs, we use the limit λ cv of the (10-fold) cross-validation estimate, see Section 4. We can see that for δ ≥ 1, the powers obtained by knockoffs and the oracle are very similar for any q. When δ is smaller, the loss of power is more pronounced. This is mainly because the Lasso estimate itself has larger variance τ for small values of δ; see the left panel of Figure 5. However, for all considered values of δ the relative difference decreases with the power of the oracle (i.e., when q or the magnitude of nonzero elements of β increases). The dotted lines in Figure 4 are obtained by implementing "counting" knockoffs instead of Model-X knockoffs (r = 1); the power curve is slightly better as compared to Model-X knockoffs because the importance statistic itself is used for each feature rather than the W -statistic. Figure 4). To avoid crowding the figure, we plot only the paths for Model-X knockoffs (and not for counting knockoffs). We can see a good agreement between the empirical results and the theory. We conclude this section with Theorem 2 below, that applies to the Lasso case γ = 1 and formalizes the notion that the LCD-knockoffs procedure allows to break through the FDP-TPP diagram presented in [15]. 
Specifically, the following result says that for any nominal FDR level q > 0 that is not too close to 1, if the signal is strong enough then the LCD-knocknoffs procedure has asymptotic power arbitrarily close to one, as long as the signal sparsity satisfies where * (δ) is a point on the Donoho-Tanner transition curve [11]. Definition 3.6. A sequence of random variables Π m is said to be -sparse and growing, if P(Π m = 0) = for all m, and P(|Π m | > M |Π m = 0) → 1 as m → ∞ for every M > 0. Theorem 2. Fix q > 0 and denote by TPP(λ, Π, q) the true positive proportion of the level-q LCD-knockoffs procedure that uses parameter λ. Moreover, fix such that (3.10) holds. Then for any sequence {Π m } that is -sparse and growing, it holds that for any fixed 0 < λ 1 < λ 2 and any ν > 0, there exist m and n (m) such that P inf Remark 3.7. The proof, which can be found in Appendix A.3, shows that this theorem continues to hold for the bridge-based knockoffs procedure that uses W j = | β j (γ, λ)| − | β p+j (γ, λ)| with γ > 1. For this extension, the nominal level q can take any value between 0 and 1 since the Donoho-Tanner phase transition does not occur for (2.13) when γ > 1 [25]. Tuning by cross-validation The choice of λ in the level-q LCD-knockoffs procedure is critical. Unlike in the orthogonal X situation, the value of λ substantially affects the ranking of the variables, because λ controls the shrinkage of the Lasso estimates. The advantage of the asymptotic theory is that it provides an analytic form for the relationship between λ and the parameter τ , so we can use this to characterize a good choice of λ by its consequences on the value of τ . Figure 5 below illustrates the dependence of τ on λ for δ = 1 and different values of . Here we can see clearly that the relationship is not monotone and that the choice λ ≈ 0 (i.e., recovering the the Basis Pursuit criterion) as well as excessively large λ would result in an inflation of the variance of estimates. Turning to the formal analysis, let tpp LC (λ) ≡ tpp LC (t(λ); λ), where t(λ) is the smallest positive value such that fdp LC (t(λ); λ) ≤ q. Then Theorem 3.2 in [22] asserts that, for any q, In words, the value of λ minimizing the asymptotic estimation mean squared error (MSE) is also the optimal λ for the testing problem. [22] then observe that minimizing the asymptotic MSE, E(η ατ (Π + τ Z) − Π) 2 , is in turn equivalent to minimizing τ in (2.7) over λ. Because the minimizer of τ depends on Π and σ, [22] propose to estimate λ in practice by minimizing a consistent estimate of τ . If the only difference between knockoffs and the oracle were the fact that the augmented Xmatrix is used instead of the original X-matrix, we would be able to conclude immediately that the optimal tuning parameter for LCD-knockoffs is the value of λ minimizing τ in (3.6) instead of (2.7). This is, however, not the only difference, first because knockoffs use W -statistics instead of β, and secondly because knockoffs utilize an estimate of FDP instead of the actual FDP in setting the threshold. Admittedly, the exact value of λ that is optimal for knockoffs no longer has such a simple characterization, but we can still advocate the λ minimizing τ in (3.6) as a good approximation, and this is our target. Figure 6 demonstrates that this approximation is indeed a good one. The value of λ minimizing τ in (3.6) again depends on the unknown Π and σ. 
To estimate it, instead of relying on a consistent estimator of τ as in [22], we propose to use cross-validation on the augmented design. This takes advantage of the fact that when the covariates are i.i.d., minimizing the estimation error is equivalent to minimizing the prediction error. Hence, from now on we writê λ cv for the K-fold cross-validation estimate of λ operating on the augmented X-matrix. We can again predict the exact limit ofλ cv as follows. where we note that minimizing τ in (3.6) for δ, , Π * , is equivalent to minimizing τ in (2.7) for δ/2, /2, Π * . How to obtain λ cv is not immediate from Lemma 4.1: for any value of λ, τ is itself given implicitly as the solution to an equation system in two variables, which then needs to be minimized over λ. We can nevertheless define a simple procedure for solving this minimization problem, described in Appendix B and ultimately yielding the system of equations (4.2) We call (4.2) the CV-AMP equations. To obtain λ cv , we solve the CV-AMP equations, and then use the second equation of (3.6) with (K − 1)δ/K substituted for δ and with τ cv substituted for τ . Figure 6 shows power against λ for the LCD-knockoffs procedure applied at level q = 0.1. For reference, horizontal lines indicate theoretical power for the knockoffs procedure utilizing the Lassomax statistic (2.4) (computed on the augmented matrix). The latter is obtained from [24] and uses "counting" knockoffs with the true underlying value of . For LCD, the theoretical predictions are consistent with the simulation results (marker overlays), and demonstrate how drastically power can vary with the choice of the tuning parameter. In particular, bad choices of λ can lead to smaller power than even the knockoffs version of Lasso (1.3). Vertical solid lines indicate the value of λ cv , and they indeed seem close to optimal, i.e., close to the value that maximizes power. The broken vertical lines represent the simulation average for the 10-fold cross-validation λ. In accordance with the right panel of Figure 5, we can see that the optimal value of λ decreases when increases. The boxplots in Figure 7 show sampling variability in 1000 simulation runs for the crossvalidation estimate of λ and for the estimate of [22]. In all panels we used n = 1000, p = 1500, and Π has mass 1 − = 0.9 at zero and mass = 0.1 at M = 5. The red horizontal line indicates λ cv for δ = n/p = 2/3. Sampling variability for cross-validation appears smaller. Another (unrelated) advantage of cross-validation is that we have an explicit characterization of λ cv through the CV-AMP equations, whereas the analog for the method of [22] is given implicitly as a minimizer of a certain estimate. Figure 6: Power versus λ for the level-q LCD-knockoffs procedure, q = 0.1. Light blue curves are theoretical predictions for TPP, marker overlays are averages over N = 100 simulation runs with σ = 1, n = p = 5000, and Π has mass 1 − at zero and mass at M = 5 ( varies between panels). Horizontal red lines indicate predicted TPP for the ("counting") knockoffs procedure using the Lasso-max statistic (2.4). The solid vertical line is the theoretical limit λ cv , and the broken vertical line is the simulation average, for the cross-validation estimate of λ with K = 10 folds. Figure 7: Sampling variability in estimating λ: CV versus the method of [22]. Boxplots are based on 1000 simulation runs. 
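In practice this recommendation amounts to running ordinary K-fold cross-validated Lasso on the augmented matrix [X, X̃]. A minimal sketch (with sklearn's alpha rescaled to the λ scale used in the text) is given below.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def cv_lambda_augmented(X, y, rng, n_folds=10):
    """Estimate the Lasso tuning parameter by K-fold cross-validation on the
    augmented design [X, X_tilde], as recommended in Section 4.

    Returns lambda on the scale of the objective (1/2)||y - Xb||^2 + lambda*||b||_1;
    sklearn's alpha corresponds to lambda / n.
    """
    n, p = X.shape
    X_ko = rng.normal(scale=1.0 / np.sqrt(n), size=(n, p))
    cv = LassoCV(cv=n_folds, fit_intercept=False, n_alphas=100, max_iter=50000)
    cv.fit(np.hstack([X, X_ko]), y)
    return n * cv.alpha_
```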
Extension to Type S errors The classical paradigm, which was also adopted here, regards a predictor as important if the corresponding β j = 0, and aims at controlling a Type I error rate. In practice, however, it is almost always the case that all β j are different from zero to some decimal, in which case the Type I error trivially vanishes. In the more general context of multiple comparisons, this has lead to adamant objection to focusing on testing of point null hypotheses [18,17]. A reasonable way out is to consider a predictor as important only if |β j | ≥ ∆ for some ∆ > 0, but this has the disadvantage that the definition depends on ∆. Alternatively, Tukey [18] advocated procedures that classify the sign of β j "with confidence", that is, declare β j > 0 or β j < 0 for as many j as possible while keeping small some rate of incorrect decisions on the sign. Incorrectly declaring that β j < 0 when in fact β j > 0, or that β j > 0 when in fact β j < 0, is commonly referred to as a Type S [13] or Type III error. For hypothesis testing problems of the type considered in this paper, it is natural to ask what consequences supplementing each rejection with a directional decision has on the error rate. As in [1], suppose that for each 'rejection' j ∈ S we further provide an estimate of the sign of β j . We define the false sign proportion to be FSP ≡ |{j ∈ S : sign(β j ) = sign j }| | S| , where sign(x) is 1,-1 or 0 according as x > 0, x < 0 or x = 0. In particular, we can see that FDP ≤ FSP, since any false discovery (i.e., selecting j ∈ S when in fact β j = 0) leads to a sign error, sign(β j ) = 0 = sign j . The false sign proportion may often be much higher than the false discovery proportionindeed, as demonstrated in [13], in a low signal-to-noise regime it is easy for a false discovery rate controlling procedure to have very high false sign rate. In the sign-classification framework, we can apply our results to obtain exact asymptotic predictions of the FSP and a corresponding notion of power, for the knockoff procedures in our setting. Thus, write the nonzero component of the distribution of β j as where π + + π − = 1 and where P(Π + > 0) = P(Π − < 0) = 1. Now suppose that for the knockoffs version of the thresholded bridge selection procedure (1.5), we further estimate sign j = sign( β j ) for each j ∈ S, where β j = β(γ, λ). Taking the Lasso case γ = 1 for example, we can apply Theorem 1 to conclude that the procedure that supplements LCD knockoffs with the sign estimates has fsp(t) ≡ lim FSP(t) = = π + P(η α τ ,1 (Π + + τ Z) < −|τ η α ,1 (Z )| − t) . (5.1) To quantify the power of a procedure in the sign problem, it may at first seem natural to consider the ratio of the number of correctly classified signs divided by the total number of nonzero β j 's. But, in a regime where there are no exact zeros among the coefficients, this definition is not useful because the denominator will equal p, the total number of coefficients, although most (usually, almost all) of the coefficients are still too close to zero in magnitude to be picked up by the selection procedure. To overcome this difficulty, we consider a different model, where the distribution of β j is again a mixture between signals (the "slab") and nulls (the "spike"), but now the null distribution is concentrated near zero instead of being a point mass at zero. 
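As a small illustration of the definition just given, the empirical FSP of a selection that is supplemented with sign decisions can be computed as follows; the helper is ours, and any selection of a truly zero coefficient is counted as a sign error, so that FSP is at least FDP.

```python
import numpy as np

def false_sign_proportion(selected, sign_hat, beta):
    """Empirical FSP: among selected coordinates, the fraction whose declared sign
    disagrees with sign(beta_j). Selecting a truly zero coefficient always counts
    as a sign error, so FSP >= FDP by construction."""
    selected = np.asarray(selected, dtype=bool)
    n_sel = selected.sum()
    if n_sel == 0:
        return 0.0
    errors = selected & (np.sign(sign_hat) != np.sign(beta))
    return errors.sum() / n_sel
```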
In particular, let S j = 0 and S j = 1 indicate if β j is considered to be a "null" or "nonnull" coefficient, respectively, and assume the following distribution: where, consistent with Tukey's viewpoint mentioned earlier, we will assume that neither of Π 0 , Π 1 has point mass at zero, but Π 0 still represents a "spike" and is concentrated near zero and Π 1 represents a "slab" component of the mixture. Notice that this "two group" model entails as the distribution of β j , analogous to (2.1). The true sign proportion is then defined as TSP ≡ |{j ∈ S 1 : sign(β j ) = sign j }| |S 1 | , Appealing again to Theorem 1, the limit of TSP for the knockoffs sign-classification version of (1.4) can be calculated as 0), and where Π + 1 is the conditional distribution of Π 1 given Π 1 > 0 and Π − 1 is the conditional distribution of Π 1 given Π 1 < 0. We turn to discussing asymptotic FSP control under the assumption that Π has no point mass at zero. Recall that by the definition we use for FSP, which subsumes incorrect rejections of zero coefficients, FSP is formally at least as large as FDP in any setting-but, in this specific case, we will actually have FDP≡ 0 since there are no exact zeros. Nonetheless, we will now show that in our new model (5.2) that replaces exact zeros with approximate zeros, the Model X knockoffs at the nominal FDP level q can control FSP at the level q/2. The factor of 2 is due to the fact that, in this new model, for any β j we can only err in one direction, while in the idealized model where nulls are exactly zero, estimating either a positive or negative value forβ j results in an error. To show this formally, we will compare two different scenarios: first, we will consider the false sign rate under the model where there are no exact zeros as in (5.2), and second, we will consider the false discovery rate under the model where now there are exact zeros as in (2.1). To avoid confusion between these two distributions, we will write fsp Π (1) (t) and fdp Π (2) (t) for these two quantities of interest, respectively, to emphasize that we are working with two different distributions. Nevertheless, if the "spike" distribution Π 0 is concentrated extremely close to zero, then the two resulting data distributions are essentially indistinguishable, which is why we can compare the two. Formally, below we will show that which is approximately 0.5fdp Π (2) (t) if ≈ 0. To make the argument clearer, for the rest of this section we denote β ≡ Π,β ≡ η α τ ,1 (Π + τ Z), andβ ≡ τ η α ,1 (Z). Then where in the last step we observe that P Π (1) (|β| − |β| > t) ≤ P Π (2) (|β| − |β| > t) for any Π 0 . Notice that, since t was arbitrary, this holds also for the asymptotic knockoff thresholdt ∞ Π (1) . Moreover, observe that by continuity of the formula fort ∞ ,t ∞ Π (1) →t ∞ Π (2) when the "null" distribution Π 0 converges to the point mass at zero. Thus, for < 0.5 and Π 0 sufficiently concenrated around 0 it holds ≤ q , which allows to conclude that knockoffs allow for the asymptotic FSP control under Π (1) . In fact, when Π 0 is sufficiently concentrated around zero, and Π 1 sufficiently dispersed, the formulas above will imply fsp Π (1) (t) ≈ 0.5 · fdp Π (2) (t). Figure 8: Asymptotic predictions for FSP against TSP in a setting that parallels that of Figure 2. The curved as explained in the main text. A Proofs We prove Theorems 1 and 2 in this appendix. 
The proofs rely heavily on some extensions of AMP theory and approximation results for continuous functions, which we first present in Section A.1 and the beginning of Section A.2, respectively. A.1 Local AMP lemmas Following the setting of AMP theory as specified earlier in Section 2, we present some extensions of AMP theory for the Lasso method. We call these results local AMP lemmas because these results apply to a subset of the coordinates of the coefficients, unlike the existing AMP results which apply to the entire set of coordinates. Throughout this appendix, we use P −→ to denote convergence in probability. For simplicity, we also denote β j,γ ≡ β j (γ, λ). Recall that α , τ are the unique solutions to the set of equations (3.5). Lemma A.1. Let g : R 2 → R and h : R → R be two bounded continuous functions. We have Lemma A.1 is the main contribution of this subsection. Its proof relies on the following three lemmas and we defer the proofs of these preparatory lemmas later in this subsection. Lemma A.2. Let f : R → R be any bounded continuous function. We have Lemma A.3. Let f : R 2 → R be any bounded bivariate continuous function. We have For any numbers A 1 , . . . , A p and B 1 , . . . , B p , denote by A and B their respective means. Let π be drawn from all permutations of 1, . . . , p uniformly at random. Then, we have Proof of Lemma A.1. By Lemma A.2, we have and Lemma A.3 gives Now, let us consider the distribution of conditional on g( β 1,γ , β 1 ), . . . , g( β p,γ , β p ) and the empirical distribution of {h( β p+i,γ )} p i=1 . This σalgebra is denoted as F. Note that knowing the empirical distribution of {h( β p+i,γ )} p i=1 is the same as knowing all values of h( β p+i,γ ) except for the indices. By symmetry, the conditional distribution of (A.1) is the same as that of 1 p where (π(1), . . . , π(p)) is a permutation of 1, . . . , p drawn uniformly at random. Then, first we know which converges to the constant E g(η α τ 2−γ ,γ (Π + τ Z), Π) E h(τ η α ,γ (Z)) . Recognizing the boundedness of gh/p, which results from the boundedness of the terms of this sum, a consequence of the above implies Moreover, due to the boundedness of 1 Now, we consider the variance and write f ∞ for the supremum of a function f . To begin, we invoke Lemma A.4, from which we get Thus, from (A.2) and (A.4) we get Finally, (A.3) and (A.5) together reveal that as p → ∞. In the remainder of this subsection, we complete the proof of Lemmas A.2, A.3, and A.4. In the proof of Lemma A.2, we need the following preparatory lemma. For any p let S p be a random subset of {1, . . . , m p } of cardinality l p , drawn independently of the ξ pi s. Then by exchangeability, lp i=1 ξ pi is equal in distribution to i∈Sp ξ pi . Therefore we equivalently need to show that lim p→∞ P i∈Sp ξ pi l p − c > ι = 0. We trivially have The assumption Next we bound the remaining term. Recall that the ξ pi 's are bounded, so we can assume ξ pi ∈ [−B, B] for some finite B > 0. We then have Var i∈Sp ξ pi l p ξ p1 , . . . , ξ pmp ≤ 4B 2 l p , since sampling uniformly with replacement always has variance no larger than sampling uniformly without replacement, and the ξ pi 's are bounded. Therefore, P i∈Sp ξ pi l p − ξ p1 + · · · + ξ pmp m p > ι/2 ξ p1 , . . . , ξ pmp ≤ 4B 2 /l p ι 2 /4 almost surely. Marginalizing, which tends to zero as p → ∞ since ι is fixed and l p → ∞. This completes the proof. Now, we are ready to prove Lemma A.2. Proof of Lemma A.2. It suffices to prove the lemma for any bounded Lipschitz continuous functions. 
To see this sufficiency, assume for the moment that if g is bounded and Lipschitz continuous. Let f be a continuous function that satisfies |f (x)| ≤ M for all x. We show below that the value 0 otherwise. This gives Now we will take a → 0 on the right-hand side of (A. 10). Recognizing that if Π = 0 and otherwise f a η α τ 2−γ ,γ ( Π + τ Z), Π → 0 as a → 0, the boundedness of f a allows us to use Lebesgue's dominated convergence theorem to obtain Turning to the left-hand side of (A.10), we use the fact that for any c 1 > 0, one can find c 2 > 0 such that 1 2p with probability approaching one for each a < c 2 . To see this, note that 1 2p of which the expectation satisfies since Π * places no mass at zero, by definition. This inequality in conjunction with the Markov inequality reveals that (A.12) holds if a is sufficiently small. Proof of Lemma A.3. As with Lemma A.2, it is sufficient to prove the present lemma for any bounded Lipschitz continuous functions. By Theorem 1.5 of [4], we get Note that the right-hand side can be written as On the other hand, from Lemma A.2 we know This completes the proof. Proof of Lemma A.4. We have First, we get where B = (B 1 + · · · + B p )/p, and Thus, we get A.2 Proofs of Theorem 1 and Proposition 3.2 We first prove Theorem 1 with a fixed λ, followed by a discussion showing that the theorem holds uniformly over λ in a compact set for the Lasso, thereby proving Proposition 3.2. In addition to Lemma A.1, the proof relies on Lemmas A.6 and A.7, which we state below. Let C(Ω, R) denote the class of all real-valued continuous functions defined on a compact Hausdorff space Ω. Lemma A.6. Let Ω 1 and Ω 2 be two compact Hausdorff spaces and f : Ω 1 ×Ω 2 → R be a continuous function, then for every υ > 0 there exist a positive integer m and continuous functions g 1 , . . . , g m on Ω 1 and continuous functions h 1 , . . . , h m on Ω 2 such that Lemma A.6 serves as an approximation tool for our proof. For information, this lemma follows from the Stone-Weierstrass theorem (see Corollary 11.6 in [10]). Proof of Lemma A.7. Note that we have It follows from [25] that which tends to 0 as A → ∞. Second, and third, we obtain Last, note that these fractions are all bounded, so Lebesgue's dominated convergence theorem can be applied here. Now we turn to the proof of Theorem 1. Proof of Theorem 1. Denote by M an upper bound of f in absolute value and let R > 0 be a number that will later tend to infinity. It is easy to see that we can construct a continuous functioñ f defined on R 3 such that (1) f (x) ≡f (x) on B R ≡ {x ∈ R 3 : x 2 ≤ R}, (2) |f (x)| ≤ M for all x, and (3) lim x →∞f (x) exists. This can be done, for example, by letting From the three properties off , it is easy to see that this is a continuous function on the product of two compact Hausdorff spaces, R 2 ∪ {∞} and R ∪ {∞}. From Lemma A.6, therefore, we know that there exist continuous functions g 1 , . . . , g m on R 2 ∪ {∞} and h 1 , . . . , h m on R ∪ {∞} such that for any small constant υ > 0. Since g l and h l are continuous on the compactification of their domains for each l, the two functions must be continuous and bounded on R 2 and R, respectively. Thus, we get Taken together, (A.18) and (A. 19) give Next, we consider Our aim is to show that both displays are small. For the first display, note that (A.23) Likewise, we show below that (A.22) can be made arbitrarily small in absolute value. 
To this end, note that < 3υ + 2M #{1 ≤ i ≤ p : max(| β i,γ |, |β i |, | β p+i,r |) > A} p + 2M P max(|η α τ 2−γ ,γ (Π + τ Z)|, |Π|, |τ η α ,γ (Z )|) > A happens with probability tending to one as p → ∞. Taking A ≡ R/ √ 3 → ∞ followed by letting υ → 0, Lemma A.7 shows that 3υ+ 2M #{1 ≤ i ≤ p : max(| β i,γ |, |β i |, | β p+i,r |) > A} p +2M P max(|η α τ 2−γ ,γ (Π + τ Z)|, |Π|, |τ η α ,γ (Z )|) > A can be made arbitrarily small. This reveals that thereby completing the proof. This proves the first identity in Corollary 3.3. The second identify of Corollary 3.3 and Corollary 3.4 can be proved similarly. The remaining part of this subsection is devoted to showing that Theorem 1 holds uniformly over all λ in a compact interval of (0, ∞) when γ = 1. As with the proof of Lemma A.2, we can assume that f is bounded and L-Lipschitz continuous. The uniformity extension is accomplished largely by using Lemma B.2 from [15] (see also [21]). A.3 Proof of Theorem 2 The proof of Theorem 2 presented here applies more generally to the Model-X knockoffs procedure that uses, instead of the LCD statistic, any other statistic of the form W j (λ) = w( β j (λ), β j+p (λ)), where the link function w satisfies w(u, v) = −w(v, u) and w(x, c) → ∞ as |x| → ∞ for any fixed c; we call such w function faithful in what follows. From [15] we know that Lasso cannot obtain full power unless (3.10) holds, hence we consider only the case < 2 * (δ/2). For any such , it can be shown that the expressions in Equations (3.2) and (3.9) converge to (1 − )/(1 + ) when t → 0 and Π m is growing as in the assumption. We consider first the case q < (1 − )/(1 + ). When the prior distribution is Π m , denote by α m , τ m the solution to (3.6) and lett m be defined as above. Recognizing the assumption of a growing Π m in Definition 3.6, one can show that α m , τ m converge to α ∞ , τ ∞ which are the solution to That is, α m → α ∞ and τ m → τ ∞ as m → ∞. As a consequence,t m tends tot ∞ as m → ∞ as well, where the existence oft ∞ is ensured by the fact that 0 < q < 1− 1+ . Following the proof of Lemma A.1 in [15], we can show that TPP(λ, Π m , q) converges to tpp(λ, Π m , q) ≡ P(w(η α m τ m (Π m + τ m Z), τ m η α m (Z )) ≥t m |Π m = 0) in probability uniformly over λ 1 ≤ λ ≤ λ 2 as n → ∞, by making use of Theorem 1. Having demonstrated earlier that α m and τ m converge to constants, the faithfulness of w and the growing condition of Π m reveal that tpp(λ, Π m , q) → 1. B Derivation of the CV-AMP equations Denote the minimum value for τ by τ cv ≡ min λ τ (λ; (K − 1)δ/K), and let α cv be the corresponding value for α (so α cv is the solution in α to the first equation in (3.6) when τ replaced by τ cv ). Note that we can characterize (α cv , τ cv ) by requiring that for 0 < t < τ cv , does not have a solution in t for α > α min . Therefore, on defining we are looking to solve d f (u) d u u=αcv = 0. Imposing now (B.1), we get the equation system which simplifies to (4.2).
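As a purely numerical illustration of the quantities appearing in this derivation, the following Python sketch solves a Lasso state-evolution system of the form used in (3.5)-(3.6) by Monte Carlo and scans the regularization level to locate the value minimizing τ, in the spirit of the definition of τ_cv above. The two-point prior, noise level, sampling ratio δ and grid choices are arbitrary assumptions made only for illustration; for the K-fold cross-validation variant, δ would be replaced by (K−1)δ/K as in the text.

```python
# Hypothetical illustration: Monte Carlo solution of a Lasso state-evolution
# system of the assumed form
#   tau^2   = sigma^2 + E[(eta(Pi + tau Z; alpha tau) - Pi)^2] / delta,
#   lambda  = alpha tau (1 - P(|Pi + tau Z| > alpha tau) / delta),
# followed by a scan for the lambda that minimizes tau (cf. tau_cv).
import numpy as np

rng = np.random.default_rng(0)
delta, sigma, eps = 0.4, 0.5, 0.2            # assumed n/p ratio, noise level, sparsity
Pi = rng.choice([0.0, 3.0], size=50_000, p=[1 - eps, eps])   # two-point prior sample
Z = rng.standard_normal(Pi.size)

def soft(x, t):                               # soft-thresholding eta(x; t)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def state_evolution(alpha):
    """Solve the tau fixed point for a given alpha and return (lambda, tau)."""
    tau = 1.0
    for _ in range(200):
        mse = np.mean((soft(Pi + tau * Z, alpha * tau) - Pi) ** 2)
        tau_new = np.sqrt(sigma ** 2 + mse / delta)
        if not np.isfinite(tau_new) or tau_new > 1e6:
            return np.nan, np.inf             # no valid fixed point for this alpha
        if abs(tau_new - tau) < 1e-9:
            break
        tau = tau_new
    lam = alpha * tau * (1.0 - np.mean(np.abs(Pi + tau * Z) > alpha * tau) / delta)
    return lam, tau

pairs = [state_evolution(a) for a in np.linspace(0.8, 4.0, 80)]
lam_cv, tau_cv = min((p for p in pairs if np.isfinite(p[1]) and p[0] > 0),
                     key=lambda p: p[1])
print(f"tau is minimized at lambda ~ {lam_cv:.3f}, with tau_cv ~ {tau_cv:.3f}")
```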
2020-07-31T01:00:47.318Z
2020-07-30T00:00:00.000
{ "year": 2020, "sha1": "02994b52e11a530ea9fdbccad7ea1143d4aa3c56", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "02994b52e11a530ea9fdbccad7ea1143d4aa3c56", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
262574728
pes2o/s2orc
v3-fos-license
SLOBOZHANSKYI HERALD OF SCIENCE AND SPORT
Original article. Copyright © 2023 by the authors. DOI: https://doi.org/10.15391/snsv.2023-3.006

Among coaches and scientists, one of the main topics is the improvement of the system for training elite athletes. Traditional studies in basketball have been devoted to the connection between research and the long-term training process. Training and competitive activity in basketball involves neurodynamic, psychomotor, cognitive and psychoemotional characteristics. Modern research is devoted to the characteristics of the functional states of athletes in various training and competitive conditions. However, among modern studies of team sports, there are no data on the psychophysiological states of elite athletes for different types of monitoring.

Purpose: Development of a system of psychophysiological support for elite basketball players as a relevant scientific direction in the theory and methodology of sports training.

Materials and methods: Thirteen elite basketball players aged 19-23 years (sports experience of more than 8 years) were examined. The sensory-motor response, mobility and balance of nervous processes, verbal memory, operative thinking and general intelligence were studied. All tests were part of the computer diagnostic complex "Multipsychometer 05". The battery also included tests of the current psychical state (Lusher color test), field independence (Stroop test), motivation (Mehrabian test) and aggressiveness (Bass-Darki test). Factor analysis was used to obtain the main components of the psychophysiological characteristics of basketball players.

Results: The factor structure of the psychophysiological state of elite basketball players included 4 factors: neurodynamics, cognitive resources, energy-informational and emotional-cognitive.

Conclusion: The factor structure of the psychophysiological state of elite basketball players was revealed. The identified factors can be used to correct the training process of elite athletes.

Introduction
Modern basketball is very popular all over the world. Basketball requires athletes to quickly determine the effectiveness of actions in different situations [2,3]. However, athletes must not only improve their technical skills quickly. A basketball player also needs to take into account the active actions of the opponent and look for an adequate response [18,19]. One of the main topics in the training of basketball players is the improvement of the training system to achieve high sports results [5,6,11]. Among the most important factors of the training process is the functional state of athletes. Among its many components, the important properties of the functional state are: physical performance, functional fitness, adaptive capabilities, physical development, level of technical and tactical skill, and psychophysiological state [12,15,16]. Sports results correlate with the effectiveness of an individual approach in the training process of players in team sports [4,12,17]. Analysis of current research has shown that most researchers focus on local characteristics of the functional state of athletes in training and competitive activity [9,13]. However, among modern studies of game sports there are no data on the psychophysiological states of elite athletes for different types of control. The topic of psychophysiological support in game sports is a new and undeveloped area in the system of training athletes [1,5,6,20].
Thus, the development of a system of psychophysiological support for elite basketball players is a highly relevant direction in the theory and methodology of sports training.

Materials and Methods
Written consent was obtained from all athletes before the procedure to use the results of the study for scientific purposes, in accordance with the recommendations of the Ethical Committee for Biomedical Research and the Declaration of Helsinki. Thirteen elite basketball players aged 19-23 years (sports experience of more than 8 years) were examined. The methodological approach included three blocks of tests. The first block, "neurodynamics", was used to assess the sensory-motor reaction and the mobility and balance of nervous processes. The second block, "cognitive", comprised indicators of verbal memory (for words), verbal intelligence (pattern identification) and non-verbal intelligence (Raven's test). The third block, "cognitive-activity", included the following tests: assessment of the current mental state (Lüscher color test), field independence (Stroop test), motivation (Mehrabian test) and aggressiveness (Bass-Darkey test). Statistical analysis was performed using the computer program STATISTICA 10.0; the level of statistical significance was p<0.05.

Results
During the study we used descriptive statistics and correlation analysis and obtained 42 parameters for each person. As the number of correlations between values was limited, we used the factor analysis method (varimax normalization). Factor analysis revealed the informative values from the set of research methods in elite basketball players. From the total number of values, only the informative parameters that make up the structure of the psychophysiological state of elite basketball players were selected (Table 1).

The value of concentricity (Lüscher color test) indicates rest, pleasure and passivity. In basketball players concentricity has a low level of manifestation (Me=4.00 conventional units). According to this indicator the group is not homogeneous. The balance of nervous processes between excitation and inhibition indicates the state of the nervous system and personality behavior. The accuracy of test performance in elite basketball players appears at an average level and indicates the heterogeneity of the group (CV=43.03%).

Functional mobility of neural processes determines the information-processing capabilities of basketball players in limited time. Informative parameters for basketball players are dynamism (CV=10.79%), capacity of the visual analyzer (CV=8.48%) and limited time of decision making (CV=13.83%). The analysis shows an average manifestation of these parameters and homogeneity of the group. The study of the visual-motor reaction showed a low reaction speed (Me=305.60 ms). This indicates that elite basketball players have a low level of speed reaction, but it is sufficient for efficiency.

The study of verbal memory revealed a high level of quality of test performance. The variability of memory performance has a high level of manifestation and homogeneity of the group (CV=18.83%). The results of the pattern identification test in all athletes showed a high level of productivity, accuracy and effectiveness (CV=82.8%). This group is also homogeneous on the verbal test.
For the nonverbal test (Raven's test) the informative values - productivity and effectiveness - have an average level of manifestation. By the value of productivity the group of athletes is homogeneous (CV=19.12%), and by the value of effectiveness heterogeneous (CV=40.17%).

According to the obtained results, auto-aggression and aggressiveness in elite basketball players show an average level of manifestation. This is due to the low level of defense mechanisms in the external environment. The analysis shows that by auto-aggression the group of athletes is heterogeneous (CV=74.75%), and by aggression it is also heterogeneous (CV=21.34%).

We obtained four factors accounting for a total of 62.5% of the variance (Table 2). The first factor, with a contribution to the total variance of 15.1%, combined the characteristics of neurodynamics. The significant parameters of this factor were: accuracy on the nerve balance test (-0.81), the limited time of decision making on the nerve mobility test (-0.79), and the latent time of the visual-motor reaction (-0.72). The second factor (14.9%) has informative values related to cognitive characteristics: productivity (-0.92), accuracy (-0.81) and effectiveness (-0.89) on the pattern identification test.

The third factor (14.0%) was associated with peculiarities of the psychical state and neurodynamics. The main parameters of this factor are: concentricity according to the Lusher color test (0.77), dynamism (-0.76) and the capacity of the visual analyzer (-0.73) according to the functional mobility test. Moreover, these two tests can have opposite directionality of vectors.

The fourth factor (18.6%) indicates the parameters of intelligence and personal aggression. Informative values in this factor are productivity (0.75) and effectiveness (0.83) on Raven's test, auto-aggression (-0.72) and general aggressiveness (-0.74). In this factor, there is an inverse relationship between the properties of intelligence and aggressiveness.

Discussion
It is traditional to use factor analysis to study the competitive activity of basketball players [5,6,16]. Here, however, we use this analysis to develop the structure of the psychophysiological state related to the effectiveness of technical and tactical actions in elite basketball players.

The obtained results indicate the presence of four factors reflecting the psychophysiological state of elite basketball players. The first factor included indicators of neurodynamics: the speed and quality of information processing. In this factor the main personality properties of athletes were observed. These results are consistent with the relevance of the speed and accuracy of information processing to a player's athletic performance [10].

The second factor is related to cognitive resources, which determine the abilities of brain activity in decision making and the effectiveness of tactical and technical actions [4]. The main values of this factor indicate the presence of verbal intelligence in athletes. According to its structure, this factor can be called "cognitive resource". Success in competitive activity is supported not only by functional abilities, but also by motor and sensory activity. One of the main properties supporting competitive performance is brain activity: memory, attention and the speed of mental problem solving [4].

The third factor was named "energy-informational". The main parameter of this factor is the concentricity of psychical energy. This parameter characterizes the necessity of energy accumulation and preservation. Accumulation and preservation of energy provide fast and qualitative perception and information processing of complex visual reactions.

The fourth factor reflects the manifestation of aggression together with non-verbal intelligence. This factor was named "emotional-cognitive". The obtained results are consistent with the views on the relationship between aggression and cognitive activity [10].
An increase in the level of aggression provokes a decrease in the ability to adequately perceive and make decisions in the conditions of real competitive activity.

Conclusions
The factor structure of the psychophysiological state of elite basketball players was revealed. The identified factors can be used to correct the training process of elite athletes.

Table 2. Factor structure links among psychophysiological values of elite basketball players (n=13).
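For readers who wish to reproduce this type of analysis, the following Python sketch illustrates the workflow described in the Methods: coefficient-of-variation screening of the measured parameters, followed by a varimax-rotated four-factor solution. The data, variable names and salience threshold below are placeholders, not the study's measurements.

```python
# Illustrative sketch only: synthetic stand-in data (13 athletes x 42 parameters),
# coefficient-of-variation screening, and a varimax-rotated four-factor solution
# of the kind summarized in Tables 1-2. All names and numbers are placeholders.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(50, 10, size=(13, 42)),
                 columns=[f"param_{i}" for i in range(42)])

# Descriptive step: coefficient of variation (CV, %) used to judge group homogeneity
cv = 100 * X.std() / X.mean()
print(cv.sort_values(ascending=False).head())

# Four-factor solution with varimax rotation, on standardized parameters
Z = StandardScaler().fit_transform(X)
fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0).fit(Z)
loadings = pd.DataFrame(fa.components_.T, index=X.columns,
                        columns=[f"factor_{k + 1}" for k in range(4)])
print(loadings.round(2).head(8))   # in the paper, loadings with |value| >= ~0.7 are reported
```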
2023-10-04T14:55:08.953Z
2023-09-24T00:00:00.000
{ "year": 2023, "sha1": "7e4ba0bdd59d798384b5d512dae9fdf0145e128c", "oa_license": "CCBY", "oa_url": "https://shssjournal.com/index.php/journal/article/download/51/24", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7e4ba0bdd59d798384b5d512dae9fdf0145e128c", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [] }
248665740
pes2o/s2orc
v3-fos-license
Neural Networks with Different Initialization Methods for Depression Detection As a common mental disorder, depression is a leading cause of various diseases worldwide. Early detection and treatment of depression can dramatically promote remission and prevent relapse. However, conventional ways of depression diagnosis require considerable human effort and cause economic burden, while still being prone to misdiagnosis. On the other hand, recent studies report that physical characteristics are major contributors to the diagnosis of depression, which inspires us to mine the internal relationship by neural networks instead of relying on clinical experiences. In this paper, neural networks are constructed to predict depression from physical characteristics. Two initialization methods are examined - Xaiver and Kaiming initialization. Experimental results show that a 3-layers neural network with Kaiming initialization achieves $83\%$ accuracy. Introduction Clinical depression is a psychotic emotional disorder, mainly caused by the individual's difficulty in coping with stressful life events. Depression negatively affects the patients, causes feelings of extreme sadness, and leads to various mental and physical diseases [2]. Depression is recognized as one of the risk factors for suicide [1]. The World Health Organization (WHO) ranks it as the fourth leading cause of disability in the world and predicts that it will become the second leading cause of disability by 2030. As depression becomes common in the general population [4] and a major burden for the healthcare system worldwide [6], effective depression diagnosis and treatment techniques attract increasing attention. However, early diagnosis of depression is clinically challenging. The diagnosis of depression is mostly given by general practitioners. Unfortunately, the modest prevalence of depression in primary care indicates that misidentifications outnumber missed cases [6]. Recently, depression diagnosis based on critical behavioral signals and physiological indicators is gaining growing popularity [7]. The study of objective biological, physiological and behavioral markers not only improves the accuracy of psychological diagnosis and treatment of many mental diseases but also eases the social and economic burdens associated with these diseases [8]. According to recent studies, there is an internal relationship between physical characteristics and the risk of depression. However, the relationship is so complicated that beyond current clinical experiences. Therefore, a model that learns the relationship from physical data to predict the diagnosis results is of great importance. Machine learning is a powerful data analysis tool prevalent in both academia and industry. Studies in recent years have found it feasible and effective in the illness diagnosis. For example, decision tree-based classifiers are employed in the discovery of type II diabetes [12]. Ahmad [11] applies decision tree classifiers to diagnose breast cancer. Recent literature [9] uses neural networks in the field of predicting depression [9] and reports dramatically better results than human efforts. In this paper, we emphasize neural networks for depression diagnosis based on observed behavioral signals and critical physiological indicators. Similar data preprocessing methods as in [9] are adopted. 
Without prior knowledge concerning depression, physical data from 12 individuals (6 men and 6 women) are collected while they are watching videos, including galvanic skin response (GSR), skin temperature (ST), and pupillary dilation (PD). Galvanic skin response exhibits unique patterns that indicate the response of sweat glands to different emotional stimuli. Skin temperature reveals the intensity of acute stress that the individual feels. Pupillary dilation provides signs of changes in mental activity intensity, and pupil size varies with emotional stimuli. In the experiment, we obtained 23 GSR features, 39 PD features, and 23 ST features in total. These features are processed and used to train the neural networks with different layers. We also investigate two typical initialization methods -Xaiver and Kaiming initialization. Experimental results show that a 3-layers neural network with Kaiming initialization achieves 83% accuracy, which is the highest among all configurations experimented. Method In this paper, we adopt neural network classifiers and emphasize the effect of different layer numbers and initialization methods. Two initialization methods are under experiment: Xaiver [13] and Kaiming [10] initialization. Then, we investigate different configurations by varying the initialization methods and the number of layers. The optimal hyper-parameters for each model is chosen through experiments. Neural Network Artificial neural networks are hotspots in many fields. A neural network is composed of several layers, each of which contains multiple neurons. A neuron typically consists of three components: connection weight, adder, and activation function. Once many values from previous layers arrive at the input, the values are first multiplied by the weights on each connection. Then the adder sums up all the weighted values and forms the actual signal to the neuron. Finally, the activation function maps the input signal to a certain value within a permitted range. The dataset is preprocessed and sent to the first layer of the network. These values flow across layers and are processed by each layer. The final outputs of the last layer encode information for classifications. Three kinds of neural network architectures are evaluated in the paper: singlelayer, 2-layer, and 3-layer neural networks. For a 2-layer neural network, the number of hidden layers we use is 50, the same as in [9]. For a 3-layer neural network, the number of the first hidden layers is 50, and the number of the second hidden layer is 20. These parameters are explored and selected in the experiment. Neural network structures with deeper layers are not considered due to the limited samples. In this paper, we mainly focus on how different initialization methods influence the prediction results. For all models, the learning strategy adopted is statistic gradient descent(SGD) with momentum. We explore and choose the optimal hyperparameters for multi-layer perception(MLP) networks. Table 1 lists the hyperparameters selected for different network models with Xaiver and Kaiming initialization, including batch size(bs), learning rate(lr), and momentum (m). These parameters are chosen to maximize the accuracy of a certain network structure with specific initialization methods so that we can compare the optimality between different structures and initialization methods. Xaiver Initialization The parameters need to be initialized before neural network training. 
Weight initialization are typically randomized based on Gaussian distribution. However, with neural network depth increases, this approach suffers dramatically from gradient disappearance. The variance of activation values can be decreased layer by layer, causing the gradient vanishing layer by layer in the back propagation process. For training deeper neural networks, it is necessary to avoid the attenuation of the variance of the activation value. To tackle this problem, Xaiver Glorot [13] proposes that the output value of each layer should keep Gaussian distribution in both forward and backward propagation, which is the core of the Xaiver initialization method. A forward propagation involves the following calculations: where Y i , W i , X i , B i correspond to the outputs, weights, inputs, and biases of the neurons in the i th layer. In order to keep the forward signal strength unchanged, a necessary condition is to meet the requirements: Based on the following assumption of variable distribution: W, X, B are independent of each other We can get: In order to guurantee Var(Y i ) = Var(X j ), we need to satisfy d×Var(W ij ) = 1, which is equivalent to: Finally we get the following Xaiver initialization method: Kaiming Initialization Although Xaiver takes the variance of activation value into account, the activation function is still possible to change the distribution of the values flowed across layers. Kaiming initialization [10] is proposed to solve this problem. Consider a forward propagation: Based on Xaiver initialization, a new hypothesis is introduced in Kaiming initialization: X j has a symmetric distribution around 0, which means: And V ar(Y i ) = V ar(X j ) is still satisfied. Put it altogether, the Kaiming initialization method is as follows: Evaluation Metrics Based on the method discussed in Sec. 1, a total of 192 pieces of data are collected from 16 participants and each of them own 12 records. Among the total dataset, 20% of the data is set as the test set. For the remaining data, leave one out method is adopted to divide the training dataset and the validation set. The metrics to measure the performance of different models are accuracy, precision, recall and F1 score. In the binary classification problem, it is assumed that the sample has two categories: positive and negative. Depending on the prediction result, all samples are classified into 4 classes: The definitions of accuracy, precision, recall and F1 score are as follows: Precision = T P T P + F P Results We use the dataset to train the network configurations with different number of layers and initialization methods as stated in Sec. 2, using parameters listed in table 1. The experimental results are listed in table 2 -4. As depicted in table 2, with Kaiming initialization, the precision and recall are both 0 for the moderate class, which signifies that the performance of the model is extremely poor. From the 3 tables listed, we can draw several conclusions. 1. For both initialization methods, increasing the number of layers in the neural network helps to increase the overall performance. 2. For all network topologies, the initialization method dramatically affects the final accuracy results. 3. While the performance of Xaiver remains relatively stable, performance of Kaiming initialization improves faster as the number of layers increases. It performs much worse than Xaiver in the 1-layer network, comparable in the 2-layer network, and better in the 3-layer network. 
Therefore, Kaiming initialization is more sensitive to network topologies. 4. The 3-layer neural network with Kaiming initialization achieves the best accuracy of 83% among all configurations.

Discussion
According to the above results and analysis, although Kaiming initialization builds on the Xaiver method, it does not outperform Xaiver in all situations. While an appropriate initialization method is important for boosting the performance of a model, which method to use depends on many factors, such as the network topology, the hyperparameters, and the distribution of the dataset. The results also remind us to pay more attention to initialization methods in order to obtain better accuracy when using machine learning models in practice.

Limited by the sample size, this paper does not study other important factors further. For example, deeper neural networks are expected to perform better than the 3-layer configuration given more training samples. Besides, other network topologies such as CNNs and DNNs may also be applied. This inspires us to introduce and explore more appropriate network architectures in order to achieve higher prediction accuracy. Nonetheless, the results reported in this paper demonstrate the effectiveness of using physical signals to train network models for depression prediction without human effort, which is conducive to more objective diagnosis and early treatment of depression. Also, since the models learn the internal relationship between physical patterns and depression, we may further investigate what patterns the model has actually learned, and which pattern or indicator is the most significant contributor to depression. These remain open questions to be answered in our future work.
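To make the comparison concrete, the following PyTorch sketch builds the 3-layer perceptron described in Section 2 (hidden layers of 50 and 20 units) with either Kaiming or Xaiver (i.e., Xavier/Glorot) initialization, and runs a single step of stochastic gradient descent with momentum. The input dimension (85 = 23 GSR + 39 PD + 23 ST features), the number of classes, and the synthetic mini-batch are assumptions made only for illustration.

```python
# Minimal sketch (assumptions: 85 input features, 3 depression-severity classes,
# placeholder learning rate and momentum; not the paper's exact configuration).
import torch
import torch.nn as nn

def make_mlp(init="kaiming", n_in=85, n_hidden=(50, 20), n_out=3):
    layers, sizes = [], (n_in, *n_hidden, n_out)
    for i in range(len(sizes) - 1):
        lin = nn.Linear(sizes[i], sizes[i + 1])
        if init == "kaiming":
            nn.init.kaiming_normal_(lin.weight, nonlinearity="relu")
        else:  # Xavier (Glorot) initialization
            nn.init.xavier_normal_(lin.weight)
        nn.init.zeros_(lin.bias)
        layers.append(lin)
        if i < len(sizes) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

model = make_mlp("kaiming")
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(16, 85)                 # one synthetic mini-batch of feature vectors
y = torch.randint(0, 3, (16,))          # synthetic class labels
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
print(f"one training step done, loss = {loss.item():.3f}")
```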
2022-05-11T01:15:41.473Z
2022-05-10T00:00:00.000
{ "year": 2022, "sha1": "a32ae945cc417d82f4d91da4a321b4227cbd6d81", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a32ae945cc417d82f4d91da4a321b4227cbd6d81", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Computer Science" ] }
118989377
pes2o/s2orc
v3-fos-license
Are solar granules the only source of acoustic oscillations ? The excitation mechanism of low degree acoustic modes is investigated through the analysis of the stochastic time variations of their energy. The correlation between the energies of two different modes is interpreted as the signature of the occurrence rate of their excitation source. The different correlations determined by Foglizzo et al. (1998) constrain the physical properties of an hypothetical source of excitation, which would act in addition to the classical excitation by the turbulent convection. Particular attention is drawn to the effect of coronal mass ejections. The variability of their occurrence rate with the solar cycle could account for the variation of the correlation between IPHIR and GOLF data. Such an interpretation would suggest that the mean correlation between low degree acoustic modes is at least 0.2 % at solar minimum. Introduction The theoretical mechanism of excitation of solar acoustic modes by the turbulent convection is well documented (Goldreich & Keeley 1977;Kumar & Goldreich 1989;Balmforth 1992a,b,c;Goldreich et al. 1994). High frequency oscillations were interpreted in different ways (Kumar & Lu 1991;Goode et al. 1992;Restaino et al. 1993;Vorontsov et al. 1998), in order to address the question of the depth of the excitation source. Local observations led Rimmele et al. (1995) and Espagnet et al. (1996) to identify the excitation of 5-minute oscillations with acoustic events occurring in the downflowing intergranular regions, rather than overshooting granules. Considering granules of 1000 km diameter renewed on a time scale comparable with their turnover time (8 min), there are about 5 × 10 9 excitations by solar granules per damping time (3.7 days). Even if the number of efficient excitations is smaller by a factor 100 (Brown 1991), each low degree mode is stochastically excited so many times per damping time, that modes with different frequencies Send offprint requests to: foglizzo@cea.fr are expected to have uncorrelated energies. In particular, the fitting procedures used to determine the frequency, linewidth, and splitting of p modes often rely on the statistical independence of neighbouring modes (e.g. Appourchaux et al. 1998). Baudin et al. (1996), however, noticed a possible correlation between low degree p modes in IPHIR data. By measuring the mean correlation of p modes in IPHIR and GOLF data, and determining their statistical significance, Foglizzo et al. (1998) (hereafter F98) suggested the existence of an additional source of excitation during the IPHIR period (end of 1988). The goal of this study is to use the observed correlations as a constraint on the physical properties of this hypothetical additional source. The theoretical relationship between the occurrence rate of this source, its contribution to the energy of low degree p modes, and their correlation coefficient, is established in Sect. 2. Available observational constraints are reviewed in Sect. 3. The possible role of comets, X-ray flares and coronal mass ejections (CMEs) is discussed in Sect. 4. Correlation coefficient between two modes excited by a single mechanism 2.1. The energy of a mode seen as a random walk Let us consider impulsive excitations, distributed over the solar surface. The exciting events, indexed by k, occur at the random poissonian time t k . 
They correspond to a radial velocity perturbation v k (r) localized in a cone of angle α k around the random direction (θ k , φ k ), with a radial extension δr k . v k (r) is described by an amplitude v k and dimensionless shape functions g k and h k as follows: v k (r) ≡ g k (α)h k (r)v k , (1) h k (r) ∼ 1 for R ⋆ > r > R ⋆ − δr k , g k (α) ∼ 1 for α < α k . α(θ, θ k , φ − φ k ) ≥ 0 is the angle made by the direction θ, φ with the direction θ k , φ k . Let M k be the mass of gas in the volume defined by the functions g k and h k , so that the kinetic energy of the perturbation is E k = M k v 2 k /2. The mean interpulse time is defined as ∆t ≡< t k+1 − t k >. We assume that each impulsive event triggers free oscillations of the set of p modes. Each perturbation is projected onto the basis of eigenvectors. The damping time τ d depends in principle on the mode considered, but is nearly constant (∼ 3.7 days) for the modes considered by F98, in the plateau region between 2.5mHz and 3.5mHz (linewidth of 1µHz). In appendix A it is shown that the total energy of the oscillations associated to the frequency ω nl + mΩ is: where the real function g k l,m = g k l,−m depends on the angular shape of the excitation projected onto the spherical harmonics, and h k nl depends on the radial shape of the excitation projected onto the radial part v r nl (r) of the eigenfunction: The relationship between the spherical harmonics and the Legendre associated functions P m l through the constant q lm is recalled in Eqs. (A.5)-(A.6). The exponential damping in Eq. (4) can be schematized as a term selecting the finite set of excitations which occurred within one damping time before the time t: The series of phases ω nl t k and φ k are independent random variables uniformly distributed in the interval [0, 2π]. The series Φ k nlm defined by Eq. (6) is therefore also uniformly distributed in [0, 2π]. Ignoring the academic case where the ratio of the frequencies is a simple rational number, Φ k nlm and Φ k n ′ l ′ m ′ can be considered independent for two different realistic modes. As a consequence, the energy of the wave is interpreted as the squared length of a random walk in the complex plane. Each step of this random walk is defined by an amplitude a k nlm , and a phase Φ k nlm (negative values of the amplitude can be converted into an increment of the phase). The number of steps N is the number of excitations, over the whole surface of the sun, within one damping time: The central limit theorem ensures that the real and imaginary parts of the complex sum (9) converge towards independent normal distributions if N is large enough, resulting in an exponential distribution of energy (see also ). According to this random walk interpretation, two different modes excited by the same series of events must have correlated energies. This correlation should approach 100% if the number of steps of this random walk is small, i.e. if the interpulse mean time ∆t is longer than the damping time τ d . The independence of the phases Φ k nlm and Φ k n ′ l ′ m ′ , however, makes the correlation decrease to zero when the number of excitations increases. In particular, the random walks associated with the two components ω nl ± mΩ of a mode n, l have the same series of amplitudes, but have two independent series of phases. This leads us to expect that waves travelling in opposite directions do not have the same instantaneous total energy (except on average), although they are excited by the same events. 
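The decorrelation with increasing N can be checked with a short Monte Carlo experiment. The sketch below mimics Eqs. (8)-(9): two modes receive the same Poisson sequence of impulses with common amplitudes but independent phases, and the correlation of their energies is estimated over many realizations. All numerical values (damping time, amplitude distribution, rates) are arbitrary illustrations rather than solar values.

```python
# Illustrative Monte Carlo of the random-walk picture above: two modes share the
# same Poisson sequence of impulses and amplitudes but have independent phases;
# their energies decorrelate as the number of excitations per damping time N grows.
import numpy as np

rng = np.random.default_rng(2)
tau_d = 1.0                      # damping time (arbitrary units)
window = 10 * tau_d              # look back far enough that older kicks are negligible
n_real = 5_000                   # independent realizations

def energies(N):
    E = np.zeros((n_real, 2))
    for r in range(n_real):
        n_ev = rng.poisson(N * window / tau_d)
        t = -rng.uniform(0.0, window, n_ev)          # event times before observation at t=0
        amp = rng.lognormal(0.0, 1.0, n_ev)          # common step amplitudes
        damp = np.exp(t / tau_d)                     # exp(-(0 - t_k)/tau_d)
        for m in range(2):                           # independent phases for each mode
            phase = rng.uniform(0.0, 2 * np.pi, n_ev)
            E[r, m] = np.abs(np.sum(amp * damp * np.exp(1j * phase))) ** 2
    return E

for N in (0.1, 1.0, 10.0, 100.0):
    E = energies(N)
    C = np.corrcoef(E[:, 0], E[:, 1])[0, 1]
    print(f"N = {N:6.1f} excitations per damping time -> energy correlation ~ {C:.3f}")
```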
Theoretical correlation between the energies of two modes The mean energy < E nlm > of the mode nlm is directly proportional to the mean energy < e nlm > received from each excitation (appendix B): The correlation C n ′ l ′ m ′ nlm between two modes nlm, n ′ l ′ m ′ excited by the same source described by Eq. (4) is derived in appendix B: Latitudinal distribution and sizes of the excitations The ratio < e 2 nlm > / < e nlm > 2 , related to the spread of the distribution, is called kurtosis (it is sometimes defined with an additional constant −3 which we omit here). The kurtosis of e nlm is partly produced by the kurtosis of E k (independent of nlm), but also by the projection of the spatial distribution of excitations (θ k , α k , δr k ) onto the eigenfunctions nlm. The random variations of the size α k , δr k of the excitation play a negligible role as long as it is smaller than the wavelength of the mode nlm. For the modes l = 0 and l = 1 analysed by F98, we restrict ourselves to perturbations such that α k < π, and δr k is smaller than the depth of the first radial node of the eigenfunction v nl , and thus neglect the random variations of α k , δr k . Let us estimate the kurtosis of the distribution g lm (θ k ), due to the projection of the latitudinal distribution of excitations on the mode nlm. From Eq. (7), the function g lm (θ k ) is independent of θ k for the radial mode l = 0: By contrast, the mode l = 1, m = ±1 is more excited by equatorial events than polar ones. For small scale excitations (α k ≡ 0) distributed uniformly over the sphere, Eq. (7) implies: The kurtosis of g 2 1,±1 should reach a value even closer to unity if the excitations are distributed in an equatorial region, like CMEs at solar minimum or big flares. Let us approximate the kurtosis of e nlm as the product of the kurtosis of E k by the kurtosis of g 2 lm (θ k ). Using this approximation in Eq. (12), the correlation between low degree modes l = 0, m = 0 and l = 1, m = ±1 excited in the random direction (θ k , φ k ) by a small scale excitation (α k ≡ 0) is: C n ′ ,1,±1 n,0,0 where we have incorporated the kurtosis of E k into the definition of the effective number N eff of excitations per damping time: The correlation displayed in Fig. 1 appears to be hardly sensitive to the geometrical content of the function g lm , since all curves are similar within 10%. Error bars of the observationally measured correlations being typically larger than 0.1, Eq. (17) shall be considered accurate enough for the purpose of this study: This is equivalent to approximating the kurtosis of e nlm by the kurtosis of E. Case of two excitation mechanisms superimposed F98 used a one parameter model of two sources of excitation, such that each mode contains a fraction λ of energy common to all the modes. They assumed that both sources produce exponential distributions of energy. This simplification led to the simple relation λ = C 1/2 . This simplification, however, does not apply to the case of a correlation produced by rare impulsive events. If N ≪ 1, the energy is damped to zero except during isolated pulses of duration ∼ τ d /2. The distribution E thus follows a Bernoulli statistics, with Var(E)/ < E > 2 ≫ 1 (Eq. B.8) instead of 1 for an exponential distribution. The occurrence rate of the excitation mechanism therefore appears to be a key parameter which cannot be neglected. 
Let us consider two excitations mechanisms acting simultaneously on two modes n, l, m and n ′ , l ′ , m ′ : (i) excitation by granules, with a high occurrence rate (N 1 ≫ 1). (ii) excitation by additional events with a total acoustic energy E k ad , occurring on average N ad times per damping time, and contributing to a fraction λ nlm of the power of the mode nlm. We show in appendix C (Eq. C.7) that the correlation between the energies of two oscillators excited by these sources is the same as in Eq. (12), but multiplying the kurtosis of e nlm by λ 2 nlm . The latitudinal distribution and sizes of the excitations play a relatively small role according to Sect. 2.3. The effective number N eff of excitations per damping time depends on the kurtosis β ad of E ad : Making the additional assumption that the fraction λ nlm = λ varies little among the modes considered by F98. The mean correlation between these modes is: A significant correlation can thus be produced by a source representing only a small fraction λ of the total power of each mode if N eff is small enough. Interpreting the observed correlation C as a consequence of an additional source of excitation therefore requires it to contribute to a fraction λ of the total power, deduced from Eq. (24): According to Eq. (11), the fraction λ of power coming from the additional source is related to the mean acoustic energy input < e ad > into the mode nlm: Eq. (25) becomes: Let us assume that the occurrence rate N ad varies from N min to N max with the solar cycle, while the properties of the exciting events (< e ad >, β ad ) remain unchanged. We use Eq. (28) to express the relationship between the minimum and maximum correlations C min , C max : Efficiency of the generation of acoustic waves Since the distribution of acoustic energy E ad produced by the additional source is not directly observable, we are led to assume that it resembles the observed distribution of total energy E T of the additional source. Let p T (E) be the density of probability of the source of energy E min ≤ E T ≤ E max , occurring N T times per damping time. Let us assume that the excitation of low degree modes is efficient only in the range E 1 ≤ E T ≤ E 2 , inside which the efficiency f ad ≡ E ad /E T is constant: In appendix C, Eq. (28) is rewritten as the fraction of the mean acoustic energy < E ad > which must be injected into each mode nlm in order to produce the observed correlation: This fraction is therefore minimal when the range of efficient excitations E 1 , E 2 contains the range of energies where the product E 2 T p(E T ) is maximal. Using in Eq. (28) the occurrence rate N T and kurtosis β T of the distribution of total energy E T instead of the distribution of acoustic energy E ad leads to a lower bound of the ratio < e ad > / < E ad >: For a given excitation mechanism one can estimate the number N of p modes with a wavelength longer than the size of each exciting event, which receive a comparable amount of energy from the source. Considering that the low degree modes in F98 belong to this set, From Eqs. (33)-(34) we deduce a constraint on observable quantities, which shall be useful in Sect. 4 in order to discriminate between possible excitation mechanisms: Exponential distribution of individual p modes energy The distribution of energy of low degree p modes agrees reasonably well with an exponential distribution (Chaplin et al. 1995(Chaplin et al. , 1997 for BiSON data, F98 for IPHIR and GOLF data). Chaplin et al. (1995Chaplin et al. 
( , 1997, however, noticed significant deviations in the high energy tail of the distribution, which could be due to an additional excitation mechanism. They estimated that the probability that such deviations occur by chance during the period of observation is only 0.1%. Among 22512 events covering 18331 hours of BiSON data from 1987 to 1994, Chaplin et al. (1997) found 51 events above 6.5 times the mean energy, whereas less than 41 would be expected in 90% of the cases for an exponential distribution. Crudely speaking, about 10 events are unexpected in the distribution, suggesting N ad ≥ 0.05. Solar cycle variations of the total power of low degree p modes The typical energy of a low degree p mode in the range 2.7mHz ≤ ν ≤ 3.4mHz considered by F98 is < E >∼ 8 × 10 27 ergs (Chaplin et al. 1998). The velocity damping time being τ d ∼ 3.7 days, the flux of energy required to excite this p mode is 2 < E > /τ d ∼ 5 × 10 22 ergs s −1 . According to Libbrecht et al. (1986), the total energy in all the p modes (about 10 7 ) is 10 34 ergs within a factor 10. Anguera Gubau et al. (1992) and Elsworth et al. (1993) measured a global 30% decrease of the power of low degree p modes at solar maximum. This decrease seems to preclude a high value of the fraction λ of the total power which some additional source could contribute to. Nevertheless, the physical mechanisms by which the p mode power might decrease, such as damping by active regions, or modification of the properties of the convection, have not yet been quantitatively estimated. This might be efficient enough to dominate the energy input due to an additional source. Moreover, the measurement of the amplitude of global p modes is influenced by the presence of active regions covering a significant fraction of the solar surface at solar maximum (Cacciani & Moretti, 1997). Given these uncertainties, no firm constraint can be deduced from these observations. We shall consider "likely" a fraction λ ≤ 30%. Observed correlations F98 determined the mean correlations between the energies of low degree p modes at two different epochs: 160 days in 1988 near solar maximum using IPHIR data (l = 0, 19 ≤ n ≤ 23 and l = 1, 18 ≤ n ≤ 23), and 310 days in 1996-97 near solar minimum using GOLF data (l = 0 and l = 1, 17 ≤ n ≤ 25). The mean correlation coefficient C they measured is: (i) C = 10.7 ± 5.9% in 1988 (IPHIR data), (ii) C < 0.6% in 1996-97 (GOLF data). According to F98, the probability that the correlation measured from IPHIR data could occur by chance is 0.7% if the modes were independent. F98 also rejected the possibility of an instrumental artefact by checking that the noise of IPHIR data at different frequencies is not correlated. If the granules were the only source of p-mode excitation, the correlation would be less than 10 −5 % according to Eq. (22), i.e. well below the detection limit. The actual limit of detection, of order 0.6% (F98), corresponds to at least one exciting event every 29 minutes, on average. If due to a single mechanism of excitation, the correlation measured with IPHIR data would correspond to an exciting event every 12 ± 7.2 hours. Fig. 2 shows the relationship between N eff and λ required by the correlations observed in IPHIR data, and the upper bound set by GOLF data. The correlation measured by F98 in 160 days of IPHIR data imposes that N ad ≥ τ d /160 = 0.02 in the IPHIR period, which is coherent with the fraction of abnormal events in BiSON data. Applying Eq. 
(25)-(27) to a typical mode of energy 8×10 27 ergs and damping time τ d = 3.7 days, the constraint C ≥ 4.8% leads us to look for a mechanism occurring at most a few times per day in 1988, with an acoustic energy input of at least 10 26 ergs per mode: Table 1. Energy, momentum and occurrence rate of a free falling comet, a X-ray flare, and a CME (Zarro et al. 1988;Crosby et al. 1993;Hundhausen 1997) comet flare CME kinetic energy (ergs) average -∼ 10 30 6.7 × 10 30 maximum 2 × 10 32 ∼ 10 33 4 × 10 32 kurtosis β -∼ 33 4-20 momentum (g cm s −1 ) average -∼ 10 22 ∼ 10 23 maximum 6 × 10 24 -∼ 10 24 occurrence rate (day −1 ) solar minimum < 0.1 1 0.2 solar maximum < 0.1 19 3 receive the same energy input as low degree modes, which corresponds to a total acoustic energy of ∼ 10 34 ergs per event. This already exceeds by two orders of magnitude the kinetic energy of a comet with the mass as Halley's hitting the solar surface; not to mention the variability issue between IPHIR and SOHO periods, nor the difficult question of the efficiency f of the energy transfer (addressed by Gough 1994;Kosovichev & Zharkova 1995). Observations of waves generated by flares Although sunspots are known to absorb high degree acoustic waves (Braun et al. 1987(Braun et al. , 1988, observations of the waves generated by a large solar flare on 24th April 1984 by Haber et al. (1988a,b) showed a 19% increase of power in outward travelling wave, dominating the sunspot absorption. Nevertheless, Braun & Duvall (1990) also observed the wave emission from a flare on 10th March 1989, and found an upper bound of 10% power increase, if any. Very recently, Kosovichev & Zharkova (1998) analysed the shock wave produced by the "fairly moderate" flare of 9th July 1996, observed by MDI aboard SOHO. The wave amplitude is associated to a momentum a factor 30 smaller than the one expected from their theoretical model. This unexpectedly high efficiency led them to conclude that the seismic flare source might be located in subsurface layers. The effect of flares on low degree modes is however less clear. The high energy events of BiSON data, appearing above the exponential energy distribution of low degree modes, do not seem to be correlated neither with the sunspot number, nor with the strength of X-ray flares (Chaplin et al. 1995). Using the sum of normalized energies of low degree modes in IPHIR data, Gavryusev & Gavryuseva (1997) found an anticorrelation between big pulses in p modes and the mean solar magnetic field but no correlation with the sunspot number. Theoretical estimates of p mode excitation by flares The theoretical estimate of the energy transmitted from a flare to p modes can follow three different approaches: (i) the region surrounding the flare is heated and expands vertically on a time scale short enough to communicate momentum to the atmosphere below it (Wolff 1972). This process could extract 10 28 ergs of acoustic energy from a 10 32 ergs flare (f ∼ 10 −4 ). According to Wolff (1972), the error bar in this estimate is at least a factor 10. (ii) the plasma flows down towards the foot points of the field lines, and hits the solar surface, thus communicating momentum to it. This approach was followed by Kosovichev & Zharkova (1995) who found a smaller effect than that observed by Haber et al. (1988a). (iii) the pressure perturbation associated with the restructuring of the magnetic field in the flare region might be more effective, as suggested by Kosovichev & Zharkova (1995). 
Energy distribution of flares The total energy released by a flare (less than 10 27 ergs to 10 33 ergs) is usually computed assuming that the observed hard X-rays are produced by bremsstrahlung from a distribution of accelerated electrons impinging upon a thick target plasma. Using the Hard X-Ray Burst Spectrometer on the Solar Maximum Mission satellite from 1980 to 1989, Crosby et al. (1993) showed that the flaring rate varies by about a factor 20 between the solar maximum and minimum, while the energy distribution remains constant. Applying Eq. (29) to the correlation observed by IPHIR, with the 30% variation of the p mode total power, suggests the following correlation C min at solar minimum: which is compatible with the upper bound obtained from GOLF data. The occurrence rate of flares follows a power law distribution p(E) ∼ E −γ against the total flare energy with a slope γ < 2, indicating that the largest flux of energy occurs in rare big energy events. Let E min ≪ E max be the range of energies over which the power law distribution is observed, its mean energy < E F > and kurtosis β F are: According to Crosby et al. (1993), about 13 flares occurred per day in 1980-1982 (solar maximum), with a slope of the energy distribution γ ∼ 1.5 in the range 10 28 ≤ E F ≤ 10 32 ergs. Two thirds of the flares considered by Crosby et al. (1993) fall in this range according to their Fig. 6, so that T. Foglizzo: Are solar granules the only source of acoustic oscillations ? 7 the occurrence rate of these flares can be taken as 8.7 per day (N cme ∼ 32.2). Eqs. (39)-(40) imply < E F >∼ 10 30 ergs and β F ∼ 33. Assuming that the efficiency f does not depend on the energy, the distributions of total energy and of energy input per mode are identical, and N eff = 1.0. Eqs. (25) and (33) imply: With E 2 p(E) ∼ E 1/2 , Eq. (32) indicates that Eq. (43) would still be correct if the energy dependence of the efficiency f (E) were to select only the most energetic events. The size of the flare region viewed from earth is no more than a few arcminutes, so that all p modes with a degree l ≤ l max must receive an equal amount of energy. With l max = 37 according to Wolff (1972), this corresponds to a set of N ≥ 10 4 modes, which contradicts Eq. (35). In conclusion, the total acoustic energy input required from X-ray flares to produce the observed correlation exceeds their total energy. Geometry and timing of CMEs Coronal mass ejections correspond to the release of 3.3 × 10 15 g of matter on average, with a mean velocity of 350 km s −1 (SMM data from 1980and 1984-1989, Hundhausen 1997. The footpoints of the outer loop are separated on average by about 45 degrees in latitude, i.e. much larger than an active region or flare (Harrison 1986, Hundhausen 1993. This large scale structure favours low degree modes. According to Hundhausen (1994), the origin of the CME comes from a gradual build up and storage of energy in a pre-ejection structure (driven by the spreading of magnetic field lines, the emergence of magnetic flux, or the shear of field lines). A finite quantity of this energy is then released in a catastrophic breakdown in the stability or equilibrium of the stressed structure. When the CME is associated with an X-ray flare, the CME kinetic energy seems to be uncorrelated to the flare peak intensity (Hundhausen 1997). In Hundhausen (1994), the acceleration of the CME observed by SMM on 23rd August, 1988 was measured. 
Its launching precedes the flare event (see also Harrison 1986), and the acceleration to a velocity of 1000 km s −1 occurs in less than 10 min (see the CMEs of 17th August, 1989 and16th February, 1986 for similar examples in Hundhausen 1994). Note that a CME was associated to the flare of 9th July, 1996 analysed by Kosovichev & Zharkova (1998). Solar cycle variability While the average mass of a CME varies little from year to year (Hundhausen et al. 1994b), the annual variation of their mean velocity does not follow the solar cycle (Hundhausen et al. 1994a). It would therefore be interesting to check the correlation between the high energy events noticed by Chaplin et al. (1995Chaplin et al. ( , 1997 and this new indicator. According to Webb & Howard (1994), the occurrence rate of CME varies from 0.2 per day at solar minimum to 3 per day at solar maximum. As for flares, the variation of the occurrence rate is enough to account for the observed variation of the correlation. Eq. (29) implies: Hundhausen et al. (1994a,b) showed that the distribution of CME velocities (10 km s −1 to 2000 km s −1 ) is more widely spread than their distribution of masses (10 14 to 10 16 g). As a consequence, the distribution of kinetic energy is spread over a much wider range than the potential energy. Hundhausen (1997) selected a sample of 249 CMEs measured by SMM in 1984-1989, associated with X-ray flares. The spread of their distribution of kinetic energy (from 10 28 to 10 33 ergs), measured from Fig. 1 of Hundhausen (1997) is such that β cme ∼ 20.2. Note that a smaller value (β cme ∼ 3.9) is obtained when considering the distribution of squared velocities of 109 CMEs during the 160 days IPHIR period in the catalogue of Burkepile & St. Cyr (1993). The apparent contradiction may come partly from the distribution of masses, but mostly from the high sensitivity of β cme to the high energy events which occurred in 1989 at solar maximum. Moreover, fast CMEs are underrepresented in those statistics because the velocity is measured only when the CME is visible on more than one coronograph image (Hundhausen et al. 1994a). As for flares, the distribution of acoustic energy input might be different from the distribution of CMEs kinetic energy, for example, if only a subclass of CMEs excite acoustic waves. In particular, some CMEs are known to be slowly accelerated with the solar wind, while others are much faster than the solar wind (MacQueen & Fisher 1983). Estimates of both λ and the acoustic energy input per mode < e cme > would be multiplied by a factor 2.2 if the value β cme = 4 were chosen. Although the kinetic energy Fig. 3. Relationship between the mean acoustic energy input per mode of each impulsive event and its occurrence rate in order to produce the correlation measured from IPHIR or GOLF data. The kurtosis of the energy distribution of the source of excitation is β ad = 20, the energy of the mode is 8×10 27 ergs and its damping time is τ d = 3.7 days. The corresponding fractions of total power λ = 1 and 0.3 are indicated by the continuous and long dashed lines. The occurrence rates of CMEs at solar minimum and maximum are indicated by circles. distribution of CMEs is less accurately known than for flares, one can infer from the published ones (Hundhausen et al. 1994b, Hundhausen 1997) that the product E 2 p(E) is maximum at high energy. As for flares, Eq. (47) would not be changed if the efficiency of acoustic wave generation were highest at high energy. Although Eqs. (45)-(47) resemble Eqs. 
(41)-(43) obtained for flares, the following important differences should be noted: (i) the estimate for CMEs is based on their kinetic energy only, which is certainly much smaller than the energy of the mechanism responsible for their ejection. (ii) the large scale of CMEs might favour the excitation of low degree modes, especially if their ejection mechanism is rooted in the convection zone (N ≤ 100). (iii) although the range of energy of CMEs (kinetic + potential ∼ 8.5 × 10 30 ergs on average, Hundhausen et al. 1994b) is the same as for big flares, their momentum can be one hundred times larger than the momentum carried by downflowing electrons produced by flares (see Table 1). This is favourable to a higher efficiency of the energy transfer to acoustic modes. Conclusion We have investigated the consequences of interpreting the observed correlations in terms of an hypothetical excitation mechanism, in addition to the well established excitation by the granules. In particular, this interpretation requires impulsive events occurring no more than a few times per day in the IPHIR period. This has drawn our attention to the largest X-ray flares and CMEs at solar maximum. The variation of their typical energy and occurrence rate with the solar cycle could account for the variation of the correlation between the IPHIR and GOLF observations. If due to solar flares, the correlation determined from IPHIR data requires that at least 10 −4 of the energy of each X-ray flares is injected into each low degree mode at solar maximum. Given the number of modes (at least 10 4 due to the small scale of flares) which should receive the same energy as low degree modes, we are inclined to exclude X-ray flares regardless of the efficiency of the acoustic wave emission by each event. This reasoning, however, does not exclude the possibility of more energetic processes associated to flares, such as the restructuring of the magnetic field in the flare region mentioned by Wolf (1972) and Kosovichev & Zharkova (1995). If the correlation is due to CMEs, at least 3.6 × 10 −5 of the kinetic energy of each CME should be injected into each low degree mode at solar maximum. This leaves two possibilities open: (i) CMEs may correlate only a few tens of low degree modes by injecting a few per cents of the CME kinetic energy into these modes. Higher degree modes (l ≥ 10) would not be correlated in this case. (ii) CMEs could correlate more modes if significantly more than the observed mechanical energy of CMEs can be extracted from their energy reservoir and injected into acoustic modes. If all CMEs participate to acoustic emission with an efficiency f independent of their energy, they should be responsible for a fraction 16.7% ≤ λ ≤ 33.2% of low degree p mode total power at solar maximum. Nevertheless, this fraction might be significantly smaller if only a subset of high energy CMEs participates to acoustic emission. The contribution of CMEs to the low degree p mode power has to be reconciled with the observed 30% decrease of their power at solar maximum. A better theoretical understanding of the efficiency of the energy transfer from a CME event to acoustic modes is needed. 
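To make the orders of magnitude above concrete, the sketch below computes the kinetic energy of the average CME quoted earlier (3.3 × 10^15 g at 350 km s^−1, about 2 × 10^30 ergs) and illustrates how strongly β = <E^2>/<E>^2 responds to a handful of fast events, in the spirit of the contrast between β cme ∼ 20 and ∼ 4 discussed above. The sample of masses and velocities is synthetic and purely illustrative; it is not drawn from the SMM or Burkepile & St. Cyr catalogues.

```python
import numpy as np

def kinetic_energy_erg(mass_g, speed_km_s):
    """Kinetic energy in erg for a mass in grams moving at speed_km_s (CGS units)."""
    v_cm_s = speed_km_s * 1e5
    return 0.5 * mass_g * v_cm_s**2

def beta(energies):
    """Normalized second moment <E^2>/<E>^2, used here as the 'kurtosis' of the distribution."""
    e = np.asarray(energies, dtype=float)
    return np.mean(e**2) / np.mean(e)**2

# average CME quoted in the text: 3.3e15 g at 350 km/s
print(f"mean CME kinetic energy ~ {kinetic_energy_erg(3.3e15, 350.0):.1e} erg")

# synthetic sample: 100 unremarkable CMEs plus three very fast ones
rng = np.random.default_rng(0)
masses = 10 ** rng.uniform(14, 16, size=100)     # 1e14 - 1e16 g, as in the observed range
speeds = rng.uniform(100, 700, size=100)         # km/s
quiet = kinetic_energy_erg(masses, speeds)
fast = kinetic_energy_erg(np.array([5e15, 8e15, 1e16]), np.array([1500.0, 1800.0, 2000.0]))

print(f"beta without fast events: {beta(quiet):.1f}")
print(f"beta with 3 fast events:  {beta(np.concatenate([quiet, fast])):.1f}")
```

The first value of β is modest; adding only three fast, massive events raises it sharply, which is why a few 1989 events can dominate the estimate of β cme.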
If our interpretation of IPHIR correlations is correct, we should be able to make the following observational tests: -detect the effect of the largest CMEs on the energy of low degree p modes, even at solar minimum, -measure the correlation C min ≥ 0.2% at solar minimum, using longer time series and more modes than F98, -confirm with SOHO, during the next solar maximum, the correlations determined from IPHIR data, measure them with a better accuracy, and determine whether higher degree modes are also correlated. Observations of the influence of CMEs on acoustic modes could improve our understanding of both the excitation of acoustic modes and the ejection mechanism of CMEs. Acknowledgements. The author thanks T. Amari, M. Tagger, D. Gough and D. Saundby for helpful discussions. Appendix A: Energy of a mode excited stochastically Let us consider a spherically symmetric, adiabatic model of the sun. Any perturbation of velocity v can be projected onto the basis of orthogonal eigenfunctions v nlm , associated to the (supposedly real) eigenvalues ω nl . The structure of the equations allows us to write each component of the velocity vector as the product of a function of r only and a function of θ, φ (see e.g. Unno et al. 1989). This function is a spherical harmonic for the radial velocity The eigenvectors are normalized as follows: The spherical harmonics are written in terms of Legendre associated functions P m l : The radial velocity perturbation v k (r) is described by Eqs. (1)-(3). The angle α ≥ 0 made by the direction θ, φ with the direction θ k , φ k is defined by: By projecting the perturbation onto the basis of eigenvectors, we obtain: where the real dimensionless functions g k lm (θ k ) and h k nl are defined as follows: Note that the function g k l,m = g k l,−m is real because of the cylindrical symmetry of the impulsion. Eq. (A.9) corresponds to initial conditions with zero displacement at t = t k , suitable to describe the transfer of impulsion due to a shock. Assuming that the eigenfrequencies ω nl are real, we add by hand a damping term, with a time scale τ d . After a series of excitations indexed by k, and neglecting the transient velocities present during each event, the linearity of the equations allows us to write the velocity as follows: Denoting by ξ nlm (r, t) and V nlm (r, t) the displacement and velocity associated to the mode (n, l, m), the acoustic energy E nlm of each mode is the sum of the kinetic and potential energies: Neglecting the slow damping compared to the fast oscillations (ω nl τ d ∼ 7 × 10 3 ≫ 1 for the p modes we consider), and using the normalization (A.3), we may write the energy as follows: Using Eq. (A.14) and some algebra, we separate the two contributions from the waves propagating azimuthally in opposite directions: Although Eqs. (A.14) and (A.16) imply E nlm = E nl−m , the two components E + nlm and E − nlm , are not equal, except on average. This separation of the components of the energy is important since the rotation enables us to separate these two components. Let us restrict ourselves to the simplest case of a solid body rotation, and neglect the Coriolis forces. This is equivalent to replacing in Eq. (A.18) the azimuthal angle φ by φ + Ωt and φ k by φ k + Ωt k , and thus we obtain Eqs. (4)-(6). E + nlm is the energy associated to the frequency ω nl + mΩ, while E − nlm is associated to the frequency ω nl − mΩ. 
The average energy input < e nlm > of a single random excitation onto the mode n, l, m is proportional to the total kinetic energy E of the perturbation: Appendix B: Correlation produced by a single excitation mechanism B.1. Case N ≪ 1 In order to treat the general case, we need to establish first some properties of the case N ≪ 1. The index nlm of the energy is omitted in what follows, for the sake of clarity The mean energy < E > of the damped oscillator is directly proportional to the mean energy input < e > of each impulsive event: 3) The energies E, E ′ of two modes nlm, n ′ l ′ m ′ satisfy: (B.4) Appendix C: Correlation produced by two excitation mechanisms Let us consider two independent series of impulsive events due to two different excitation mechanisms superimposed, indexed by k, characterized by the series of velocity perturbations v (1) k , and mean interpulse times ∆t (1) , ∆t (2) . The energy of a mode (nlm) can be decomposed into two random walks of N1 and N2 steps. Let us write this equation for two modes (nlm) and (n ′ l ′ m ′ ). Hereafter we simply denote the quantities specific to the mode (n ′ l ′ m ′ ) with a prime, for the sake of clarity. Ei and E ′ i are the energies of the modes nlm and n ′ l ′ m ′ excited by the excitation mechanism (i) only (i = 1, 2). The series of phases Φ k(1) , Φ k(2) , Φ k(1) ′ , Φ k(2) ′ are independent. where Ci is the correlation between Ei and E ′ i . Let us define the fractions λ, λ ′ of the total power of each mode contributed by the second mechanism as follows: Let convection be the first excitation mechanism (C1 = 0), and let us use the index (ad) for the properties N ad , e ad of the second excitation mechanism. Using Eqs. (B.8), (12) and (C.3) in (C.4), we obtain:
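The closing expression of Appendix C is missing from this excerpt and is not reconstructed here. As a stand-in, the Monte Carlo sketch below implements the two-mechanism picture of Appendices A-C under simplifying assumptions: each of two modes is a damped complex amplitude that receives (i) frequent, mutually independent "granulation" kicks and (ii) rare impulsive kicks that share the same event times but carry independent phases, as assumed for the phase series Φ k above. The Pearson correlation of the two energy time series then plays the role of C. The damping time of 3.7 days and the 160-day duration follow the values quoted earlier; the kick amplitudes and the one-event-per-day rate are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_two_modes(days=160.0, dt=0.01, tau_d=3.7,
                       granule_power=1.0, event_rate=1.0, event_energy=50.0):
    """Two damped oscillator amplitudes excited by independent granulation noise plus
    impulsive events that are shared in time but have independent random phases."""
    n = int(days / dt)
    decay = np.exp(-dt / tau_d)          # amplitude damping per step (simplified convention)
    history = np.zeros((2, n))
    amp = np.zeros(2, dtype=complex)
    for i in range(n):
        amp = amp * decay
        # granulation: many small kicks, independent for the two modes
        amp += np.sqrt(granule_power * dt / 2) * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
        # impulsive events: same occurrence times for both modes, independent phases
        if rng.random() < event_rate * dt:
            amp += np.sqrt(event_energy) * np.exp(2j * np.pi * rng.random(2))
        history[:, i] = np.abs(amp) ** 2  # instantaneous mode "energies"
    return history

e1, e2 = simulate_two_modes()
print(f"shared impulsive events: C = {np.corrcoef(e1, e2)[0, 1]:.2f}")

e1, e2 = simulate_two_modes(event_rate=0.0)
print(f"granulation only:        C = {np.corrcoef(e1, e2)[0, 1]:.2f}")
```

With shared impulsive events the two energies rise and decay together and C is clearly positive; with granulation alone C is consistent with zero, which is the behaviour the correlation analysis above relies on.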
A diverse and evolutionarily fluid set of microRNAs in Arabidopsis thaliana To better understand the diversity of small silencing RNAs expressed in plants, we employed high-throughput pyrosequencing to obtain 887,000 reads corresponding to Arabidopsis thaliana small RNAs. They represented 340,000 unique sequences, a substantially greater diversity than previously obtained in any species. Most of the small RNAs had the properties of heterochromatic small interfering RNAs (siRNAs) associated with DNA silencing in that they were preferentially 24 nucleotides long and mapped to intergenic regions. Their density was greatest in the proximal and distal pericentromeric regions, with only a slightly preferential propensity to match repetitive elements. Also present were 38 newly identified microRNAs (miRNAs) and dozens of other plausible candidates. One miRNA mapped within an intron of DICER-LIKE 1 ( DCL1 ), suggesting a second homeostatic autoregulatory mechanism for DCL1 expression; another defined the phase for siRNAs deriving from a newly identified trans -acting siRNA gene ( TAS4 ); and two depended on DCL4 rather than DCL1 for their accumulation, indicating a second pathway for miRNA biogenesis in plants. More generally, our results revealed the existence of a layer of miRNA-based control beyond that found previously that is evolutionarily much more fluid, employing many newly emergent and diverse miRNAs, each expressed in specialized tissues or at low levels under standard growth conditions. Small silencing RNAs direct transcriptional and posttranscriptional gene silencing activities that shape eukaryotic transcriptomes and protein output (Chen 2005; Jones- Rhoades et al. 2006;Mallory and Vaucheret 2006). In plants, these small regulatory RNAs are comprised of microRNAs (miRNAs) and several classes of endogenous small interfering RNAs (siRNAs), which can be differentiated by their distinct modes of biogenesis and the types of genomic loci from which they derive. The miRNAs derive from primary transcripts that form characteristic stem-loop structures (Ambros 2004;Bartel 2004;Jones-Rhoades et al. 2006). For characterized Arabidopsis miRNAs, this miRNA stem-loop precursor is processed by a Dicer-like RNAseIII-type ribonuclease (DCL1) to generate the miRNA:miRNA* duplex, with 2-nucleotide (nt) 3Ј overhangs (Park et al. 2002;. The miRNA* species derives from the opposing arm of the hairpin and pairs imperfectly to the miRNA . The miRNA strand preferentially incorporates into a silencing complex that has at its core the ARGONAUTE1 (AGO1) protein Baumberger and Baulcombe 2005;Qi et al. 2005). Plant miRNAs have imperfect but extensive complementarity to their mRNA targets, enabling these targets to be predicted with confidence, particularly when the miRNA:target pairing is conserved in multiple species Jones-Rhoades and Bartel 2004). Plant miRNAs typically direct cleavage of their targets (Llave et al. 2002;Tang et al. 2003). The conserved targets of plant miRNAs are predominantly messages of transcription factors, and the importance of miRNA-mediated regulation of many of these targets for proper embryonic, vegetative, and/or floral development is well established (Chen 2005; Jones- Rhoades et al. 2006;Mallory and Vaucheret 2006). Conserved miRNA targets also include messages for other developmental factors, such as F-box proteins, DCL1, and AGO1, and messages for nondevelopmental factors, such as stress-response proteins (Chen 2005; Jones- Rhoades et al. 2006;Mallory and Vaucheret 2006). 
Endogenous siRNAs derive from long double-stranded RNA (dsRNA) formed as a product of an RNA-dependent RNA polymerase (RdRP), convergent transcription, or transcription of repeats. They typically perform autosilencing, in that they target DNA or transcripts corre-sponding to (or related to) the loci from which they derive. Exceptions are the ∼21-nt trans-acting siRNAs (tasiRNAs), which derive from nonprotein-coding genes known as TRANS-ACTING siRNA (TAS) genes and post-transcriptionally down-regulate protein-coding transcripts from unrelated loci in a fashion reminiscent of miRNA-directed repression (Peragine et al. 2004;Vazquez et al. 2004;Allen et al. 2005;Yoshikawa et al. 2005). A segment of the TAS transcript is converted to dsRNA by RDR6, which is then successively cleaved by DCL4 into 21-nt siRNAs that are then loaded into an AGO1-or AGO7-containing silencing complex where they direct cleavage of the mRNA targets (Peragine et al. 2004;Vazquez et al. 2004;Allen et al. 2005;Gasciolli et al. 2005;Xie et al. 2005b;Yoshikawa et al. 2005;Adenot et al. 2006;Fahlgren et al. 2006;Hunter et al. 2006). One hallmark of tasiRNAs is that they are processed in phase from predominantly one register, which greatly decreases the diversity of siRNAs that accumulate to appreciable levels from a particular TAS locus and ensures production of those with intended targets Allen et al. 2005). To define the proper phasing register, each of the known TAS transcripts is cleaved by a miRNA-programmed silencing complex (Allen et al. 2005;Yoshikawa et al. 2005). Another type of endogenous siRNA directing PTGS in plants is natural antisense siRNA (nat-siRNA). In the founding example of nat-siRNA-directed silencing, high salt levels induce the expression of SRO5, one of a pair of convergently transcribed genes, such that in the presence of transcripts from the other gene, P5CDH, a DCL2/RDR6/SGS3/NRPD1a-dependent 24-nt siRNA is produced that directs cleavage of P5CDH transcripts (Borsani et al. 2005). This creates a terminus for RdRP production of dsRNA and subsequent processing into secondary siRNAs by DCL1, which can also target P5CDH messages (Borsani et al. 2005). A third type of endogenous siRNA found in plants is heterochromatic siRNA. The concerted activity of plantspecific DNA-dependent RNA polymerases, PolIVa and PolIVb, correlates with the accumulation of 24-nt heterochromatic siRNAs via RDR2-mediated dsRNA formation and DCL3-mediated processing Chan et al. 2005). A fraction of these siRNAs associate with AGO4 to form a silencing complex thought to direct sequence-specific methylation events at the DNA and/or chromatin level, which in turn can lead to heterochromatin formation and maintenance at loci from which the siRNAs arise, such as retroelements and the 5S rDNA arrays (Herr et al. 2005;Kanno et al. 2005;Onodera et al. 2005;Pontier et al. 2005). Other siRNAs depend on PolIVa-RDR2-DCL3 but not PolIVb or AGO4, and are not associated with DNA methylation and heterochromatin (Zilberman et al. 2003;Xie et al. 2004;Pontier et al. 2005;Pontes et al. 2006). Their function remains unknown. Conventional cloning and sequencing of small RNAs from Arabidopsis has suggested that plants have a remarkable diversity of endogenous small RNAs (Llave et al. 2002;Park et al. 2002;Sunkar and Zhu 2004;Xie et al. 2004). Recently, massively parallel signature sequencing (MPSS) was employed to obtain a set of 77,434 unique 17-nt signatures of endogenous small RNAs from wild-type Arabidopsis (Lu et al. 2005). 
An appealing alternative that combines the fulllength small RNA information of conventional sequencing with the high-throughput character of MPSS is a pyrophosphate-based high-throughput sequencing technique (Margulies et al. 2005). This technique has recently been applied on a pilot scale to the sequencing of Arabidopsis small RNAs, generating between 13,000 and 45,000 unique sequences that match the genome Lu et al. 2006;Qi et al. 2006). In order to more broadly characterize the genomic distribution of loci that produce small RNAs in Arabidopsis, we used high-throughput pyrosequencing to obtain >340,000 unique small RNA sequences that matched the nuclear, plastid, or mitochondrial genomes. Analysis of this data set provided insights into the evolution, genomics, expression, biogenesis, and function of small silencing RNAs in Arabidopsis. A diverse set of endogenous small RNAs We adapted our small RNA purification and sequencing protocol, designed to identify RNAs with the size and covalent structure (5Ј phosphate and 3Ј OH) of DCL products (Lau et al. 2001), to take advantage of highthroughput pyrophosphate sequencing. Arabidopsis small RNAs were sequenced from libraries made from whole seedlings, rosette leaves, whole flowers, and siliques. These four runs yielded >1,500,000 reads. Those with recognizable flanking adaptor sequences and with lengths between 16 and 28 nt were compared with Arabidopsis nuclear, chloroplast, and mitochondrial genomes. Including another 4239 reads obtained by using conventional methods, 887,266 reads perfectly matched at least one locus and were analyzed further (188,954 from seedling, 186,899 from rosette, 205,649 from flower, and 305,764 from siliques). These 887,266 reads represented 340,114 unique, although sometimes partially overlapping, sequences (Table 1). About 65% (221,676) of the unique sequences were only sequenced once. The distribution of lengths and 5Ј nucleotides for the set of singletons and for the set of sequences with multiple reads were virtually identical (data not shown), suggesting that the two sets represented similar classes of small RNAs. Comparison with a data set of 77,434 unique 17-nt MPSS signatures representing small RNAs that match the Arabidopsis genome (Lu et al. 2005) found only 13,596 unique signatures that matched the first 17 nt of at least one of our unique reads. Together, the preponderance of singletons in our library and the modest overlap with the MPSS data set indicated that deep sequencing approaches were still far from saturating the small RNA pools in Arabidopsis. Although many small RNAs expressed in Arabidopsis remained unidentified, our data set represented a substantial increase in the known diversity of small RNAs matching the genome. The most abundant reads corresponded to conserved, previously identified miRNAs As expected, known miRNAs were the sequences most redundantly retrieved from the pool, boasting the highest read frequency of all small RNA classes and 15% of the total ( Table 1). All of the miRNA families known to be conserved to poplar and rice (20 families) or just poplar (one additional family; Jones- Rhoades et al. 2006) were represented among our reads, with frequencies as high as 36,093 (miR167). Even the stress-inducible miRNAs miR395 and miR399, previously undetectable in plants grown under normal conditions (Jones- Fujii et al. 
2005), were represented (13 and 580 reads, respectively), suggesting that some other miRNAs induced in specific conditions might also be represented by multiple reads in our data set. For a few of the miRNAs, including miR319a/b, the sequenced species differed slightly from the annotated species, suggesting refinements of the annotated species (Supplementary Database 1). Most previously identified conserved miRNA families have multiple, paralogous loci, which combined total 92 loci (Jones- Rhoades et al. 2006). In some cases, members of the same family have slightly different sequences, which can sometimes target distinct sets of messages (Schwab et al. 2005). In other cases, paralogous loci give rise to identical mature miRNAs, raising the question of which paralogs are expressed. Mapping the miRNA* species, rare side products, or degradation fragments unique to a single paralog enabled us to confirm the expression of all but 13 of the 92 loci (Supplementary Database 1), including 19 whose transcription had not been previously confirmed, either by cloning or by mapping the 5Ј end of primary transcripts (Xie et al. 2005a). The exceptions were for loci for which no reads could be uniquely mapped. For most miRNAs, variants of the most abundant read were isolated with 5Ј or 3Ј heterogeneity, evidenced by missing or extra nucleotides at each terminus (Supplementary Database 1). Occasional slippage of DCL1 processing presumably gives rise to the extra bases, whereas missing nucleotides could result from slippage or end degradation. In contrast to metazoan miRNAs (Lau et al. 2001), we found that 5Ј heterogeneity was common in Arabidopsis miRNA pools and only slightly less prevalent than 3Ј heterogeneity, with no correlation between the extent of heterogeneity and the arm of the foldback that produced the miRNA (Supplementary Database 1). Plant miRNAs might tolerate more extensive 5Ј heterogeneity because seed pairing represents a smaller portion of their targeting specificity , whereas animal miRNAs truncated or extended by a single nucleotide at their 5Ј end would no longer recognize many normal targets and would instead recognize many other messages (Lim et al. 2005). The miRNA* species had ∼9% as many reads as the mature miRNAs, which was higher than the 1% observed in worms (Ruby et al. 2006). This percentage varied widely. For two of the 21 conserved families-miR395 and miR397, represented by 13 and 361 reads, respectively-no miRNA* species were observed (Supplementary Database 1). At the other extreme, miR403* was observed more frequently than mature miR403 (1643 and 66 reads, respectively). Mature miR403 directs cleavage of AGO2 mRNA (Allen et al. 2005) and is more easily detected by RNA blotting than is miR403* (H. Vaucheret, unpubl.). We infer that sequencing abundance does not always correlate with in vivo abundance, which in any event might not always predict the functional strand. Several characteristics of Arabidopsis miRNAs and their foldbacks emerged from analysis of reads corresponding to miRNA loci that had previously been confidently identified. First, relatively few unique nonoverlapping reads mapped to authentic miRNA foldbacks (Supplementary Database 1). Although DCL1 processing on some foldbacks appeared a little sloppy, it appeared at least globally very precise in that most reads centered on the miRNA/miRNA*, even for stems that were quite extensive. Second, the miRNA* sequence was observed for most loci. 
Third, <1% of reads mapped to the strand antisense to that giving rise to miRNA and miRNA* (Supplementary Database 1). Arabidopsis has many miRNAs lacking close orthologs in other sequenced plants In addition to the 21 conserved miRNA families, another five apparently nonconserved miRNA genes (miR158, miR161, miR163, miR173, and miR447) have been confidently identified in Arabidopsis (Jones- Rhoades et al. 2006). Each was represented among our reads, with read frequency ranging from 29 for miR447 to 10,573 for miR161 (Supplementary Database 1). How many additional nonconserved miRNAs might exist in Arabidopsis? The multitude of other endogenous small RNAs, some of which derive from regions with fortuitous potential to fold into miRNA-like hairpins, has complicated miRNA identification in plants, leading to the suggestion that biogenetic requirements be confirmed using mutant backgrounds prior to annotation (Jones- Rhoades et al. 2006). High-throughput sequencing offered an alternative approach for distinguishing miRNAs from other small RNAs. Candidates from loci with a substantial number of reads deriving from the antisense strand can also be excluded because such antisense reads suggest origin from a perfect dsRNA rather than a hairpin. For the remaining candidates meeting the conventional hairpin-pairing criteria, a sequenced miRNA* species, especially one with 2-nt 3Ј overhangs, provides strong evidence that a candidate originates from a DCLprocessed stem-loop. As a result, demonstrating that the candidate accumulates in prescribed mutant backgrounds becomes less important, which is particularly helpful for miRNAs difficult to detect on blots. Using these criteria, we identified 38 additional Arabidopsis miRNA families, thereby increasing by 2.5-fold the known diversity of miRNAs in Arabidopsis (Fig. 1A Table 2 and Supplementary Database 2 also include a 39th miRNA, miR391, which was absent in miRBase version 7.1). To receive miRNA designation, a miRNA* species (or close variant if the miRNA was sequenced at least three times) must have been observed among the reads, with the exception of miR823, which was validated using RNA blotting and the conventional set of mutants (below). The most abundant read on the foldback was deemed the miRNA, although in cases where read density was roughly equivalent for the most abundant reads from each arm of the foldback, both were together annotated to represent the new miRNA locus (using the 5Ј and 3Ј designations adopted in similar cases for metazoan miRNAs). Many other miRNAs might exist in Arabidopsis; another 40 candidates mapped to plausible hairpins but lacked reads representing the miRNA* species (Supplementary Table 4). Four particularly compelling candidates, each sequenced >25 times ( Table 2, Candi-dateA-CandidateD), were carried forward in subsequent analysis, anticipating that they will eventually be validated. A search in plant expressed sequence tags (EST) data sets, and the Oryza sativa (rice) and Populus trichocarpa (poplar) genomes, revealed potential orthologs for only one of the newly identified miRNAs, miR828, which had recognizable orthologs in poplar and leafy spurge, each with one substitution in the mature miRNA (Supplementary Fig. 2). For all other newly identified miRNAs, potential orthologs were either absent in sequenced genomes or found only after relaxing the homology criterion to allow three point substitutions. 
However, most, if not all, of these candidates appeared to be false positives, because at this stringency an equivalent number of hits was found that satisfied the homology and pairing criteria but mapped to the nonhomologous arms of predicted hairpins. We concluded that most of the newly identified miRNAs do not have identifiable orthologs in the sequence databases and henceforth refer to them all as "nonconserved," while recognizing that a few might have divergent orthologs difficult to identify with confidence, and that many might have orthologs in unsequenced species more closely related to Arabidopsis thaliana. Mirroring the search for orthologs, we found no convincing Arabidopsis paralogs of the newly identified miRNAs. Although screening was performed on 20-to 24-nt reads, without preference for a particular length or 5Ј nucleotide, 74% of the newly identified miRNA loci encoded a 21-nt miRNA, and 87% encoded a miRNA beginning with a U (Table 2). Thus, these characteristics of the conserved miRNAs were shared by the newly identified miRNAs. Some intriguing tissue specificities were also evident ( Table 2). For example, miR771 and miR839 were sequenced primarily from flowers, miR391 and miR825 appeared preferentially in rosette leaves, miR822 and miR842 were pref- The miRNA hairpins of A, shown in bracket notation with a tally of reads mapping to the hairpin and nucleotides colored as in A. (C) RNA blots demonstrating that accumulation of six detectable miRNAs depended on DCL1, not DCL2, DCL3, DCL4, RDR2, or RDR6. As a loading control, blots were stripped and reprobed for U6. (D) Sequencing frequencies of Arabidopsis miRNA families. Shown are cumulative plots for all Arabidopsis miRNA families (red squares), conserved families (violet diamonds), and all families plus the 40 sequenced candidates (gray triangles). Fourteen families, 11 of which were conserved, were sequenced at a frequency of greater than one per 1000 (dashed line). GENES & DEVELOPMENT 3411 Cold Spring Harbor Laboratory Press on February 19, 2020 -Published by genesdev.cshlp.org Downloaded from Two GIF transcription factors (8) miR837 ( The sequencing of a miRNA* species is denoted "Yes" if the perfect match to the miRNA* (with the 2-nt 3Ј overhangs typical of DCL products) was sequenced and was the most abundant read from that arm of the hairpin, "Yes*" if the perfect miRNA* was sequenced but was not the most abundant read from that arm of the hairpin, "Yes**" if only a close heterogeneous variant of the perfect miRNA* was sequenced, and "No" if no star species was recovered. d AGI codes are given for genes with top-scoring target sites for the miRNA. A complete list of predicted mRNA targets of newly identified miRNAs, along with the associated target site alignments and scores, is given in Supplementary Database 4. e Protein products of predicted targets in the best score class. The total number of predicted targets falling within the cutoff is given in parentheses if more than one target was predicted. f Reported as a miRNA in Xie et al. (2005a) but was not annotated in miRBase version 7. g Locus was reported as a miRNA in Lu et al. (2006). Rajagopalan et al. erentially sequenced in seedlings, and miR828 was sequenced most often from siliques. For some with the most striking specificities, we speculate that expression might be at a high level within just a few specialized cells within that organ. 
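A minimal sketch of the read-pattern screen described above: a candidate hairpin locus is kept only if nearly all of its reads come from one genomic strand (a substantial antisense fraction instead suggests processing of a perfect dsRNA, that is, an siRNA locus), and the dominant read on the sense strand is reported as the putative miRNA. The record layout, the coordinates and the 10% antisense cutoff are hypothetical; the text above does not give an explicit threshold.

```python
from collections import Counter

def screen_candidate_locus(reads, antisense_cutoff=0.10):
    """reads: list of (five_prime_pos, strand, count) for one candidate hairpin locus.
    Returns a summary used to keep or discard the locus as a miRNA candidate."""
    total = sum(c for _, _, c in reads)
    antisense = sum(c for _, s, c in reads if s == '-')
    frac_antisense = antisense / total if total else 0.0

    # dominant read on the sense strand (putative miRNA) and its share of sense reads
    sense = Counter()
    for pos, s, c in reads:
        if s == '+':
            sense[pos] += c
    top_pos, top_count = sense.most_common(1)[0] if sense else (None, 0)

    return {
        'total_reads': total,
        'frac_antisense': frac_antisense,
        'dominant_5p_position': top_pos,
        'dominant_read_fraction': top_count / max(total - antisense, 1),
        'keep_as_miRNA_candidate': frac_antisense < antisense_cutoff,
    }

# toy locus: reads piled on two positions of the sense strand, two antisense reads
example = [(1001, '+', 90), (1002, '+', 10), (1063, '+', 20), (1040, '-', 2)]
print(screen_candidate_locus(example))
```

A sequenced miRNA* with 2-nt 3' overhangs, which this sketch does not attempt to evaluate because that requires the predicted hairpin structure, remains the decisive piece of evidence described above.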
DCL4 processes some Arabidopsis miRNAs Most of the newly identified miRNAs were infrequently recovered by deep sequencing, with median read frequencies of only 13, compared with 731 for the conserved families, suggesting that in plants nonconserved miRNAs are generally expressed at low levels or primarily in specific cells or growth conditions. When RNA from plants grown under normal laboratory conditions was blotted and probed for the 10 most abundant newly identified miRNAs, only eight could be detected using either DNA or LNA probes. Accumulation of six of these eight miRNAs displayed the classical biogenetic profile of DCL1-dependency, with insensitivity to defects in any of the other DCL enzymes or RDR proteins (Fig. 1C). In contrast to the previously characterized Arabidopsis miRNAs, accumulation of two of the eight miRNAs detectable on RNA blots depended on DCL4, not DCL1 ( Fig. 2A). One, miR822, was previously classified as an siRNA (ASRP1729) because it accumulates in dcl1 plants (Allen et al. 2004;Xie et al. 2004). Both miR822 and miR839 were insensitive to defects in RDR2 and RDR6, as expected for RNAs that derive from hairpins rather than dsRNA. Further supporting a hairpin precursor structure was the pattern of reads from these loci (Fig. 2B). Over 99% of reads arose from one strand, with only two of the 1892 MIR822 reads and one of the 332 MIR839 reads deriving from the antisense strand-a pattern inconsistent with a perfect dsRNA intermediate. Furthermore, the major species from each arm of the predicted foldbacks paired to each other, with 2-nt 3Ј overhangs observed for the miR822:miR822* duplex. Although the cleavage precision did not appear to match that of DCL1, this preferred processing from a localized region of an RNA hairpin stem satisfied the defining feature of miRNAs. We concluded that transcripts from a few miRNA loci are processed by DCL4 rather than by DCL1. The dependency on DCL4 for miR822 and miR839 accumulation appeared even higher than that for tasiRNA accumulation; in the absence of DCL4, ta-siRNA precursors are processed into 22-nt and 24-nt species by DCL2 and DCL3, respectively (Gasciolli et al. 2005;Xie et al. 2005b), whereas miR822 and miR839 species are not detectable in either dcl4-1 or dcl4-2 plants ( Fig. 2A; data not shown). Predicted targets of newly identified miRNAs Conserved miRNA targets can be predicted with very high confidence, whereas in single-genome analyses only the more extensively paired interactions can be predicted with reasonable confidence (Jones- . To better predict nonconserved interactions, scoring rubrics have been developed that preferentially penalize mismatches to the 5Ј and central regions of the miRNA (Allen et al. 2005;Schwab et al. 2005), which are more disruptive than those to the 3Ј region of the miRNA . When applying the rubric of Allen et al. (2005) in a single-genome search to predict targets of 22 unrelated miRNAs, scoring cutoffs that captured 86% of the experimentally confirmed targets of these miRNAs gave a ratio of authentic to falsepositive predictions of 6.9:1, estimated by summing the number of targets predicted for the miRNAs and comparing with the average predicted for 10 shuffled cohorts. Using these score cutoffs, we applied the rubric to predict targets of the newly discovered miRNAs, achieving a lower, although still significant, estimated signal:noise ratio of 3.0:1 (Table 2; Supplementary Database 4). 
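A simplified sketch of a position-weighted penalty score in the spirit of the rubric of Allen et al. (2005) cited above: mismatches are penalized more than G:U wobbles, and penalties are doubled over the 5' and central positions of the miRNA, the region most disruptive when mispaired. The particular weights used here (mismatch 1, wobble 0.5, doubling over positions 2-13, bulges ignored, gap-free alignment) are assumptions of this sketch rather than the published parameters, and the example uses the canonical miR156 sequence purely for illustration.

```python
def pair_type(mi_base, target_base):
    """Classify the pair formed by a miRNA base and the target base facing it."""
    watson_crick = {('A', 'U'), ('U', 'A'), ('G', 'C'), ('C', 'G')}
    wobble = {('G', 'U'), ('U', 'G')}
    pair = (mi_base, target_base)
    if pair in watson_crick:
        return 'match'
    if pair in wobble:
        return 'wobble'
    return 'mismatch'

def target_score(mirna, site, core=range(2, 14)):
    """Penalty score for a gap-free miRNA:target alignment; 0 is a perfect site.
    mirna and site are both written 5'->3'; reversing the site aligns its 3' end
    with the miRNA 5' end, as required for antiparallel pairing."""
    assert len(mirna) == len(site), "this sketch assumes a gap-free alignment"
    penalties = {'match': 0.0, 'wobble': 0.5, 'mismatch': 1.0}
    score = 0.0
    for i, (m, t) in enumerate(zip(mirna.upper(), reversed(site.upper())), start=1):
        p = penalties[pair_type(m, t)]
        score += 2.0 * p if i in core else p   # heavier weight on 5'/central positions
    return score

mir = "UGACAGAAGAGAGUGAGCAC"         # canonical miR156 sequence, example only
perfect = "GUGCUCACUCUCUUCUGUCA"      # exact reverse complement
one_off = "GUGCUCACUCUCUUCUAUCA"      # single change facing miRNA position 4 (core region)
print(target_score(mir, perfect), target_score(mir, one_off))   # 0.0 2.0
```

Estimating a signal-to-noise ratio then amounts to comparing the number of transcripts passing a score cutoff for the real miRNAs with the average obtained for shuffled cohorts, as described above.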
One explanation for the apparently lower specificity was that for six miRNAs, the miRNA and miRNA* species were difficult to distinguish from each other, and thus both were included in the target prediction analysis, recognizing that one of the strands might contribute only false-positive predictions. Similarly, two registershifted sequences of roughly equal abundance from the miR829 foldback were included. Another explanation might be that some of the newly identified miRNA families have fewer targets with extensive complementarity than do the previously identified families. Indeed, some might not have any biological targets, a subset of which might be "young" DCL1/DCL4 substrates whose processing will soon be lost in the course of neutral evolutionary drift unless a beneficial targeting interaction emerges first. Nonetheless, the prediction of three times as many targets as expected by chance suggested that many of the newly identified miRNAs down-regulate genes. Targets for three of the more abundant miRNAs were validated by 5Ј RACE (Supplementary Fig. 3). These were CMT3, a miR823 target that encodes a CpNpG DNA cytosine methyltransferase; AGL16, a miR824 target that encodes a MADS-box transcription factor; and MYB113, a miR828 target that encodes a MYB transcription factor. Predicted targets of the newly identified miRNAs included transcription factors in the MYB and AP2 families, which each have paralogs known to be targeted by previously identified miRNA families (Supplementary Database 4). In addition, members of transcription factor families not previously known to be regulated by Arabidopsis miRNAs, such as MADS-box, ERF (ethylene response factor), WHIRLY, and Dof (DNA binding with one finger) proteins, were among the predicted targets. F-box-containing proteins added to the list of known miRNA targets implicated in protein degradation. A PPR gene distinct from those known to be targeted by miR161, miR400, or TAS1 or TAS2 trans-acting siRNAs was also among the predictions. Other predictions extended the biological processes thought to be regulated by miRNAs. For example, nine jacalin lectins, predicted miR842 and miR846 targets, bind complex carbohydrates and are thought to be involved in initiating pathogen defense responses (Geshi and Brandt 1998). Histone variants, and epigenetic silencing machinery such as CMT3 and a bromo-adjacent homology (BAH) domaincontaining protein were predicted targets, suggesting that Arabidopsis miRNAs regulate transcriptional silencing pathway components in addition to targeting miRNA biogenetic and effector proteins like DCL1 and AGO1. Evolutionary origins of miRNA genes Some miRNAs might have arisen from duplication of their target loci, and if so, those that were recently derived might exhibit similarity to their targets that extends beyond the mature miRNA sequence, as observed previously for miR161 and miR163 (Allen et al. 2004). Six of the newly identified miRNA loci displayed extended sequence similarity with their predicted target genes, diagnostic of common origins. Both arms of the MIR822 gene were previously observed to have an extended alignment to several DC1 domain-containing genes (Allen et al. 2005). The same pattern was seen for MIR841 and MIR826 and their predicted targets (Fig. 3A,B). A different pattern was observed for MIR842 and MIR846, suggesting an alternative pathway for miRNA gene emergence. As illustrated for MIR846, these genes appeared to derive from two regions of their predicted targets, rather than one (Fig. 3C). 
The simplest explanation for the dual alignment to their targets, with the miRNA arm of the hairpin aligning to one region of the target and the miRNA* arm aligning to the other region, was that a duplication within the targets preceded the duplications that gave rise to the miRNA locus. Another interesting miRNA-target configuration involved MIR840, which was expressed from the opposite strand of its predicted target gene, AtWhirly3. This is an arrangement first observed for an Epstein-Barr Virus miRNA and its target (Pfeffer et al. 2004), but one that had not been seen in plants. AtWhirly3 encodes a homolog of potato p24, a known transcriptional regulator of plant defense and disease resistance genes. In the sense orientation, the miRNA was found within the annotated 3Ј untranslated region (UTR) of a PPR mRNA, At2g02750. Although both strands encode a hairpin, our reads did not include any small RNA sequences from the AtWhirly3 strand. Either the presumptive miRNA or its star sequence could target the AtWhirly3 3Ј UTR for cleavage. This implies a mechanism by which the expression of one member of a convergent gene pair influences the output of the other-a miRNA counterpart to that observed previously for a convergent gene pair that generates nat-siRNAs (Borsani et al. 2005). Of the 44 genes for the miRNAs and candidates listed in Table 2, 35 were in regions between annotated genes, as is typical of plant miRNA genes , whereas nine overlapped protein-coding genes. One was miR840, described above. Of the remaining eight, miR837, miR838, miR848, miR852, and CandidateD overlapped introns, in the same orientation as the protein-coding host gene-an arrangement that bypasses the need to acquire an independent promoter (Baskerville and Bartel 2005). Mature miR837 also had a second match in the genome, located within the same intron that contains the miR837 stem-loop but in the antisense orientation, suggesting that miR837 might target the pre-mRNA of its host gene, an oligopeptide transporter. miR841 derived from the strand antisense to the intron of At4g13570, a gene closely related to one of its predicted targets but itself not predicted because our search was limited to spliced messages. miR777 and miR834 were localized to the 5Ј UTR and 3Ј UTR of genes, respectively, with their foldbacks potentially extending into annotated protein-coding regions. Cold Spring Harbor Laboratory Press on February 19, 2020 -Published by genesdev.cshlp.org Downloaded from A homeostatic self-regulatory mechanism for DCL1 miR838 derived from a hairpin within intron 14 of the DCL1 mRNA (Fig. 4A). The foldback potential of this intron was previously noted, and RACE mapping of the DCL1 transcript revealed a 4.0-kb fragment whose 3Ј end terminates at the exon 14/15 junction, and a population of ∼2.5-kb fragments, some of which have 5Ј ends falling within intron 14 (Xie et al. 2003). Because small RNAs were not detected, the fragments were attributed to aberrant splicing at intron 14 (Xie et al. 2003). We propose that the presence of this intronic miRNA enables a self-regulatory mechanism that helps maintain DCL1 homeostasis (Fig. 4B). When nuclear DCL1 protein levels are high, the miRNA biogenesis machinery (including DCL1 and HYL1) could compete more efficiently than the splicing machinery for the DCL1 precursor transcript. 
If DCL1 began to process the miRNA hairpin before the intron 14 splice sites were defined and juxtaposed during spliceosome formation, then DCL1 expression would shift toward a pool of truncated, nonfunctional DCL1 transcripts, thereby providing a regulatory feedback mechanism that supplements miR162-directed cleavage (Fig. 4C). 5Ј RACE confirmed that a population of fragments had 5Ј ends terminating at the ends of miR838 (Fig. 4A). The low abundance of the miRNA can be explained by the idea that four linkages must be cut to generate the miRNA:miRNA* duplex, whereas just a single cut bisects the mRNA. Perhaps very little of the duplex is fully excised, and as a result the miRNA never accumulates to sufficient levels to direct efficient target cleavage. We suggest that the processing of other intronic miRNAs might also influence the expression of their host genes-speculation bolstered by the presence of a conserved miRNA-like hairpin in the mammalian DGCR8 gene, whose protein product functions in pri-miRNA processing (Pedersen et al. 2006). A newly identified tasiRNA gene in Arabidopsis In a search for tasiRNA loci, we implemented a clustering algorithm that scanned the genome for phased clusters of ∼21-nt reads. This procedure found all five of the previously identified Arabidopsis tasiRNA genes (TAS1a, TAS1b, TAS1c, TAS2, and TAS3) (Supplementary Database 3) and discovered an additional locus, TAS4 (Fig. 5A), mapping between At3g25800 and At3g25790 (a MYB transcription factor). Because miRNA-directed cleavage sets the phase for and stimulates production of tasiRNAs (Allen et al. , we searched TAS4 for miRNA complementary sites upstream of and downstream from the region that generated small RNAs. It identified a single miR828 complementary site. Cleavage at this site, validated by 5Ј RACE, defined a 5Ј terminus that matched that of the most proximal siRNAs arising from this locus and was in perfect register with the other predominant siRNAs (Fig. 5A). The EST mapping to this region (AU226008) corresponded to the opposite strand of the inferred primary transcript and presumably represented the RDR6polymerized strand. Although poplar ESTs with miR828 complementary sites were found, conservation of AtTAS4 to poplar was unclear. We also predicted three targets for TAS4-siR81(−), one of the dominant TAS4 siRNAs. The predicted targets, PAP1/MYB75/At1g56650, PAP2/MYB90/At1g66390, and MYB113/At1g66370 (Fig. 5B), encoded three MYB transcription factors that were distinct from the MYB genes targeted by miR159, and those with complementarity to miR835 and CandidateD. PAP1 and PAP2 regulate expression of anthocyanin/flavenoid and phenylpropanoid biosynthetic genes, and might also be involved in regulating leaf senescence (Borevitz et al. 2000;Pourtau et al. 2006). Intriguingly, miR828 was also predicted to down-regulate MYB113 at an independent target site (Fig. 5B), suggesting a close functional evolutionary relationship among these MYB target genes, miR828, and the TAS4 cluster. Using 5Ј RACE, we identified mRNA cleavage fragments diagnostic of miR828-directed cleavage of MYB113 and tasiRNA-directed cleavage of PAP2, thereby experimentally confirming these predicted targets and demonstrating that the TAS4 locus was indeed trans-acting. When considered as a group, the 10,469 reads from all six TAS genes were predominantly 21 nt and tended to begin with a uridine, as also observed for Arabidopsis miRNAs (Supplementary Fig. 1). 
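A minimal version of the phasing scan described above: the 5' ends of ~21-nt reads falling in a sliding window are binned by position modulo 21, and the window is flagged when a single register captures a disproportionate share of the reads, as expected for a transcript cleaved in phase from a miRNA-defined terminus. The window length, the read threshold, the register fraction and the example coordinates are illustrative choices, not the parameters actually used for the genome scan.

```python
from collections import Counter

def phasing_signal(read_starts, period=21):
    """Best 21-nt register and the fraction of reads whose 5' ends fall in it."""
    if not read_starts:
        return None, 0.0
    registers = Counter(pos % period for pos in read_starts)
    best, count = registers.most_common(1)[0]
    return best, count / len(read_starts)

def find_phased_windows(read_starts, window=231, step=21, min_reads=10, min_fraction=0.5):
    """Slide a window along the read positions and report windows dominated by one register."""
    positions = sorted(read_starts)
    hits = []
    if not positions:
        return hits
    for left in range(positions[0], positions[-1] + 1, step):
        in_window = [p for p in positions if left <= p < left + window]
        if len(in_window) < min_reads:
            continue
        register, fraction = phasing_signal(in_window)
        if fraction >= min_fraction:
            hits.append((left, register, fraction, len(in_window)))
    return hits

# toy data: reads phased every 21 nt from position 5000 (3 reads per position),
# plus unphased background reads elsewhere
phased = [5000 + 21 * i for i in range(10) for _ in range(3)]
background = list(range(4000, 4200, 7))
print(find_phased_windows(phased + background))
```

Only the windows covering the phased block are reported; the unphased background never concentrates in one register and is ignored.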
MicroRNA and synthetic siRNA duplexes assemble into the silencing complex asymmetrically, such that the strand that pairs with less stability at its 5Ј terminus is incorporated as the guide strand, while the other strand is degraded (Khvorova et al. 2003;Schwarz et al. 2003). Analysis of the initial 12 siRNA reads from TAS1a suggests that tasiRNAs also obey the asymmetry guidelines ). The acquisition of many additional tasiRNA reads enabled us to revisit this issue. For each of the six TAS genes, each possible duplex represented by a 21mer read (including those out of register with the dominant phasing register) was considered and evaluated for which strand of the duplex yielded more reads. For 57% of the duplexes with energetically distinct terminal base-pairing, the strand with more reads was the one that appeared to be least stably paired at its 5Ј terminus. Confounding this analysis, however, was the preference for a U at tasiRNA 5Ј termini. If assembly or stability of the silencing complex simply preferred a U at the 5Ј terminus of the guide strand, without regard for the differential pairing stabilities at the duplex ends, then there would more frequently be an A:U pair at the 5Ј terminus of the guide strand than at the 5Ј terminus of the passenger strand, thereby generating artifactual adherence to the pairing asymmetry guidelines. Indeed, the weak adherence vanished when repeating our analysis considering only the duplexes with an A or U at the 5Ј termini of both the guide and passenger strands. Apparently, the pairing asymmetry guidelines do not apply for tasiRNAs. Figure 5. The TAS4 locus gives rise to tasiRNAs predicted to down-regulate MYB transcripts. (A) The number of reads with a 5Ј terminus at each position is plotted. Bars above the axis represent sense reads; those below represent antisense reads. The miR828 complementary site is marked by a red arrow and shown below the graph, together with the fraction of 5Ј RACE clones supporting the indicated cleavage site. TAS4-siR81(−), the siRNA predicted to target MYB genes, is indicated (blue bar), as is the spacing separating the phased species at each interval; spacing for the species not represented by reads is indicated in gray. (B) TAS4-siR81(−) and miR828 complementary sites in three MYB genes. Cleavage confirmed by 5Ј RACE is indicated (arrows), along with the fraction of clones mapping to the cleavage site. The remaining 14 PAP2 clones mapped >20 nt from the cleavage site. GENES & DEVELOPMENT 3417 Cold Spring Harbor Laboratory Press on February 19, 2020 -Published by genesdev.cshlp.org Downloaded from Other endogenous siRNAs in Arabidopsis mapped predominantly to intergenic regions After removing the RNAs that corresponded to the sense strand of ribosomal RNAs (rRNAs), transfer RNAs (tRNAs), small nuclear RNAs (snRNAs), and small nucleolar RNAs (snoRNAs) ( Table 1; Supplemental Material), and those matching previously annotated and newly identified miRNAs and tasiRNAs, we considered the remaining RNAs that matched the nuclear genome. These included a majority (63%) of the reads and a large majority (90%) of the unique sequences (Table 1). Of the reads that did not match noncoding RNA transcripts, only 10% corresponded solely to annotated mRNAs or introns, which represented substantial depletion when considering that 49% of the sequenced genome is annotated as mRNA and intron. Of those that hit annotated mRNAs or introns, ∼46% were exclusively in the antisense orientation to a protein-coding gene. 
The length and 5Ј nucleotide profiles of small RNAs mapping exclusively to the sense strand of genes closely resembled that of small RNAs mapping exclusively to the antisense strand of genes ( Supplementary Fig. 1G,H), suggesting that a majority of the sense as well as antisense reads might be siRNAs. We considered them, together with the other small RNAs that did not match noncoding RNA transcripts, as endogenous siRNA candidates. The candidate siRNAs included 20,720 reads that mapped to the antisense of rRNAs or to ribosomal DNA (rDNA)-like repeats but not to the mature rRNA sequences (Table 1). These are likely to include bona fide siRNAs acting by targeting the rDNA arrays for chromatin or histone modifications Pontier et al. 2005;Li et al. 2006;Pontes et al. 2006). Because the fraction of the genome comprised by the rDNA arrays, as well as their copy number, was unknown, and because much of the rDNA sequence was missing from the current assembly, it was difficult to determine if this represented an enrichment. The candidate siRNAs were mostly 24mers, the size of siRNAs associated with PolIV, heterochromatin formation, and DNA methylation (Chan et al. 2004;Xie et al. 2004;Herr et al. 2005;Kanno et al. 2005;Onodera et al. 2005;Pontier et al. 2005). As expected based on initial sequencing efforts (Tang et al. 2003), the 24mers were enriched for a 5Ј-terminal adenosine ( Supplementary Fig. 1E,F). As for the tasiRNAs, a tendency to adhere to the pairing asymmetry guidelines for silencing complex assembly was observed only in a naïve analysis that did not consider the presumably independent preference for a particular nucleotide at the 5Ј termini of the siRNA reads. When correcting for the preference for an A at the 5Ј terminus of the candidate siRNAs, the strand with less stable pairing at its 5Ј terminus was sequenced no more frequently than expected by chance. Small RNAs matching annotated protein-coding genes Some protein-coding genes had a particularly high propensity for spawning small RNAs (Supplementary Table 2). Eleven of the 20 genes most frequently hit, when read counts were normalized by the number of genomic hits and assigned only when unambiguously sense or antisense to genes, were convergently transcribed with a neighboring gene. These included an antisense gene pair, At2g16580/At2g16575, ranked 16th and 19th. Both genes have ORFs with unknown functions, with one ORF falling largely within the intron of the other. Convergent, overlapping transcription presumably generated dsRNA from which the small RNAs were derived. For the nine remaining genes in a convergent context, the 3Ј termini were either uncharacterized or had nonoverlapping annotation. RdRPs provide another mechanism for generating dsRNA. A search for reads that matched a cDNA database but failed to match the genome found 32 reads that spanned mRNA splice junctions in the antisense orientation (Supplementary Table 3). Such reads provided evidence for siRNAs generated by RdRP acting on a spliced mRNA template. The two cDNAs with the most nonoverlapping antisense hits to splice junctions encoded a TIR-NBS-LRR disease-resistance protein (At5g38850) and a basic helix-loop-helix protein (At3g23690). Both were among the top 20 genes hit by small RNAs (Supplementary Table 2). Neither had hits to introns, indicating that for these two genes the RdRP activity acted primarily on spliced templates. However, many of the genes frequently hit by small RNAs had hits to introns. 
Moreover, many more small RNAs matched the sense strand of mRNA splice junctions (641) than matched the antisense (Supplementary Table 3). The 20fold difference between sense and antisense reads to splice junctions, compared with the nearly even numbers of sense and antisense reads matching protein-coding genes more generally, suggests that if RdRPs play a major role in producing siRNAs from protein-coding regions, then the templates are usually unspliced transcripts. Candidate siRNAs derived preferentially from pericentromeric regions, with a slight preference for repeats When candidate siRNA reads were mapped to each chromosome, plotting sequencing abundance normalized by the number of genome matches (Fig. 6), the largest peak included mostly 22mers and corresponded to a cluster of Gypsy and MuDr elements on the long arm of Chromosome 3. The next two largest peaks corresponded to two rDNA arrays with neighboring repetitive elements on Chromosome 3 and Chromosome 2. Most of the remaining small RNAs were 24mers that mapped to numerous loci dispersed throughout the genome (Fig. 6). Although the Arabidopsis genome is relatively compact, repetitive loci are abundant. They are most dense at and near the centromeres, and their density gradually tapers off in the 2-3 Mb on both sides of each centromere as protein-coding density increases (Fig. 6). Along the remainder of the chromosomes, repeats are present but occur at much lower density. The siRNA density did not peak at the same regions as the repeat density peaked and was instead greatest in the proximal and distal pericentromeric regions, characterized by an intermediate density of both repeats and annotated protein-coding genes (Fig. 6). A look at a diagnostic centromeric repeat class, the ∼180-base-pair (bp) repeat satellite arrays (Copenhaver et al. 1999;Nagaki et al. 2003), illustrated this result. Only 2386 reads (1055 unique sequences) matched ∼180-bp centromeric satellite repeats annotated by RepeatMasker. This was 0.43% of our reads, whereas the annotated ∼180-bp repeat represented 0.39% of the current genome assembly. Because many ∼180-bp repeats are missing from the assembly, this slight apparent enrichment was undoubtedly an overestimate; siRNAs deriving from unassembled repeats would artifactually add to the perceived density at any assembled repeats that they match. The unremarkable correspondence between candidate siRNAs and known heterochromatin was also illustrated at the heterochromatic knobs, which were rich in candidate siRNAs, but not more enriched than were the pericentromeric regions that surrounded them (Fig. 6). The observation that siRNAs were often associated with repeats, but were not highest where repeats were most dense, raised the question of whether siRNAs derived preferentially from repeat loci. Transposons, retro-Figure 6. Normalized abundance of candidate siRNAs in 0.1-Mb windows spanning the nuclear genome. Colored bars above the axis represent matches to the plus strand; colored bars below the axis represent those to the minus strand, with the colors indicating the proportion of 21mers (red), 24mers (light and dark blue), 24mers with a 5Ј A (light blue), and other lengths (yellow). Below the siRNA profiles are histograms plotting the fraction of nucleotides falling within annotated protein-coding genes (black; scale, 0%-100%) and the fraction falling within repetitive elements annotated by RepeatMasker (gray; scale, 0%-100%). 
Centromeres are indicated by solid gray bars, and heterochromatic knobs are indicated by hashed gray bars. GENES & DEVELOPMENT 3419 Cold Spring Harbor Laboratory Press on February 19, 2020 -Published by genesdev.cshlp.org Downloaded from elements, and low-complexity sequences identified by RepeatMasker comprised ∼15% of the current genome assembly. Of the 558,481 candidate siRNA reads, 188,502 (34%) hit these regions annotated by Repeat-Masker-a modest, twofold enrichment over the 15% that would have been expected if the siRNAs derived uniformly from repetitive and nonrepetitive regions throughout the genome. The twofold enrichment was largely attributed to the depletion of both repeats and siRNA matches within annotated protein-coding genes. Of the 51% of the genome that fell between annotated protein-coding genes, ∼30% corresponded to repeats annotated by RepeatMasker. Of the 491,180 siRNAs mapping between annotated protein-coding genes, 188,502 (38%) hit regions annotated by RepeatMasker, indicating only a 1.2-fold preference for repeat regions within intergenic regions. The modest preference for repeat regions decreased further when excluding the 20,720 reads deriving from rDNA repeats. When considering only those RNAs associated with AGO4 (Qi et al. 2006), this slight enrichment increased, but not by much. Local (<100 kb) inverted-repeat regions did not appear to be overrepresented among genomic hits of intergenic siRNA candidates that fell outside of repetitive elements identified by RepeatMasker. Of the repetitive DNA detected by RepeatMasker, 80% was either class I (retrotransposon derived) or class II (DNA transposon derived). About 95% of the repeat-associated siRNAs corresponded to these two classes, in the proportion expected based on the contribution of these two classes to the genome. Representation of some of the more well-characterized transposable element families is listed in Supplementary Table 6. Small RNA hotspots corresponded to unannotated genomic regions Some intergenic loci had a high propensity to give rise to candidate siRNAs. To supplement the low-resolution analysis (Fig. 6), we performed a higher-resolution search for such siRNA hotspots and then surveyed the annotations corresponding to the top 20, which ranged in length from 0.5 to 50 kb. Although most were in the vicinity of mobile elements or low-complexity sequence, only one hotspot had a transposon at the densest region of siRNAs. Two of the top 20 were very near centromeres, and half were in pericentromeric regions, within 4 Mb of the centromeres. One hotspot, ranked 12th, corresponded to the 5S rDNA array on chromosome 2. Although the topmost-ranked hotspot corresponded to a predicted but unlikely ORF, the other highly ranked hotspots were typically lacking in annotated features within the region producing the majority of small RNAs and represented uncharacterized intergenic regions ( Supplementary Fig. 4). A preference for being in local (<100 kb) inverted repeats was not found among the top 20 hotspots, but three lower-ranking loci (ranking 27, 36, and 37) were found in an inverted context. Nine of the top 20 hotspots were in a convergent context with regard to flanking annotated genes. This was higher than might have been expected if convergent, nonconvergent, and divergent contexts were randomly distributed with respect to siRNA-generating loci. However, without mapped transcripts for these convergent flanking genes, the mechanism for siRNA production remains to be elucidated. 
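The density profiles discussed above rest on a simple normalization, spelled out in the following section: a sequence that matches the genome assembly at n positions contributes 1/n of a count to each match, so that multi-copy repeats are not over-counted, and the normalized counts are summed in fixed windows (0.1 Mb in Fig. 6). A minimal sketch with a hypothetical record layout:

```python
from collections import defaultdict

def windowed_density(alignments, window=100_000):
    """alignments: dicts with keys 'chrom', 'pos', 'read_count', 'genome_hits'.
    Each alignment contributes read_count / genome_hits to the window containing pos."""
    density = defaultdict(float)
    for a in alignments:
        weight = a['read_count'] / a['genome_hits']
        bin_start = (a['pos'] // window) * window
        density[(a['chrom'], bin_start)] += weight
    return dict(density)

# a 2-read sequence hitting the genome 200 times contributes one-hundredth of a count
# to each of its loci (the worked example below); a unique 5-read sequence contributes 5
alignments = (
    [{'chrom': 'Chr3', 'pos': 13_950_000 + i, 'read_count': 2, 'genome_hits': 200}
     for i in range(200)]
    + [{'chrom': 'Chr3', 'pos': 14_020_000, 'read_count': 5, 'genome_hits': 1}]
)
for (chrom, start), value in sorted(windowed_density(alignments).items()):
    print(chrom, start, round(value, 2))
```

A hotspot search of the kind described above can then rank windows, or merged runs of windows, by this normalized density.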
Endogenous siRNAs in Arabidopsis Perhaps the most surprising property of the candidate siRNAs was their underwhelming tendency to derive from repeat loci. Of course, RepeatMasker is limited to the identification of repetitive DNA with detectable homology with known repetitive element families, and cannot recognize genomic regions corresponding to novel transposable elements. Although some candidate siRNAs that did not match annotated repeats had multiple genomic matches, suggesting that they might derive from uncharacterized repeats (Table 1), most had only one hit. Moreover, unknown repeats would substantially increase the 1.2-fold enrichment found in intergenic regions only in the unlikely event that the repeats not yet identified were a far richer source of siRNAs than were known repeats. Part of the reason that repeats generally were not a more rich source of siRNAs was that the regions within and immediately flanking the centromeres, which are mostly annotated repeats, were somewhat depleted in siRNAs when compared with the more distal pericentromeric regions, which had only an intermediate density of repeats (Fig. 6). This observation differed from the report that siRNA density closely mirrors repeat density (Lu et al. 2005). We attribute this apparent contradiction to our normalization of read counts based on the number of times the sequence hit the genome assembly. That is, if a sequence with two reads hit the genome 200 times, we assigned one-hundredth of a count to each locus, rather than two counts to each locus. Our approach attempted to reflect both the fact that a given molecule cannot arise simultaneously from more than one locus, and recent results showing that heterochromatic siRNAs act preferentially at the locus of origin (Buhler et al. 2006), while at the same time leaving ambiguous which repeat locus gave rise to a particular siRNA molecule. Our finding that siRNAs were less abundant at the centromeres, compared with the pericentromeric regions, was reminiscent of the heterochromatic siRNAs of Schizosaccharomyces pombe, which map to the heterochromatic outer repeats of the centromeres but not to centromere cores . We suggest that heterochromatic siRNAs might function primarily near the boundaries of heterochromatin and euchromatin and play less of a role within large stretches of heterochromatin at the centromeres. Pairing asymmetry, known to influence incorporation of miRNAs and synthetic siRNAs into silencing complexes (Khvorova et al. 2003;Schwarz et al. 2003), had no detectable correlation with accumulation of tasiRNAs and other Arabidopsis siRNA candidates. The same was found when we analyzed (data not shown) a set of Arabidopsis transgene siRNAs previously reported to follow the guidelines (Khvorova et al. 2003). Therefore, for no known cases in animals or plants do endogenously expressed siRNAs preferentially follow the asymmetry guidelines. One explanation might be that most siRNAs in the cell are in the duplex configuration and have not been loaded into the silencing complex. However, no preference was observed when repeating the analysis with a recently reported set of Ago4-associated siRNAs. Therefore, we favor the notion that for most if not all classes of endogenous siRNAs, pairing asymmetry plays little or no role in deciding which strand of the duplex serves as the guide strand. 
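The hits-normalized counting described above, in which a sequence with two reads and 200 genomic matches contributes one-hundredth of a count to each locus, can be sketched as follows. The data structures and names here are assumptions made for illustration, not the authors' code.

```python
from collections import defaultdict

def normalized_locus_counts(read_counts, genome_hits):
    """read_counts: {sequence: number of reads}; genome_hits: {sequence: [locus, ...]}."""
    per_locus = defaultdict(float)
    for seq, n_reads in read_counts.items():
        hits = genome_hits.get(seq, [])
        if not hits:
            continue
        share = n_reads / len(hits)      # spread the reads evenly across all matching loci
        for locus in hits:
            per_locus[locus] += share
    return per_locus

# Example matching the text: 2 reads, 200 genomic hits -> 0.01 of a count per locus.
counts = normalized_locus_counts({"AAGCT": 2}, {"AAGCT": [f"chr3:{i}" for i in range(200)]})
print(next(iter(counts.values())))   # 0.01
```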
MicroRNAs are a different story; in every plant or animal species examined, miRNA accumulation tends to follow the asymmetry guidelines, even after accounting for their propensity to begin with a U (data not shown). In mammals, synthetic siRNAs also obey the asymmetry guidelines, presumably because vertebrate cells recognize and utilize a synthetic siRNA duplex as if it were an endogenous miRNA duplex. Perhaps more important than pairing asymmetry for endogenous siRNAs is the identity of the 5Ј residue. For plant 24mer siRNAs, a 5Ј-terminal A may favor incorporation or stabilization within the silencing complex, whereas for tasiRNAs, a 5Ј U may do the same. The identity of the 5Ј nucleotide might also influence the incorporation or stability of miRNAs, which would help explain the strong preferences for U over A and C over G observed at their 5Ј termini in all species. A diverse set of newly emergent miRNAs Many miRNA candidates have been proposed over the last few years, some of which have been published and annotated in miRBase as authentic Arabidopsis miRNAs. Our large data set provided an opportunity to evaluate these candidates and the methods used to identify them. Beyond the 97 confidently identified genes, none of the other current Arabidopsis miRNA annotations (miRBase version 7.1) were supported by our data from wild-type plants grown under standard conditions; some of these proposed hairpins matched reads but in a pattern suggestive of endogenous siRNAs (Supplementary Database 1). Furthermore, none of the mature miRNA and candidate sequences of Table 2 matched recently proposed computational candidates, although for seven of 592 recent miRNA predictions (Lindow and Krogh 2005) there was some overlap, which ranged between 7 and 19 nt (Supplementary Database 2). Apart from homologs of known miRNAs, it appears that the only plant miRNAs to have been identified computationally and subsequently confirmed experimentally were those initially reported by Jones- , at a time when computational searches that required evolutionary conservation could still be productive because some highly conserved miRNAs remained to be found. miR771, miR772, miR775, miR777, miR779, and one of our candidates (CandidateI) corresponded to miRNA hairpins reported while our paper was in review . For MIR772, the species we annotated as the miRNA appears to be the miRNA*; for MIR779, the species we sequenced more frequently and annotated as the miRNA derived from a different portion of the hairpin than did miR779.1. Five of our newly identified miRNAs were in a set of 86 candidates previously suggested by analysis of MPSS signatures (Lu et al. 2005) and whose sequences were provided by B. Meyers (pers. comm.). Of the 38 newly identified miRNAs, only one, miR828, was clearly conserved in other sequenced genomes. The inferred emergence of the new miRNAs after the divergence of the eurosids I (represented by Arabidopsis) and II (represented by poplar) ∼90 million years ago (Wikstrom et al. 2001) significantly changes our view of miRNAs in plants. Previously, the proportion of known miRNAs that were conserved among eudicots (Arabidopsis and poplar) was quite striking-92 of the 97 known genes, 21 of the 26 known families. With respect to the number of microRNA molecules in wild-type plants, this domination by conserved miRNAs still holds, in that >87% of the miRNA molecules we sequenced were conserved throughout sequenced flowering plants. 
However, with respect to the diversity of plant miRNAs, the picture has dramatically broadened to encompass twice as many nonconserved miRNA families as conserved. In addition to the previously known set of highly conserved miRNAs, each typically expressed from multiple genes at high levels, we now know of a much more evolutionarily flexible set of miRNAs, each expressed from single genes at low levels or in very specialized tissues in plants grown under standard conditions. A plot of the cumulative distribution of sequencing frequency illustrates the relationship between conservation and expression that delineates these two sets of miRNAs (Fig. 1D). All but three of the 14 families sequenced at a frequency of greater than one per 1000 were conserved, whereas only 11 of the 51 families sequenced at a frequency less than one per 1000 appeared to be conserved. The identification and characterization of these additional miRNAs also expanded our view of plant miRNA biogenesis. At least two miRNAs, miR822 and miR839, depended on DCL4 rather than DCL1 for their accumulation. Just as DCL1, which is primarily responsible for miRNA biogenesis, can generate some siRNAs (Borsani et al. 2005;Bouche et al. 2006;Henderson et al. 2006), DCL4, which is primarily responsible for siRNA biogenesis, can generate some miRNAs. The imprecise cleavage of MIR161, which yields miR161 5Ј termini ranging over 16 nt, and the dual, apparently sequential processing of the MIR163 hairpin, which yields miR163.1 and miR163.2 (Kurihara and Watanabe 2004), both illustrate that DCL1 processing of some apparently young miRNA hairpins can be quite heterogeneous (Supplementary Database 1). DCL4-catalyzed cleavage appears even less precise, with a signature yielding numerous minor products often in phase with the miRNA:miRNA* duplex, suggestive of sequential processing after liberation of the miRNA:miRNA* (Fig. 2B). DCL4 can also process per-fect hairpins to generate transgene siRNAs (Dunoyer et al. 2005), which are presumably far less defined. To the extent that these transgene hairpins might resemble evolutionary precursors of some miRNAs (Allen et al. 2004), we suggest an adaptive switch from DCL4-to DCL1mediated processing during the course of miRNA gene emergence and evolution, which is driven by selective pressure for enhanced processing precision as the hairpin acquires substitutions and elevated expression, increasing both the probability and consequences of off-target repression. We suspect that the accumulation of some of the miRNAs that accumulate to levels insufficient to detect by RNA blot might also be DCL4 dependent. One attractive candidate would be MIR841, which appears to have emerged recently (Fig. 3) and for which registershifted variants were isolated (Supplementary Database 2). The nonconserved plant miRNAs presumably emerge and dissipate in short evolutionary time scales. Such rapid emergence of new genes is likely facilitated by the small size and simple architecture of miRNA genes. It could be further facilitated by mechanisms in which they can derive from their future targets ( Fig. 3; Allen et al. 2004), although it is unclear whether such mechanisms are relevant for most newly emergent miRNAs or just a minority of them. High-throughput sequencing of small RNAs from species closely related to Arabidopsis would help define the life span of these transient miRNA genes as well as the types of processes that they are particularly prone to control. 
We suspect that these processes will include those under strong positive selection, such as those involved in pathogen response and reproductive isolation. With the discovery of this diverse, evolutionarily fluid set of miRNAs sequenced at low frequency, the question arises as to how many more miRNAs remain to be reliably identified in Arabidopsis. Extrapolating from the sequencing frequencies of the conserved miRNAs, there is little reason to suspect that many more conserved families remain to be discovered (Fig. 1D). Indeed, the curve for the conserved miRNAs was already beginning to plateau with the identification of the first 13 plant miRNA families . The forecast is quite different for the nonconserved families, for which the curve shows no sign of a plateau, particularly when considering the 40 plausible candidates that appeared to derive from miRNA-like hairpins but did not meet our criteria for confident annotation because their miRNA* species had not been sequenced (Fig. 1D, gray symbols; Supplementary Table 1). Based on the large number of genomic segments with predicted potential to give rise to miRNA-like hairpins, it has long been easy to speculate that many nonabundant, nonconserved miRNAs might exist in a given plant or animal. For Arabidopsis, such speculation now has experimental support. Libraries and sequencing Wild-type Arabidopsis (Columbia accession) plants were grown under standard greenhouse conditions, except seedlings, which were grown as in . Total RNA was extracted (Mallory et al. 2001) from whole seedlings, flowers, rosette leaves, and siliques, harvested 6 d, 4 wk, 6 wk, and 2 mo after planting, respectively. Small RNA cDNA libraries were prepared for standard sequencing as in Lau et al. (2001) and for bead-in-well pyrophosphate sequencing as in Axtell et al. (2006). Pyrosequencing was performed at 454 Life Sciences. miRNA identification Twenty-nucleotide to 24-nt sequences with more than one read, 16 or fewer hits to the genome, and no matches to annotated noncoding RNA were folded using RNAfold with 330 nt of upstream and downstream flanking sequence. For efficiency, candidate reads were clustered and only the most abundant in a set of overlapping hits was considered. Structures were evaluated using mirCheck, a script that assesses the quality of a foldback based on a battery of parameters that capture known miRNA hairpins (Jones- . Hairpins that passed this initial filter were then manually screened. Designation as a miRNA required (1) a foldback in which the duplex region that included 25 nt centered on the most frequently sequenced read had less than eight unpaired nucleotides (summing unpaired nucleotides on both arms of the stem) and no more than three consecutive unpaired nucleotides, of which no more than two were asymmetrically bulged; (2) a sequenced miRNA* species (paired to the miRNA within the duplex with 2-nt 3Ј overhangs) or for candidates with three or more reads, a slight variant of the miRNA*; and (3) a sense:antisense read ratio >0.90. In practice, all but six foldbacks that passed manual inspection and were named as miRNA loci had a ratio >99% (Supplementary Database 2). MIR824, which has >330 nt between the miRNA and the miRNA*, was found in a separate analysis of genomic regions with abundant 21-nt reads. Phased siRNA discovery For each unique small RNA sequence (excluding those matching miRNAs, other noncoding RNAs, or protein-coding genes) a 500-nt window, anchored at one end by that sequence, was evaluated for phased small RNAs. 
If three or more unique 20- to 23-nt sequences with nonoverlapping hits existed in the window, each was evaluated for phasing with any of the others in the window, allowing ±2 nt of divergence from perfect 21-nt phasing. Phased sequences were extracted from the window and the process was repeated for any remaining 20- to 23-nt sequences until two or fewer unique nonoverlapping hits were left. Each potential phase was then evaluated according to five parameters: (1) a count of all unique sequences in phase; (2) a count of the reads in the window; (3) a hits-normalized score, whereby the sum of the read frequencies of all phased sequences was divided by the sum of their genomic hits; (4) a normalized 21mer score, which divided the sum of the 21mer reads in phase by the sum of the reads in the window; and (5) a phasing score, which divided the sum of the reads in phase by the sum of the reads in the window. Cutoffs for each score were empirically adjusted to find values that captured all known tasiRNA clusters but restricted the number of false positives.

Target site prediction for miRNAs and TAS4 siRNAs

Patscan was used to search for near matches (up to six mismatches, or four mismatches and one bulged nucleotide) in TAIR version 6.0 Arabidopsis cDNA database (http://www.arabidopsis.org) to each miRNA, and target sites were scored as described (Allen et al. 2005). To assess performance, we applied this algorithm to a control set of diverse Arabidopsis miRNAs, choosing the most frequently sequenced miRNA variant to represent each known miRNA family for which mRNA targets have been experimentally validated by 5′ RACE, as listed in Jones-Rhoades et al. (2006). We also generated 10 different shuffled cohorts of these 22 miRNAs, preserving dinucleotide composition. Signal:noise ratios were calculated by comparing the total predictions for authentic miRNAs (signal) and the average for shuffled cohorts (noise). Analogously selected cohorts were also used to estimate specificity of target prediction for the newly identified miRNAs.

Hotspot identification

For candidate siRNAs, nonoverlapping 500-bp windows were ranked by scoring small RNA density as a sum of abundances of all sequences in the window, normalized by the sum of the total number of times each sequence hit the genome. Top-ranking windows were then used as seeds for extension in both directions until a 500-bp window lacking any siRNA hits was encountered.

siRNA duplex asymmetry determination

Conceptual duplexes with 2-nt 3′ overhangs were constructed by determining the reverse complement of the sequenced strand. The terminal three pairs (two nearest neighbors) on each end of the duplex were analyzed, comparing sums of the two −ΔG°37 nearest-neighbor parameters for RNA duplex stability (Xia et al. 1998). More complex algorithms were also implemented and yielded similar conclusions (Supplemental Material).

5′ RACE

5′ RACE was performed as described in Jones-Rhoades and Bartel (2004), except that RNA samples were obtained from whole siliques or seedlings, gene-specific primers (Supplemental Material) were designed to be 70-400 bases from the predicted cleavage site, and the first gene-specific amplifications for DCL1, PAP2, and MYB113 were done with the GeneRacer 5′ outer primer and were followed by two nested amplifications done with the GeneRacer 5′ nested primer.
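The five phasing scores listed under Phased siRNA discovery above are simple ratios once the small RNAs in a window are tabulated. The following minimal sketch assumes a tuple layout (offset within the window, length, reads, genomic hits); all function names and the toy window are hypothetical, not the published implementation.

```python
# Sketch of the five phasing parameters described in the Methods above.
# window_rnas: list of (offset, length, reads, genomic_hits); phase_offsets: offsets assigned to the candidate phase.

def phase_scores(window_rnas, phase_offsets):
    in_phase = [r for r in window_rnas if r[0] in phase_offsets]
    window_reads = sum(r[2] for r in window_rnas)              # assumed > 0 for this sketch

    unique_in_phase = len(in_phase)                                        # (1) unique sequences in phase
    reads_in_window = window_reads                                         # (2) reads in the window
    hits_norm = (sum(r[2] for r in in_phase) /
                 sum(r[3] for r in in_phase)) if in_phase else 0.0         # (3) phased reads / their genomic hits
    norm_21mer = sum(r[2] for r in in_phase if r[1] == 21) / window_reads  # (4) phased 21mer reads / window reads
    phasing = sum(r[2] for r in in_phase) / window_reads                   # (5) phased reads / window reads
    return unique_in_phase, reads_in_window, hits_norm, norm_21mer, phasing

# Toy window: three 21-nt RNAs in perfect 21-nt phase (offsets 0, 21, 42) plus one out-of-phase 24mer.
window = [(0, 21, 5, 1), (21, 21, 3, 1), (42, 21, 2, 2), (10, 24, 4, 1)]
print(phase_scores(window, {0, 21, 42}))
```

Empirically chosen cutoffs on these scores, as stated above, were what separated known tasiRNA clusters from false positives.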
Accession numbers

All genome-matched small RNA sequences generated in this study are accessible at http://www.ncbi.nlm.nih.gov/geo as Platform GPL3968; Samples GSM118372, GSM118373, GSM118374, and GSM118375; and Series GSE5228. All genomic loci of the small RNAs are listed in Supplementary Table 5; sequences that hit cDNA but not the genome are listed in Supplementary Table 3.
Parkin Regulates the Activity of Pyruvate Kinase M2*

Parkin, a ubiquitin E3 ligase, is mutated in most cases of autosomal recessive early onset Parkinson disease. It was discovered that Parkin is also mutated in glioblastoma and other human malignancies and that it inhibits tumor cell growth. Here, we identified pyruvate kinase M2 (PKM2) as a unique substrate for parkin through biochemical purification. We found that parkin interacts with PKM2 both in vitro and in vivo, and this interaction dramatically increases during glucose starvation. Ubiquitylation of PKM2 by parkin does not affect its stability but decreases its enzymatic activity. Parkin regulates the glycolysis pathway and affects the cell metabolism. Our studies revealed the novel important roles of parkin in tumor cell metabolism and provided new insight for therapy of Parkinson disease.

Parkin, a RING-HECT hybrid E3 ubiquitin ligase (1), is mutated in most cases of autosomal recessive early onset Parkinson disease, which is a neurodegenerative disease associated with loss of dopaminergic neurons in the midbrain (2,3). Parkin plays a central role in mitochondrial homeostasis and mitophagy by ubiquitylating and tagging depolarized or damaged mitochondria for clearance (4). Parkin translocates to depolarized or impaired mitochondria from the cytosol (5)(6)(7) and ubiquitylates a number of proteins within the mitochondrial outer membrane (8,9). Increasing evidence has shown that parkin functions as a tumor suppressor. Human parkin gene localizes at chromosome 6q25-q26 region, which is a common fragile site. It has been found that this region is often deleted in ovarian, lung, and breast cancers (10). Parkin knock-out mice had enhanced hepatocyte proliferation and developed macroscopic hepatic tumors with the characteristics of hepatocellular carcinoma (11). Recently, it was discovered that parkin is mutated in glioblastoma and other human malignancies (12). Cancer-specific mutations abrogate the growth-suppressive effects of the parkin protein. Parkin mutations in cancer decrease its E3 ligase activity, compromising its ability to ubiquitylate cyclin E and resulting in mitotic instability (12). Several studies indicate that parkin affects tumor cell metabolism (9,12,13). A proteomic study identified a number of enzymes in metabolism as candidate substrates for parkin (9). Parkin prevents the Warburg effect and promotes oxidative metabolism as a p53 target gene (13). However, the role of parkin in tumor cell growth inhibition remains obscure. Glycolysis is the essential metabolism pathway for cell growth and survival.
Compared with normal cells, tumor cells often have an increased rate of glycolysis and utilize much more glucose to keep the balance among the production of ATP, biosynthesis of building blocks, and reducing equivalents for rapid proliferation (14,15). The key step of glycolysis is catalyzed by pyruvate kinase to convert phosphoenolpyruvate to pyruvate. Pyruvate kinase M2 (PKM2) 2 is a less active isoform of pyruvate kinase and is important for tumor cell maintenance and growth (16 -21). Its enzymatic activity is allosterically regulated; the natural ligands and allosteric regulators of PKM2 include fructose 1,6-bisphosphate (22), serine (23), and phosphoribosylaminoimidazolesuccinocarboxamide (24). It was reported that PKM2 is phosphorylated (25)(26)(27) and acetylated (28,29), indicating that PKM2 can be regulated by post-translational modification. In this study, we identified parkin as a regulator of PKM2 through biochemical purification of protein complex. Parkin is a specific PKM2-interacting protein and catalyzes ubiquitin conjugation to PKM2 mainly on sites Lys-186 and Lys-206. Ubiquitylation of PKM2 decreases its enzymatic activity. In contrast, PKM2 enzymatic activity is enhanced after ablation of parkin, hence increasing the steady state metabolite levels of glycolysis in cells. This is the first direct evidence to support the concept that parkin suppresses tumor growth by inhibiting glycolysis through PKM2 ubiquitylation. vector. The full-length PKM2 was amplified by RT-PCR from human cells. The cDNA sequences corresponding to fulllength parkin and different fragments of parkin and those corresponding to full-length PKM2 were amplified by PCR and subcloned into pGEX (GST) vector for expression in bacteria. Antibodies used in Western blotting assay were ␤-actin (A15), FLAG M2 from Sigma; HA (3F10) from Roche Applied Science; PKM2 (3198), Parkin (2132) and COX IV from Cell Signaling; citrate synthase form Abcam; Myc (9E10) from Santa Cruz. Transfections with plasmid DNA or siRNA oligos were performed by Lipofectamine 2000 (Invitrogen) according to the manufacturer's protocol. The immunoprecipitation assay was performed with cell cytoplasmic extracts. Cytoplasmic extracts were prepared by resuspending cell pellet in hypotonic buffer (10 mM Tris-HCl, pH 7.9, 10 mM KCl, and 1.5 mM MgCl 2 supplemented with fresh proteinase inhibitor mixture and 0.2% CHAPS), Dounce homogenizing (20 strokes with a Type A pestle), and pelleting nuclei (1000 ϫ g for 10 min). The supernatant was kept as the cell cytoplasmic extracts. Cytoplasmic extracts were adjusted to a final concentration of 100 mM NaCl and 0.1% CHAPS. Cell cytoplasmic extracts were incubated with parkin-specific antibody or PKM2-specific antibody at 4°C overnight followed by Protein A/G beads for 4 h to analyze endogenous parkin or PKM2. After washing five times with BC100 buffer (20 mM Tris-HCl, pH 7.9, 100 mM NaCl, 10 mM KCl, 1.5 mM MgCl 2 , 20% glycerol, and 0.1% Triton X-100), the bound proteins were eluted by 1ϫ SDS loading buffer with heat to denature proteins. Alternatively, cell cytoplasmic extracts were incubated with FLAG-agarose beads (Sigma) or HA-agarose beads (Roche Applied Science) at 4°C overnight to analyze cells transfected with FLAG-tagged or HA-tagged plasmid. The beads were washed five times with BC100 buffer, and the bound proteins were eluted using FLAG peptide or HA peptide in BC100 buffer for 2 h at 4°C. 
Protein Complex Purification-Protein complex purification was performed as described previously (30,31) with some modifications. The cytoplasmic extracts of the FLAG-HA-parkin/ H1299 stable lines or FLAG-HA-PKM2/H1299stable lines were prepared as described above and subjected to a FLAG M2 and HA two-step immunoprecipitation. The tandem affinitypurified parkin or PKM2-associated proteins were analyzed by liquid chromatography (LC)-MS/MS. GST Pulldown Assay-GST or GST-tagged fusion proteins were purified as described previously (30,31). [ 35 S]Methioninelabeled proteins were prepared by in vitro translation using the TNT Coupled Reticulocyte Lysate System (Promega). GST or GST-tagged proteins were incubated with 35 S-labeled proteins at 4°C overnight in BC100 buffer ϩ 0.2% BSA and then incubated with GST resins (Novagen) for 4 h. The resins were washed five times with BC100 buffer. The bound proteins were eluted with 20 mM reduced glutathione (Sigma) in BC100 buffer for 2 h at 4°C and resolved by SDS-PAGE. The pulled down 35 S-labeled protein was detected by autoradiography. Ablation of parkin in MCF10A cells were performed by infection with shRNA lentivirus. Parkin-specific shRNA plasmids and control shRNA plasmid were received from Thermo Sciences (1, catalog number V2LHS_84518; 2, catalog number V2LHS_84520; 3, catalog number V3LHS_327550; and 4, catalog number V3LHS_327554). The lentivirus was packaged in 293T cells and infected cells as described in the manufacturer's protocol. Ablation of parkin in U87 cells and FLAG-HA-parkin/U87 stable line was performed by transfecting cells once with a pool of four siRNA duplex oligonucleotides against parkin 3Ј-UTR region (1, CCAACTATGCGTAAATCAA; 2, CCTTCTCTTAGGACAGTAA; 3, CCTTATGTTGACATG-GATT; 4, GCCCAAAGCTCACATAGAA). To purify ubiquitylated PKM2, first all His-ubiquitin-conjugated proteins including PKM2 were purified with Ni-NTA resin as described above and eluted with elution buffer (0.5 M imidazole in BC100 buffer). The eluants were dialyzed with BC100 buffer for 16 h at 4°C, exchanging the buffer for fresh buffer five times during that period. Then the eluants were incubated with the FLAG M2-agarose beads (Sigma) at 4°C overnight. After washing three times with BC500 buffer (20 mM Tris-HCl, pH 7.9, 500 mM NaCl, 10 mM KCl, 1.5 mM MgCl 2 , 20% glycerol, and 0.5% Triton X-100) and two times with BC100 buffer, the bound proteins were eluted with FLAG peptide (Sigma) in BC100 buffer for 2 h at 4°C. The ubiquitylated PKM2 proteins were dialyzed with BC100 buffer for 16 h at 4°C and used for pyruvate kinase activity and Western blotting assays. Another cell-based ubiquitylation assay was performed by FLAG M2 and HA tandem immunoprecipitation. 293 cells were transfected with FLAG-PKM2, myc-parkin, and HAubiquitin. After 24 h, the cells were lysed with BC500 buffer supplemented with proteinase inhibitor mixture, sonicated to shear chromatin, and subjected to FLAG M2 IP overnight at 4°C. The FLAG M2 beads were washed three times with BC500 buffer and eluted with FLAG peptide in BC500 buffer. The eluants were incubated with HA-agarose beads overnight at 4°C. After washing HA beads three times with BC500 buffer and once with BC100 buffer, the ubiquitylated PKM2 proteins were eluted with HA peptide in BC100 buffer. The proteins were subjected to Western blotting analysis with antibodies against FLAG and HA. 
Pyruvate Kinase Activity Assay-The pyruvate kinase activity assay was performed using a pyruvate kinase activity assay kit (BioVision, catalog number 709-100) according to the manufacturer's protocol. Cell extracts were prepared by lysing cells with 4 volumes of pyruvate assay buffer and spinning at 15,000 rpm for 15 min at 4°C to remove insoluble material. Cell extracts or purified proteins (PKM2 or ubiquitylated PKM2) were added into a 96-well flat bottom plate. The volume was adjusted to 50 μl/well with pyruvate assay buffer. Then 50 μl of reaction mixture (46 μl of pyruvate assay buffer, 2 μl of pyruvate probe, and 2 μl of enzyme mixture) per well were added and mixed well. The absorbance (A570) was scanned once per minute for 40 min at room temperature. At the same time, a standard curve of nmol/well versus A570 readings was plotted. Then the sample readings were applied to the standard curve to obtain the amount of pyruvate in the sample wells. The rate of pyruvate yield was normalized by the amount of total proteins in the lysate or the amount of pyruvate kinase.

Metabolite Analysis-Medium was removed from cells in 10-cm plates as completely as possible, and 80% methanol prechilled at −80°C was added and incubated at −80°C for 15 min. The cell lysate/methanol mixtures were transferred to tubes and centrifuged to remove the cell debris and proteins. The extracts were lyophilized and analyzed by mass spectrometry to identify the metabolism intermediate compounds. The parental cells were lysed with SDS, and total proteins were quantitated. The metabolite levels were obtained by LC-MS/MS, and values (given in arbitrary units) reflect the integrated peak area of an MS signal. Data were normalized by total protein content in the cells and are an average of three independent experiments. Error bars represent S.D. of the mean of triplicates. p values were determined by two-sample paired Student's t test.

Results

Identification of Parkin as a Unique Component of PKM2-associated Complexes-To elucidate the mechanisms of PKM2-mediated cell metabolism in vivo, we isolated PKM2-associated protein complex from human cells. We utilized an H1299 lung carcinoma cell line that stably expresses a double-tagged human PKM2 protein with N-terminal FLAG and HA epitopes (FLAG-HA-PKM2) (Fig. 1, A and B). To isolate PKM2-containing complexes, cell cytoplasmic extracts from the stable line were subjected to two-step affinity chromatography as described previously (30,31). The tandem affinity-purified PKM2-associated proteins were analyzed by LC-MS/MS. MS analysis revealed that two peptide sequences matched the parkin sequence in the database (Fig. 1, C and D). None of the peptide sequences of parkin were identified from the control protein complexes purified in parental H1299 cells. Parkin is likely a unique binding partner of PKM2. To further confirm that parkin interacts with PKM2, we also established an H1299 cell line stably expressing FLAG- and HA-double-tagged human parkin. We purified parkin-containing complexes from the cell cytoplasmic extracts. LC-MS/MS analysis identified two peptide sequences that matched PKM2 (Fig. 1E).

Parkin Is a Specific PKM2-interacting Protein-To investigate the relationship of parkin and PKM2 in vivo, we further examined the interaction between these two proteins. We first transiently transfected 293 cells with expression plasmids for FLAG-tagged PKM2 and myc-tagged parkin. Cell cytoplasmic extracts were subjected to immunoprecipitation with FLAG M2-agarose beads.
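The rate calculation in the pyruvate kinase activity assay described above amounts to converting A570 readings to nmol pyruvate via a linear standard curve, taking the slope over the 40-min scan, and normalizing by protein. The sketch below uses made-up readings; the function names and numbers are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the assay arithmetic described above (illustrative values only).

def fit_standard_curve(nmol, a570):
    """Least-squares slope/intercept for nmol vs A570 (assumes a linear standard curve)."""
    n = len(nmol)
    mean_x, mean_y = sum(a570) / n, sum(nmol) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(a570, nmol)) /
             sum((x - mean_x) ** 2 for x in a570))
    return slope, mean_y - slope * mean_x

def pyruvate_rate(times_min, sample_a570, curve, protein_mg):
    slope, intercept = curve
    nmol = [slope * a + intercept for a in sample_a570]            # A570 -> nmol pyruvate per well
    rate = (nmol[-1] - nmol[0]) / (times_min[-1] - times_min[0])   # nmol/min over the scan
    return rate / protein_mg                                       # nmol/min/mg protein

curve = fit_standard_curve(nmol=[0, 2, 4, 6, 8, 10], a570=[0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
readings = [0.05 + 0.004 * t for t in range(41)]                   # one reading per minute for 40 min
print(pyruvate_rate(list(range(41)), readings, curve, protein_mg=0.2))
```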
Western blotting analysis showed that parkin is clearly detected in PKM2-associated immunoprecipitates ( Fig. 2A). Then we transiently transfected H1299 cells with expression plasmid for FLAG-tagged parkin. Western blotting analysis revealed that endogenous PKM2 is easily detected in parkin-containing immunoprecipitates (Fig. 2B). To investigate the interaction between endogenous parkin and PKM2 proteins, cytoplasmic extracts from U87 (Fig. 2C) and IMR32 (Fig. 2D) cells were immunoprecipitated with PKM2-specific antibody and control IgG or parkin-specific antibody and control IgG. As expected, parkin is easily detected in the immunoprecipitates obtained with the PKM2 antibody but not the control IgG (Fig. 2, C and D). Vice versa, PKM2 is readily detected in the parkin antibody immunoprecipitates but not the control IgG (Fig. 2, C and D). These results confirmed that endogenous parkin and PKM2 interact in cells. To investigate the direct interaction between parkin and PKM2, we performed an in vitro GST pulldown assay. Purified recombinant GST-tagged parkin protein was incubated with in vitro translated [ 35 S]methionine-labeled PKM2. Following immobilization with GST resins and recovery of captured complexes using reduced glutathione, the eluted complexes were resolved by SDS-PAGE and analyzed by autoradiography. 35 S-Labeled PKM2 bound immobilized GST-tagged full-length parkin but not the control GST (Fig. 2E). Parkin was divided into different fragments according to its four domains: 1-100 fragment containing the ubiquitin-like domain, 100 -300 fragment containing RING 1 domain, and 300 -465 fragment containing the IBR and RING 2 domains. Purified full-length GST-PKM2 fusion proteins or GST alone was incubated with in vitro translated [ 35 S]methionine-labeled full-length parkin protein (Fig. 2F) or different parkin fragments (Fig. 2G). The immobilized complexes were resolved by SDS-PAGE and analyzed by autoradiography. GST-PKM2 interacts with 35 S-labeled fulllength parkin and 100 -465 fragment that contains RING 1-IBR-RING 2 domains (Fig. 2, F and G). To investigate the dynamic process of the interaction between parkin and PKM2 under the condition of glucose starvation, we subjected the FLAG-HA-parkin/H1299 stable line to glucose starvation for 0, 1, and 2 h, respectively. The cytoplasmic extracts were immunoprecipitated with FLAG M2-agarose beads. Western blotting analysis revealed that parkin binds much more PKM2 protein under glucose starvation (Fig. 2H). Notably, although the level of PKM2 and parkin does not change, PKM2 binding by parkin was significantly enhanced upon glucose starvation. These data demonstrated that the interaction between parkin and PKM2 is regulated by the level of glucose. Parkin Ubiquitylates PKM2-To investigate the relationship of parkin and PKM2, we examined whether parkin can ubiquitylate PKM2. 293 cells were transiently transfected with plasmids expressing His-ubiquitin (Ub), FLAG-PKM2, and parkin. Western blotting analysis showed that monoubiquitin-conjugated PKM2 bound Ni-NTA resin with wild type parkin but not with mutant parkin (12) (Fig. 3A). The structure of parkin was determined recently (33)(34)(35). We performed the same assay and found that an inactive mutant parkin whose active site cysteine was mutated to serine (C431S) lost its activity to PKM2 (33-35) (Fig. 4A), and the constitutively active parkin in which the N-terminal autoinhibitory domain was removed (parkin/ d79) (33-35) had stronger activity (Fig. 4B). 
We also transfected inducible parkin H1299 stable lines with HA-Ub plasmid and induced the stable lines to express a high level of parkin by treatment with doxycycline. Western blotting analysis revealed that monoubiquitin-conjugated PKM2 was easily detected in HA immunoprecipitates of cell extracts from the cells induced to express parkin (Fig. 3B). To confirm that parkin ubiquitylates PKM2, we performed a FLAG M2 and HA double immunoprecipitation assay. 293 cells were transiently transfected with plasmids expressing FLAG-PKM2, HA-Ub, and parkin. Cell extracts were subjected to a FLAG M2 and HA tandem immunoprecipitation under stringent condition. Western blotting analysis showed that monoubiquitin-conjugated PKM2 was readily detected by antibodies against FLAG and HA (Fig. 3C). To identify the modification sites of PKM2, we purified ubiquitylated PKM2 proteins by FLAG M2 and HA double IP as in Fig. 3C. The proteins were analyzed by LC-MS/MS, and Lys-62, Lys-66, Lys-186, Lys-206, Lys-305, and Lys-367 were identified as the candidate modified sites. We further mutated PKM2 on (12). The cell extracts and the Ni-NTA-agarose bead pulldown were assayed by Western blotting analysis. B, the inducible parkin/H1299 stable cell lines were transfected with plasmid DNA expressing HA-Ub and induced to overexpress parkin by treatment with doxycycline. The cell extracts and the elution of IP with HA-agarose beads were assayed by Western blotting analysis. C, 293 cells were transfected with plasmid DNA expressing FLAG-PKM2, HA-Ub, and/or parkin. The cell extracts were subjected to tandem IP with FLAG M2-and HA-agarose beads. The cell extracts and the elution of IP were assayed by Western blotting analysis. D, parkin ubiquitylates PKM2 mainly on Lys-186 and Lys-206. PKM2 mutants of the candidate modification sites were assayed in the same way as in C. E, PKM2 was ubiquitylated by the constitutively active parkin (33)(34)(35) in vitro. PKM2 and PKM2/K186R,K206R (labeled as PKM2/2KR), parkin, parkin deleted of the N-terminal 79 amino acids (labeled as parkin/d79), were purified with FLAG M2-agarose beads under stringent condition from the 293 cells transfected with plasmids. E1, E2, and phosphorylated Ser-65 (pS65) ubiquitin were purchased from Boston Biochemical. those sites to arginine and performed a ubiquitylation assay. The results showed that the major ubiquitylation sites on PKM2 are located at Lys-186 and Lys-206 (Fig. 3D). To further confirm that parkin ubiquitylates PKM2, we performed an in vitro ubiquitylation assay. We purified PKM2, parkin, and the constitutively active parkin (parkin/d79) from transfected 293 cells under stringent condition. The results showed that the constitutively active parkin can catalyze PKM2 ubiquitylation with phosphorylated Ser-65 ubiquitin in vitro (Fig. 3E). Ubiquitylated PKM2 Attenuates PKM2 Activity-To understand the role of PKM2 ubiquitylation, we examined whether ubiquitylation of PKM2 has an effect on its enzymatic activity. 293 cells were transiently transfected with plasmids expressing His-Ub, FLAG-PKM2, and parkin. All His-Ub-conjugated proteins from cell extracts were obtained by Ni-NTA pulldown, FIGURE 4. PKM2 ubiquitylated by parkin has reduced pyruvate kinase activity. A, 293 cells were transfected with plasmid DNA expressing His-Ub, FLAG-PKM2, and/or wild type myc-parkin or mutant myc-parkin/C431S (33)(34)(35). The cell extracts and the Ni-NTA-agarose bead pulldown were assayed by Western blotting analysis. 
B, 293 cells were transfected with plasmid DNA expressing His-Ub, FLAG-PKM2, and/or wild type FLAG-parkin or FLAG-parkin/d79 (33)(34)(35). The cell extracts and the Ni-NTA-agarose bead pulldown were assayed by Western blotting analysis. C, the activity of ubiquitylated PKM2 was reduced. The ubiquitylated PKM2 was purified by Ni-NTA-agarose bead pulldown, and then IP with FLAG M2-agarose beads under stringent condition from the 293 cells transfected with plasmids expressing FLAG-PKM2, His-Ub, and parkin was performed. The PKM2 activity was analyzed with a pyruvate kinase activity assay kit. The sample A 570 nm readings were applied to the standard curve to obtain the amount of pyruvate in the sample wells. Data represent the rate of pyruvate yield, which was normalized by the amount of the pyruvate kinase proteins. Error bars represent S.D. of the mean from triplicates. D, PKM2-Ub fusion protein to mimic monoubiquitylated PKM2 reduces PKM2 activity. PKM2 and PKM2-Ub fusion protein were purified by IP with FLAG M2-agarose beads from 293 cells transfected with FLAG-PKM2 or FLAG-PKM2-Ub plasmid. The rate of pyruvate yield in PKM2 is steeper than in PKM2-Ub; even the amount of PKM2-Ub protein is much more than the amount of PKM2, which were determined by Western blotting analysis. E, PKM2 mutant K186R,K206R (K186,206R) does not affect enzymatic activity. The rates of pyruvate yield in PKM2 wild type and mutant were the same under the same amount of proteins, which were determined by Western blotting analysis. F, PKM1 interacts with parkin. H1299 cells were transfected with plasmid DNA expressing FLAG-PKM1 and HA-parkin. The cell extracts and the elution of IP with FLAG M2-agarose beads or HA beads were assayed by Western blotting analysis. G, PKM1 was ubiquitylated by parkin. 293 cells were transfected with plasmid DNA expressing His-Ub, FLAG-PKM and myc-parkin. The cell extracts and the Ni-NTA-agarose bead pulldown were assayed by Western blotting analysis. H, the activity of ubiquitylated PKM1 was reduced. The ubiquitylated PKM1 was purified by Ni-NTA-agarose bead pulldown, and then IP with FLAG M2-agarose beads under stringent condition from the 293 cells transfected with plasmids expressing FLAG-PKM1, His-Ub, and parkin was performed. The PKM1 activity was analyzed with a pyruvate kinase activity assay kit. I, 293 cells were transfected with plasmid DNA expressing FLAG-PKM1 (F-PKM1), HA-Ub, and/or parkin. The cell extracts were subjected to tandem IP with FLAG M2-and HA-agarose beads. The cell extracts and the elution of IP were assayed by Western blotting analysis. and His-Ub-conjugated PKM2 was further purified by FLAG M2 immunoprecipitation. The pyruvate kinase activity of PKM2 and Ub-conjugated PKM2 was assessed, and although the amount of proteins was almost equal, the activity of ubiquitylated PKM2 was significantly lower than that of PKM2 (Fig. 4C). These data demonstrated that ubiquitin-modified PKM2 has decreased activity. To confirm this result, we purified PKM2 and PKM2-Ub fusion protein, which mimics the monoubiquitylated PKM2, and assessed their pyruvate kinase activity. The pyruvate kinase activity of PKM2-Ub was significantly reduced compared with wild-type PKM2 (Fig. 4D), although the pyruvate kinase activity of PKM2 K186R,K206R did not change compared with the wild-type PKM2 (Fig. 4E). Parkin Does Not Affect PKM2 Stability-As shown above, parkin catalyzes ubiquitin conjugation to PKM2, so we examined whether parkin degrades PKM2 in vivo. 
Parkin was knocked down in H460 cells by transfection with the pool of four parkin-specific siRNA oligos (Fig. 5A) or different parkin-specific siRNA oligos (Fig. 5B). Although endogenous parkin was severely reduced, the level of PKM2 was unaffected. In MCF10A cells, parkin was ablated by transfection with the pool of four parkin-specific siRNA oligos (Fig. 5C) or by infection with different shRNA lentiviruses (Fig. 5D). The same as in H460 cells, the level of PKM2 remained the same. We further overexpressed parkin in cells to examine the levels of PKM2. MCF10A cells were transfected with an increasing amount of wild-type parkin or mutant parkin. Western blotting analysis showed that the level of endogenous PKM2 was stable regardless of the level of wild-type or mutant parkin overexpression (Fig. 5E). In an inducible parkin/H1299 cell lines, parkin was induced for overexpression, and the endogenous PKM2 remained stable (Fig. 5F). H1299 cells were transfected with FLAG-PKM2 and an increasing amount of myc-parkin. Western blotting analysis showed no change of the level of FLAG-PKM2 no matter how much parkin was expressed (Fig. 5G). H1299 cells also were transfected with FLAG-PKM2 and an activity. E, MCF10A cells were transfected with plasmid expressing wild type myc-parkin or E344G mutant myc-parkin. F, the inducible parkin/H1299 stable cell lines were induced to overexpress parkin. A-F, the pyruvate kinase activity in the cell extracts was analyzed by pyruvate kinase activity assay kit. The absorbance A 570 nm was scanned once per minute for 40 min at room temperature. Then the sample readings were applied to the standard curve to obtain the amount of pyruvate. The rate of pyruvate yield was normalized by the amount of total proteins in the lysate. Data are an average of three independent experiments. Error bars represent S.D. of the mean from triplicates. G and H, parkin does not degrade PKM2. G, H1299 cells were transfected with plasmids expressing FLAG (F)-PKM2 and different amounts of myc-parkin. The cell extracts were assayed by Western blotting analysis. H, H1299 cells were transfected with plasmids expressing FLAG (F)-PKM2 and different amounts of FLAG-parkin and FLAG-parkin/d79. The cell extracts were assayed by Western blotting analysis. increasing amount of FLAG-parkin or constitutively active FLAG-parkin (parkin/d79). The results showed that the level of FLAG-PKM2 did not change no matter how much parkin was expressed (Fig. 5H). These data demonstrated that parkin does not regulate PKM2 stability. Parkin Interacts with PKM1 and Ubiquitylates PKM1-To investigate the relationship of parkin and PKM1, we examined the interaction between these two proteins. 293 cells were transiently transfected with expression plasmids for FLAG-tagged PKM1 and HA-tagged parkin. Cell cytoplasmic extracts were subjected to immunoprecipitation with FLAG M2-agarose beads or HA-agarose beads. Western blotting analysis showed that parkin clearly associates with PKM1 (Fig. 4F). To confirm that parkin ubiquitylates PKM1, we performed a FLAG M2 and HA double immunoprecipitation assay. 293 cells were transiently transfected with plasmids expressing FLAG-PKM1, HA-Ub, and parkin. Cell extracts were subjected to a FLAG M2 and HA tandem immunoprecipitation under stringent condition. Western blotting analysis showed that ubiquitin-conjugated PKM1 was readily detected by antibodies against FLAG and HA (Fig. 4I). 
To understand the role of PKM1 ubiquitylation, we examined whether ubiquitylation of PKM1 has an effect on its enzymatic activity. 293 cells were transiently transfected with plasmids expressing His-Ub, FLAG-PKM1, and parkin. All His-Ub-conjugated proteins from cell extracts were obtained by Ni-NTA pulldown. His-Ub-conjugated PKM1 was further purified by FLAG M2 immunoprecipitation. The pyruvate kinase activity of PKM1 and Ub-conjugated PKM1 was assessed. The activity of ubiquitylated PKM1 was significantly lower than that of PKM1 (Fig. 4H). These data demonstrated that ubiquitin-modified PKM1 has decreased activity. Parkin Does Affect Pyruvate Kinase Activity of Cells-To further elucidate the effect of parkin on pyruvate kinase activity under physiological condition, we performed the pyruvate kinase activity assay in cells after inactivation of endogenous parkin. Parkin was knocked down in H460 cells by transfection with the pool of four parkin-specific siRNA oligos (Fig. 5A) or different parkin-specific siRNA oligos (Fig. 5B). Although endogenous parkin was severely reduced and the level of PKM2 was unaffected, the pyruvate kinase activity of cell extracts was strikingly increased (Fig. 5, A and B). In MCF10A cells, parkin was ablated by transfection with the pool of four parkin-specific siRNA oligos (Fig. 5C) or by infection with different shRNA lentiviruses (Fig. 5D). The same as in H460 cells, the level of PKM2 was unchanged, and the pyruvate kinase activity of cell extracts was strikingly increased (Fig. 5, C and D). To further confirm the effect of parkin on pyruvate kinase activity, we performed the pyruvate kinase activity assay in parkin-overexpressing cells. MCF10A cells were transfected with increasing amounts of wild-type parkin and mutant parkin. Although Western blotting analysis showed that the level of endogenous PKM2 was unchanged, no matter how much wild-type or mutant parkin was overexpressed, the pyruvate kinase activity of cell extracts decreased with more wild-type parkin and increased with more mutant parkin (Fig. 5E). The inducible parkin/H1299 cells were treated with doxycycline to induce expression of a high level of parkin. The endogenous PKM2 remained stable, and the pyruvate kinase activity of cell extracts decreased with different amounts of overexpressed parkin (Fig. 5F). These data demonstrated that parkin regulates pyruvate kinase activity of cells. Inactivation of Parkin Influences Glycolysis-To investigate the physiological function of regulation of PKM2 by parkin, we examined whether inactivation of endogenous parkin has an effect on the cell metabolic pathway. U87 cells or FLAG-HAparkin/U87 stable cell lines were transfected once with the pool of four siRNA oligos specific for parkin 3Ј-UTR region or control siRNA oligos. The cells were cultured in DMEM for 24 h. The metabolism intermediate products were extracted with 80% cold methanol and analyzed by mass spectrometry. The mass spectrometric peak strength of each compound was normalized to the amount of total proteins. As shown in Fig. 6, the steady state metabolite levels of glycolysis increase after ablation of parkin and are rescued by additional FLAG-HA-parkin. Cells in the absence of endogenous parkin have higher levels of glucose 6-phosphate, fructose 6-phosphate, 3-phosphoglycerate, phosphoenolpyruvate, and pyruvate than the parental cells. However, the levels of fructose 1,6-bisphosphate, glyceraldehyde 3-phosphate, and 1,3-bisphosphoglycerate were not changed (Fig. 6). 
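The metabolite comparison just described (LC-MS/MS peak areas normalized by total protein, averaged over three independent experiments, and compared by a two-sample paired Student's t test) can be sketched as follows. The numbers are made up, and the availability of scipy is assumed; this is not the authors' analysis script.

```python
from statistics import mean, stdev
from scipy.stats import ttest_rel   # two-sample paired Student's t test

# Sketch of normalization by total protein and the paired t test described above (made-up numbers).
def normalized(peak_areas, protein_mg):
    return [a / p for a, p in zip(peak_areas, protein_mg)]

ctrl = normalized([1.20e6, 1.05e6, 1.32e6], [0.51, 0.48, 0.55])   # control siRNA, 3 experiments
kd   = normalized([2.40e6, 2.10e6, 2.75e6], [0.50, 0.47, 0.56])   # parkin knockdown, 3 experiments

print("control: %.2e +/- %.2e" % (mean(ctrl), stdev(ctrl)))
print("knockdown: %.2e +/- %.2e" % (mean(kd), stdev(kd)))
print("paired t test p = %.3g" % ttest_rel(ctrl, kd).pvalue)
```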
These metabolite analysis results revealed that the cells metabolic steady state levels of glycolysis are higher in the absence of endogenous parkin than in the parental cells and demonstrated that parkin plays an important role in cellular metabolism through regulating pyruvate kinase activity. Discussion Parkin plays a central role in mitochondrial homeostasis and mitophagy (4). Several studies suggest that mitophagy is tumorpromoting and is required to maintain a healthy pool of mitochondria upon which tumor cells depend for growth (36,37). However, these studies inhibited autophagy generically not just mitophagy and did not examine other aspects such as defects in turnover of endoplasmic reticulum, peroxisomes, or protein aggregates. Therefore, it is not likely that parkin functions in mitophagy to suppress tumors. Evidence has shown that defective metabolism in mitochondria instead of dysfunctional mitochondria contributes to tumor formation (38). It has been identified in several human cancers that key Krebs cycle enzymes are mutated and that metabolism in mitochondria is inherently defective (39). Studies have shown that altered expression of phosphoglycerate dehydrogenase, phosphoglycerate mutase 1, and pyruvate kinase M2 reduces the rate of glycolysis and increases biosynthetic pathways (40 -44). Our results showing that parkin regulates cell metabolism through ubiquitylating PKM2 and reduces its enzymatic activity provide new evidence demonstrating the tumor suppression mechanism of parkin. It has been shown that PKM2 is important for tumor cell survival and growth (19 -24). PKM2 is necessary for aerobic glycolysis, which provides a selective growth advantage for tumor cells in vivo (19). PKM2 is also important for tumor cells to withstand oxidative stress and to control intracellular reactive oxygen species concentration, which are critical for tumor cell survival (23). PKM2 is functionally regulated by various post-translational modifications. Anaplastic lymphoma kinase phosphorylates PKM2 at Tyr-105, decreases PKM2 enzymatic activity, and induces cells to shift to aerobic glycolysis (25). PKM2 is also the substrate of protein-tyrosine phosphatase 1B: inhibition of PTP1B increased PKM2 Tyr-105 phosphorylation and decreased PKM2 activity. Importantly, decreased PKM2 Tyr-105 phosphorylation correlates with the development of glucose intolerance and insulin resistance in rodents, non-human primates, and humans (26). ERK1/2 also regulates PKM2 by phosphorylation at Ser-37 and converts PKM2 from a tetramer to a monomer to translocate into the nucleus. Nuclear PKM2 acts as a coactivator of ␤-catenin to induce c-Myc expression, resulting in the up-regulation of GLUT1, lactate dehydrogenase A, and in a positive feedback loop polypyrimidine tract-binding protein-dependent PKM2 expression and promoting the Warburg effect (27). PKM2 is also regulated by acetylation. Lys-305 acetylation under stimulation of high glucose concentration targets PKM2 for degradation through chaperone-mediated autophagy and promotes tumor growth (28). Mitogenic and oncogenic stimulation of Lys-433 acetylation, which interferes with fructose 1,6-bisphosphate binding to prevent allosteric activation, promotes PKM2 protein kinase activity and nuclear localization (29). 
Our findings on parkin ubiquitylating PKM2 at Lys-186 and Lys-206 and inhibiting PKM2 enzymatic activity revealed another post-translational modification event to regulate PKM2 functions and provided a new way to modulate PKM2 activity for certain cancer and Parkinson disease therapy. Many proteins can be regulated by monoubiquitylation including histones (45), DNA replication and repair proteins (46,47), endocytosed receptors and their regulators (48,49), and p53 (50,51). It has been demonstrated that monoubiquitylation works as a signal, which is recognized by ubiquitin binding domains of other proteins. Monoubiquitylation plays important roles in many biological processes such as the regulation of gene transcription, protein trafficking, and DNA repair. Many proteins can be regulated by both monoubiquitylation and polyubiquitylation with different functions. In the case of p53, low levels of Mdm2 induce p53 monoubiquitylation for p53 export from the nucleus to the cytoplasm, whereas high levels of Mdm2 polyubiquitylate p53 for proteasome degradation (50,51). Parkin, by contrast, only induces PKM2 monoubiquitylation and decreases PKM2 activity. Inactivation of parkin increases pyruvate kinase activity followed by an increase of the steady state metabolite levels of glycolysis (Fig. 6). These FIGURE 6. The steady state metabolite levels of glycolysis increase after ablation of parkin. A, U87 cells or FLAG-HA (FH)-parkin/U87 stable cell lines were transfected with either control siRNA or parkin-specific siRNA oligos. The cell extracts were assayed by Western blotting analysis. B and C, U87 cells or FLAG-HA-parkin/U87 stable cell lines were transfected with either control siRNA or parkin-specific siRNA oligos and cultured with fresh medium for 24 h. The cell metabolism intermediates were extracted with cold 80% methanol. The parental cells were lysed with SDS, and the total proteins were quantitated. Column 1 represents U87 cells transfected with control siRNA, column 2 represents U87 cells knocked down for endogenous parkin, and column 3 represents FLAG-HA-parkin/U87 stable cell lines knocked down for endogenous parkin. The metabolite levels were obtained by LC-MS/MS, and values depicted in arbitrary units that reflects the integrated peak area of an MS signal. Data were normalized by total protein content in the cells and are an average of three independent experiments. Error bar represent S.D. of the mean from triplicates. p values were determined by two-sample paired Student's t test. n.s., not significant. results demonstrated that parkin plays important roles in cellular metabolism, and knockdown of endogenous parkin will induce in cells a metabolic shift toward aerobic glycolysis. Pyruvate kinase M gene encodes two isoenzymes through mRNA differential splicing (19,20). Pyruvate kinase M1 is very similar to PKM2 (only one exon, a fragment of about 20 amino acids, is different), but PKM1 is expressed in adult tissues, and PKM2 is expressed exclusively in tumors and embryos. Our results reveal that parkin regulates cell metabolism through ubiquitylating PKM2 and PKM1 and reducing their enzymatic activity. We can expect that parkin ubiquitylates PKM1 in brain, affects neuron cell metabolism, and may be the mechanism, at least in part, by which mutation of parkin causes Parkinson disease. Further work will pave a new way for therapy of Parkinson disease in the future.
AGRIBUSINESS CHALLENGES TO EFFECTIVENESS OF CONTRACT FARMING IN COMMERCIALIZATION OF SMALL-SCALE VEGETABLE FARMERS IN EASTERN CAPE , SOUTH AFRICA The study investigated the key factors that influence small-scale vegetable farmers’ participation in contract farming arrangements. A sample of 70 small-scale vegetable farmers and 15 key informants of agribusiness firms involved in contract farming production of vegetables were selected in Amathole and Sarah Baartman (formerly Cacadu) district municipalities in the Eastern Cape province of South Africa. Focus group discussions and in-depth interviews were chosen as data collection tools to identify the factors that influence small-scale vegetable farmers’ participation in contract farming arrangements with agribusiness firms. The data was analyzed using open multi-stage coding with an inductive framework approach. Atlas.ti was used to sort and organize data. The findings indicated availability of farm assets, hydrological conditions, farming skills and distance of producer to the markets as key determinants of contract farming participation. The use of unmarketable cultivars, inappropriate agricultural practices and inconsistent supply in quality and quantity of vegetables were found to be bottlenecks to contract participation. The study recommends a more meaningful state support and incentives for agribusiness firms; otherwise, growth of small-scale farmers in contract farming is unlikely because of the financial implications for private sector companies. INTRODUCTION Agriculture continues to be a strategic sector in the development of most low-income nations where smallscale agriculture is the dominant livelihood activity (Gani and Hossain, 2015).Currently, governments in developing countries consider the intensification of production and commercialization of small-scale farmers as a focal point in the development of subsistence agriculture and rural development (Gani and Hossain, 2015).However, the commercialization of small-scale farmers' production and enhancing their integration into lucrative markets and more inclusive value chains remain a challenge for the majority of governments in Sub-Saharan Africa (Ha et al., 2015).Commercialization of small-scale agriculture has been a difficult task owing to inappropriate policies, insufficient access to technology, poor infrastructure and institutional obstacles (Mor and Sharma, 2012). 
Agribusiness value chains have been increasingly promoted by governments in Sub-Saharan Africa as one of the development strategies for enhancing growth in the agricultural sector and reduce rural poverty (Ha et al., 2015).Linking small-scale farmers into agribusiness value chains through contract farming is one of the rural development strategies being promoted to address the challenges faced by small-scale farmers (Ha et al., 2015).The topic has given rise to a body of literature analyzing various aspects of the phenomenon.However, the factors enhancing the participation of small-scale farmers into agribusiness value chains through contractual arrangements remain open to debate (Sikwela and Mushunje, 2013;Jordaan, 2012).The dynamics influencing the use of contract farming arrangements in the transition to commercial farming by small-scale farmers have not been thoroughly explored (Mmbengwa et al., 2012;Sikwela and Mushunje, 2013).The majority of past studies that sought to understand the factors influencing the participation of small-scale farmers into commercial lucrative markets were carried out in response to mixed results of structural adjustment programs that were meant to provide market-driven opportunities for rapid economic growth and development, though they failed to do so (Sikwela and Mushunje, 2013). Despite the efforts and substantial investments made and the various policies instigated to fast-track the linkages of small-scale farmers into high-value markets, the success stories of previously disadvantaged farmers successfully operating in commercial agribusiness chains are rare (Jordaan, 2012;Mmbengwa et al., 2012;Ortmann and King, 2010).The insufficient number of success stories of small-scale farmers successfully operating in commercial agribusiness chains shows that the objectives to allow small-scale farmers to improve their livelihoods through participation in commercial agribusiness chains have not yet been met (Baloyi, 2010;Jordaan, 2012).This study aims to fill the research gap by providing empirical information on Amathole and Sarah Baartmen districts' small scale vegetable farmers' involvement in contract farming arrangements and determine the key factors that influence the participation of small-scale vegetable farmers in contract farming arrangements. LITERATURE REVIEW Several studies have been conducted on the bottlenecks and constraints faced by small-scale farmers in their attempts to raise income by participating in agribusiness supply chains through contractual arrangements.The aim of the studies was to broaden the knowledge base on the obstacles that limit the farmers from successfully navigating the transition into commercial farming (Sikwela and Mushunje, 2013).The studies have recognized the need to integrate small-scale farmers into high-value chains (Sikwela and Mushunje, 2013).However, despite the valuable knowledge generated by these studies, there is still a remarkable scarcity of scientific information on the factors that affect the participation of smallscale farmers in high-value chains through contractual arrangements (Sikwela and Mushunje, 2013).The linkage of small-scale farmers to contractual arrangements is seen as a business solution to poverty if the mechanisms of redistribution work (Kirsten and Sartorius, 2006;Koranteng, 2010;Little and Watts, 1994;Minot, 2011;Oya, 2012). 
As regards South Africa, linkages of small-scale farmers to contract farming with agribusiness firms can provide some answers to the collapse in support services, which has occurred in most African countries following structural adjustments (Koranteng, 2010;Little and Watts, 1994;Minot, 2011).Jointly, agriculture and agribusiness firms are Africa's largest economic sectors.They are among the fastest-growing sectors in Africa since the mid-1990s (World Bank, 2011).Van Schalkwyk et al. (2009;2012) argue that South African agribusiness firms operate on commercially sustainable premises.These companies are well-positioned and have the necessary experience and knowledge to provide the proper support services crucial to the development of small-scale farmers.However, engaging with small-scale farmers comes at a cost to agribusiness firms (Baloyi, 2010;Louw et al., 2007;Van der Meer, 2006).Hence many agribusiness firms tend to procure their commodity or raw materials from more established large-scale farmers who, in most instances, also export their produce (Baloyi, 2010;Louw et al., 2007).This is done to ensure that the product procured meets the local as well as the international standards and to maintain low transaction costs (Baloyi, 2010;Louw et al., 2007).This trend leaves small-scale farmers marginalized and further excluded from profitable niche markets (Baloyi, 2010;Louw et al., 2007). Owing to past South African government policies and the lack of appropriate support structures for the beneficiaries of redistributed and restituted land by the new democratic government, small-scale farmers in South Africa do not have adequate infrastructure, innovative technology or capital that is required to meet the demands of agri-processing firms, supermarket chains and the retail sector (Ayinke, 2011;Baloyi, 2010).There is a quantum difference in both the quality and quantity of commodities produced by small-scale farmers compared to commercial farmers who meet the requirements of the agri-processing firms, food retailers and supermarket chains (Ayinke, 2011;Baloyi, 2010;Groenewald, 2004;Gulati et al., 2007;Koranteng, 2010).These factors hold critical ramifications not only in terms of barriers to commercialization, but also in terms of the relationship between agribusiness and small-scale vegetable farmers (Ayinke, 2011;Baloyi, 2010). Theoretical framework This study is guided by the Social Exchange theory of sociologist George Homans (1950Homans ( , 1958Homans ( , 1961Homans ( , 2017)).According to Homans (1950Homans ( , 1958Homans ( , 1961Homans ( , 2017)), human relationships are shaped by a subjective cost-benefit analysis and a comparison of alternatives.Thibaut and Kelley (1959) argue that in order for a dyadic relationship to be viable, it must provide rewards and/or economies in costs which compare with those in other competing relationships.Thibaut and Kelley (1959) further explain that people choose relationships around them that provide the most rewards or require the least costs.Blau (1968) and (Levinger, 1979) greatly expand on the importance of the Social Exchange theory arguing that individuals enter into and maintain a relationship as long as they can satisfy their interest and at the same time ensure that the benefits outweigh the costs.According to Rwelamina (2015), cited in Rugema et al. 
(2018), people participate in collective action owing to their expectations such as access to services and maximization of self-interest and benefits.Therefore, the Social Exchange theory (SE) was seen appropriate for exploring the key determining factors of small-scale vegetable farmers to participate in contract farming arrangements with agribusiness firms. Conceptual framework Contract farming definitions are as many as the number of authors on the subject (Duma, 2007).However, despite the multiple definitions and various terms by which contract farming is known, what is referred to remains, in essence, farmers growing crops for contractors with the assistance of the contractors (Duma, 2007).The factors determining inclusion or exclusion and their impact on the broader dynamic of contract farming remain vague and inadequately addressed by earlier research.In South Africa, small-scale farmers practice subsistence farming, oriented by insufficient uses of production inputs, lack of irrigation infrastructure and being highly dependent on rain-fed conditions, which result in difficulties in increasing productivity levels.Contract farming arrangements have been reported by a number of researchers as a potential strategy to address the barriers that small-scale farmers face when migrating into commercial agricultural markets while increasing their productivity (Duma, 2007). This study deflates this viewpoint and rekindles the debate through a critical exploration of the factors impacting the relationship between agribusiness and small-scale vegetable farmers under contract farming.This relationship -and how it is impacted by the government -forms the conceptual framework of this study.As shown in Figure 1, small-scale farmers are caught at the core of the problem, affecting their access to credit, lucrative markets and support services.Despite the current government's pro-smallholder farmer policies aimed at enhancing farmer production capabilities, small-scale farmers remain largely excluded from commercialization due to factors inhibiting their participation in the contract farming system. METHODOLOGY Description of survey area The study was conducted in two districts, namely: Amathole district and Sarah Baartman (formerly Cacadu district municipality) purposively selected in the Eastern Cape Province.The Eastern Cape is one of South Africa's nine provinces (Altman et al., 2009).The province is located on the east coast of South Africa between the Western Cape and KwaZulu-Natal provinces (Altman et al., 2009).Amathole district municipality occupies the central portion of the Eastern Cape and Sarah Baartman district municipality shares border with the Western Cape and Northern Cape (Altman et al., 2009).The districts are largely rural, with low urbanization rates as well as limited budgetary capacity and municipal staff.Amongst these obstacles are high poverty rates resulting from high unemployment, low incomes and lack of basic skills that are required for local economic development (Altman et al., 2009).Many people in the district municipalities rely on agriculture, gifts, state pension and labor remittances for household survival.Contract farming has been adopted in the municipalities to grow various crops. 
Data collection The methodology employed in this study was a qualitative research approach. The rationale behind the use of a qualitative methodology is the descriptive nature of this approach. The use of a qualitative approach enabled the researcher to reach an in-depth analysis of the most important factors that influence small-scale vegetable farmers' participation in contract farming arrangements. In contrast to the quantitative approach, which focuses on statistics and figures, the qualitative approach focuses on the words of the respondents and the themes emerging from their narratives. Two qualitative research techniques were applied to gather primary data, namely focus group discussions and individual in-depth interviews. The combination of the two methods allowed gaining substantive insights into the studied phenomenon.

Since an accurate list of the population needed for this study was not available, it was not reasonable or possible to develop a sampling frame from which a probability sample could be obtained. According to Statistics… (2011), South Africa lacks comprehensive statistical information on small-scale farmers and subsistence agriculture. Most researchers commonly rely on survey estimates derived from specific studies carried out across the country by various agricultural departments, universities and a handful of non-governmental organizations (NGOs) (Moyo, 2011; Raleting and Obi, 2015). There are no comprehensive statistics on the number of subsistence agricultural producers in South Africa (Moyo, 2011; Raleting and Obi, 2015). Thus, the total number of subsistence farmers in the survey area could not be obtained from Statistics South Africa or the Department of Agriculture. After carefully considering the weaknesses and strengths of the various types of non-probability sampling methods, it was decided to make use of the purposive (judgment) sampling procedure. Purposive sampling can be very useful when one needs to reach a targeted sample quickly and where sampling for proportionality is not the primary concern. About 70 small-scale vegetable farmers¹ and 15 key informants of agribusiness firms² that met the criteria for inclusion in the study were purposively selected to provide a holistic view of contract farming.
The central topic for the small-scale farmers' focus group discussions and the individual in-depth interviews with the key informants was to gain insight into the key requirements for contractual relationships and out-grower schemes between small-scale farmers and agribusiness firms. This allowed the researcher to gain a deep insight into the conditions required by agribusiness firms to engage small-scale vegetable farmers in contractual arrangements. These discussions sought to clarify the factors that affect the participation of small-scale farmers in contract farming arrangements. It was easy to collect data from the farmers and the key informants because of their daily interaction with agribusiness firms. The participants were all African, mostly Xhosa. Therefore, all the focus group discussions took place in isiXhosa, the language spoken by the participants. All sessions were audio-taped, translated and transcribed into English. Although there were seven focus groups, data saturation was reached after the sixth group. Therefore, the researcher used the point of data saturation in deciding on the final number of focus groups needed to collect sufficient data: the point at which no new information or themes relevant to the study emerge from each subsequent interview.

¹ Small-scale farmers were involved in primary production with a farm size between 3 ha and 12 ha, as per the definition by Agri-bank.
² Agribusiness firms were involved in the processing, value adding, marketing and sales of agricultural commodities and had been operating for more than five years.

Data analysis The study employed open multi-stage coding with an inductive framework approach to analyze the data collected. Coding allowed topics to emerge from the data by conceptualizing the data, breaking it down into discrete units and organizing these into categories or codes named to represent the specific phenomenon. According to Richards (2005) and Strauss and Corbin (1990), this process makes themes emerge from the data, leading to the development of theories. Codes relevant to the research question were created, themes were then established and the data was systematically examined to see the ways in which themes were portrayed. The analysis proceeded by organizing the data into manageable themes, patterns, trends and relationships (Mouton, 2001). According to Mouton (2001), this usually results in the identification of recurring patterns that cut through the data. The inductive analysis allowed categories and patterns to emerge from the data, leading to sets of smaller and similar data units that are more workable. A comparative method was used to compare one unit of data with another, looking for recurring regularities and patterns in the data in order to assign the data into categories. The developed data categories were further sub-divided to determine links between categories and to form hypotheses that lead to the development of theories. A computer-based qualitative data analysis program, Atlas.ti, was used to sort and organize the data.
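As an illustration of the mechanics behind this multi-stage coding, the following sketch tallies coded interview segments into higher-level categories and counts how often each category recurs across focus groups. It is only a toy example: the code labels, category groupings and segments are hypothetical, not the study's code frame, and the real analysis was performed inductively in Atlas.ti rather than with a fixed code list.

```python
# Toy illustration of grouping open codes into categories and counting recurrences.
# Codes, categories, and segments are hypothetical; they are not the study's code frame.
from collections import Counter, defaultdict

# Each coded segment: (focus_group_id, open_code)
coded_segments = [
    (1, "access to irrigation"), (1, "farm size"), (2, "distance to market"),
    (2, "access to irrigation"), (3, "farming skills"), (3, "farm size"),
    (4, "distance to market"), (5, "use of domestic pesticides"),
    (6, "inconsistent supply"), (6, "farming skills"),
]

# Assumed mapping from open codes to broader categories (a later coding stage).
code_to_category = {
    "access to irrigation": "hydrological conditions",
    "farm size": "farm assets",
    "farming skills": "human capital",
    "distance to market": "market access",
    "use of domestic pesticides": "agricultural practices",
    "inconsistent supply": "supply reliability",
}

# Count, for each category, how many distinct focus groups mentioned it.
groups_per_category = defaultdict(set)
for group_id, code in coded_segments:
    groups_per_category[code_to_category[code]].add(group_id)

recurrence = Counter({cat: len(groups) for cat, groups in groups_per_category.items()})
for category, n_groups in recurrence.most_common():
    print(f"{category}: mentioned in {n_groups} focus group(s)")
```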
Participation conditions Findings from the focus group discussions and key informant interviews revealed serious reservations by agribusiness firms in engaging in contract farming programs with producers who do not have a collateral in the form of ownership of their land or who have small farms.Land is one of the most important agricultural resources, playing a fundamental role in agricultural productivity and high-value market participation by a farmer.This resource, complemented by other ones, is widely acknowledged by a number of researchers as a crucial determinant of the income-earning potential (Baloyi, 2010;Moyo, 2010).Large farms were mostly preferred by agribusiness firms when selecting vegetable contract farmers.Having a small farm increased the likelihood of a farmer being excluded from the contract farming arrangements.The respondents felt that access to a significant area of land is crucial for a farmer to be able to rotate crops and rest the soil.Resting the soil and rotating crops is difficult for farmers cultivating on small tracts of land.None of the respondents commented on the preferred size of land in hectares. The study also revealed that access to irrigation systems played a vital role in promoting participation of small-scale vegetable farmers in contract farming arrangements.With access to irrigation systems, a farmer will be able to deliver the required quality and quantity of produce.The interviewed key agribusiness firms' informants mentioned that they assist contracted farmers with supply of water and irrigation systems by drilling boreholes and installing irrigation systems in cases were the assets are lacking and the farmer owns the land.This is done to ensure that the farmer has sufficient water supply available to produce the required quantities and quality of vegetables.When farmers do not have ownership of the land they occupy and produce from, investing in the land is problematic and risky.The availability of irrigation systems and quality irrigation water is a principal resource in agriculture, particularly in horticulture production (Baloyi, 2010;Dawe et al., 2015).Dawe et al. (2015) links the success stories of the Philippines and Indonesia in the production of rice to adequate access to high-yielding varieties and fertilizers as well as access to irrigation systems. Younger age, literacy and related skills positively influenced participation in contract farming between small-scale vegetable farmers and agribusiness firms.The results indicated that agribusiness firms preferred to contract younger small-scale farmers that have attained higher levels of education rather than contracting with old farmers with low levels of education.Similar results were found by Duma (2007) with household head age and education levels being found to be significant predictors in contract farming programs.Baloyi (2010) and Jari (2009) argue that old age might pose a threat to the sustainability of partnership, particularly farming.This is owing to old age being the time at which most people lack energy, are physically weak and have poor health, which renders them unsuitable as potential farmers for contractual arrangements or obtaining loans from potential agribusiness firms, especially when no collateral is available.According to Jayne et al. (2010), education levels determine human capital levels of household and the ability to interpret agricultural information and make informed decisions. 
The distance between the farm and the market was found to be amongst the most critical factors to agribusinesses in terms of engaging farmers in contract farming.Goetz (2015) found the distance from farm to market to be a critical factor in market participation when studying the selective model of household food marketing behavior in Sub-Saharan Africa.Similar results were found by Bahta and Bauer (2012) in their study of the determinants of market participation of small-scale livestock farmers in South Africa and by Omiti et al. (2009) when studying the factors influencing the intensity of market participation by small-scale farmers in the rural peri-urban areas of Kenya.Short distances from markets were mostly preferred by the agribusiness firms.According to the majority of respondents, market transaction costs are usually lower when the producer is located close to markets.Respondents indicated that the preferred distance between the agribusiness firms and the contract farmers varies in the range of 20 km to 35 km.Any additional distances from the agribusiness firms' premises or their distribution centers or inability of the farmer to deliver agricultural commodities to the agribusinesses premises inhibits the chances of a farmer participating in contract farming programs. Experience in agriculture, coupled with farming skills, was indicated by the respondents as another key determinant used to select vegetable contract farmers.Van Schalkwyk et al. (2012) argue that techniques of farming demand that the farmer possesses some degree of experience and skills in farming.The authors further argue that the lower the number of years in farming, the higher the probability that the farmer will be technically constrained.Raphela (2014) explains farming experience as thorough knowledge and understanding of the dynamics of the agricultural sector. Participation constraints The respondents indicated that, owing to lack of training and because small-scale farmers are risk-averse, they usually follow inappropriate agricultural practices in their production.Lack of good agricultural practices was stated to be one of the factors that deter agribusiness firms from obtaining their commodities from smallscale vegetable farmers.The Agricultural Products Standards Act (No. 119) of 1990 provides control over the sale of agricultural products, ensuring that all agricultural products procured comply with the minimum quality standards outlined and specified in the Act.The agribusinesses required contract farmers to have a good safety certification audit.The respondents agreed that small-scale farmers struggle to comply with these standards owing to lack of infrastructure and resources.With regard to disease control, the interviewees revealed that small-scale vegetable producers use domestic pesticides and herbicides owing to their low costs as compared to expensive commercial pesticides and herbicides.These products are neither accepted nor recognized under the Agricultural Products Standards Act.The use of domestic pesticides by the producers leaves more residuals on their produce, making it problematic for the product to meet the pesticides and herbicides residue requirements set by the firms and government in response to the demand of consumers and export markets. 
Trust was stated to be another crucial factor in the success of contract farming arrangements between agribusiness firms and vegetable contract farmers.When choosing primary producers to contract with, the interviewees divulged that trustworthiness of producers to deliver a consistent quality and quantity of produce on time is crucial.The agribusiness firms identified that delivery and quality of small-scale producers are inconsistent, and quality and quantity of produce is not guaranteed.These shortcomings stem from a lack of planning by the farmers.When analyzing agricultural contracts with small-scale farmers in the Winterveld region in South Africa, Haggblade et al. (2012) found that trust between agribusiness firms and contract farmers plays a significant role in reducing transaction costs to the parties.According to the authors, trust between the contracting parties reduces or eliminates the costs that are associated with screening, investigating and enforcing the contracts (Haggblade et al., 2012). The cultivation of unmarketable cultivars by smallscale farmers and lack of communication infrastructure were pointed out as being another of the challenges that limit agribusiness firms from engaging with small-scale farmers in contracts.According to the respondents, small-scale farmers are isolated and lack communication lines, which prevent them from obtaining any up-to-date information on commodity prices, vegetable cultivars in demand or markets changes.This contributes to a poor relationship with agribusiness firms and leaves the farmers with the perception that they have been cheated. CONCLUSIONS AND RECOMMENDATION Small-scale farmers find it difficult, if not impossible, to engage in contractual arrangements with agribusiness firms owing to a number of unique, substantial challenges they continue to face.A major concern about the exclusion of small-scale farmers from agribusiness value chains are the strategies used by agribusiness firms in the sourcing and procurement of their agricultural raw materials.These strategies have a negative effect on the participation of small-scale farmers in agribusiness chains by effectively excluding them.Fundamentally, these strategies are rooted in economic benefit: the bottom line for agribusiness firms is profit margins; and this very basic commercial principle is what mitigates against the effectiveness of contract farming.The basic production and infrastructural specifications of agribusiness programs put contract farming beyond the means of the majority of small-scale vegetable producers.Unless the conditions related to agribusiness are met, the divide will widen between small-scale producers and large agribusiness firms, with agribusiness firms opting to do business with a network that satisfies their requirements, while small-scale producers continue to be entrapped in the vicious cycle of poverty.A strategic overhaul of contract farming is needed but will only have a chance of success if supporting private-state mechanisms address the broad spectrum of shortcomings that bar the majority of small-scale producers from achieving commercial status.From the agribusiness perspective, the state also needs to address the high transaction costs inherent in contract farming, such as transport.Without more meaningful state support and incentives, growth of small-scale farmers in contract farming is unlikely because of the financial implications for private sector companies.Without such interventions, the conflict of interest that exists between 
agribusiness and small-scale farmers in the context of contract farming will continue to hamper the development potential of contract farming as well as to hinder the agrarian reform within the Eastern Cape Province.

Fig. 1. Agricultural dichotomy: the polarized contract farming environment. Source: own elaboration based on research. In the figure, the "missing links" required for contract farming to become a viable agrarian reform strategy are indicated: central to these are agribusiness incentives from the state and adequately revised policy and practical measures, with the state providing the elements still lacking at present. The red arrow indicates the polarized, dualistic environment and the conflict of interest that continue to characterize contract farming and the interaction between agribusiness and small-scale farmers.
A Spatial-Temporal Resolved Validation of Source Apportionment by Measurements of Ambient VOCs in Central China Understanding the sources of volatile organic compounds (VOCs) is essential in the implementation of abatement measures of ground-level ozone and secondary organic aerosols. In this study, we conducted offline VOC measurements at residential, industrial, and background sites in Wuhan City from July 2016 to June 2017. Ambient samples were simultaneously collected at each site and were analyzed using a gas chromatography–mass spectrometry/flame ionization detection system. The highest mixing ratio of total VOCs was measured at the industrial site, followed by the residential, and background sites. Alkanes constituted the largest percentage (>35%) in the mixing ratios of quantified VOCs at the industrial and residential sites, followed by oxy-organics and alkenes (15–25%).The values of aromatics and halohydrocarbons were less than 15%. By contrast, the highest values of oxy-organics accounted for more than 30%. The model of positive matrix factorization was applied to identify the VOC sources and quantify the relative contributions of various sources. Gasoline-related emission (the combination of gasoline exhaust and gas vapor) was the most important VOC-source in the industrial and residential areas, with a relative contribution of 32.1% and 40.4%, respectively. Industrial process was the second most important source with a relative contribution ranging from 30.0% to 40.7%. The relative contribution of solvent usage was 6.5–22.3%. Meanwhile, the relative contribution of biogenic emission was only within the range of 2.0–5.0%. These findings implied the importance of controlling gasoline-related and industrial VOC emissions in reducing the VOC emissions in Wuhan. Introduction As major precursors of photochemical smog, volatile organic compounds (VOCs) are crucial in atmospheric chemistry. VOCs can react with NOx to form ozone, and take part in the formation of secondary organic aerosols (SOAs) through photochemical reactions and gas-phase particle reactions [1,2]. Research on pollution characteristics and VOC sources can provide insights into the pollution control of urban photochemicals and fine particulate matter [3,4]. However, the sources of VOCs are often complex because of various species, polluting industries and emission, sources largely depending on the levels of local energy consumption levels and the industrial structures [5,6]. Automobile exhaust, solvent usage, fuel evaporation, technological process in industries, and biomass burning are the major sources of anthropogenic regional emission [7][8][9]. Natural emission is also important in VOC sources. Thus, the estimation of VOC emission remains highly uncertain [10,11]. Positive matrix factorization (PMF) analysis is an effective method for identifying VOC sources and quantifying the relative contributions of various sources. Extensive research on resolving VOC sources through PMF have been conducted in recent years in different regions of China, including Beijing-Tianjin-Hebei, the Yangtze River Delta, and the Pearl River Delta [5,[12][13][14][15][16][17]. However, limited studies are available on VOC resources in central China [18,19]. As the capital city of Hubei Province in central China, Wuhan has more than 10 million residents and is characterized by its unique terrain and a rapidly growing economy. 
In recent years, the number of registered motor vehicles have rapidly increased from~1300 thousand in 2012 to >3200 thousand in 2019. With the rapid development of technological industries and the number of automobiles, Wuhan is facing severe haze pollution [20]. Petrochemical industries, such as basic chemicals, optoelectronics, car spraying, and printing, worsen the situation. Thus, Wuhan has been chosen as the study region for investigating the mechanism and related scientific issues of VOCs. However, previous studies on the source characteristics of VOCs have been conducted in residential areas [21]. Sources are accurately resolved by using the measurements of different functional areas due to the high complexity of emission sources. Therefore, this study mainly aims to acquire the characteristics and sources of VOCs from three different functional areas of Wuhan from July 2016 to June 2017, and the source categories of VOCs will be identified through the PMF model. Understanding the mixing ratios of VOC species will provide support to the local government in taking effective measures for the reduction of VOCs and O3 not only in Wuhan but also in other highly polluted regions. Information about the Sample Site Wuhan has 10 national environmental automatic air quality monitoring stations. The VOC samples were collected by three national control monitoring stations ( Figure 1).These monitoring stations were grouped into three categories on the basis of their geographic locations corresponding to the (1) residential site (shown as the Zi-yang [ZY] site and located in the central area of Wuhan; 114 • 18" E, 30 • 30" N), (2) industrial site (shown as the Zhuan-kou [ZK] site and located in the Wuhan Economic and Technological Development Area; 114 • 9"E, 30 • 28"N), and (3) background site (shown as the Mulan Lake [ML] site and located in the northwestern suburb area of the city, which was approximately 50 km from central Wuhan; 114 • 24" E, 316" N). The three types of sites reflected the VOC characteristics from different sources. Environmental instantaneous samples were collected for three days each month from July 2016 to June 2017 at 9:00 a.m. and 15:00 p.m. local time. The specific collection time of each month's environmental samples follows two main principles: On the one hand, the sampling date should-cover every month in the entire year, including both working days and weekends. The system sampling design was adopted with minimal human intervention to ensure the time representativeness of the collected samples. On the other hand, sampling in extreme weather such as wind, rainfall, and snowfall, should be avoided to rule out the accidental effects-caused by such extreme conditions. The meteorological parameters during the sampling periods were summarized in Table 1. A total of 359 valid VOC data were obtained from the regional measurements, with the exclusion of abnormal samples. Instrument and Methods Ambient VOC samples were instantaneously collected using 3.2 L fused silica stainless steel canisters, which had been precleaned with high-purity nitrogen (purity N 99.999%) and evacuated with an automated canister cleaner. A flow-limiting valve was used to collect instantaneous samples. 
A total of 101 VOC species were analyzed by a three-stage cryofocusing preconcentration system (Entech 7200; Entech Instruments Inc., Simi Valley, CA, USA) coupled with a gas chromatography-mass spectrometry/flame ionization detection (GC-MS/FID) system (TH-300B, Wuhan Tianhong Instrument Co., Ltd., Hubei, China): 28 alkanes, 11 alkenes, one alkyne, 17 aromatic compounds, nine oxy-organics, 34 halohydrocarbons and carbon disulfide. First, the samples were pumped into the preconcentration system in two ways. A Teflon filter was utilized to prevent particulate matter from entering the system, and a water trap and a carbon dioxide removal tube were used to remove H2O and CO2. Second, the cryofocus unit was cooled down to −160 °C with liquid nitrogen to trap the VOCs in the air samples. The trapped VOC components were then transferred into the GC system. The system was equipped with two columns and two detectors: the C2-C5 non-methane hydrocarbons (NMHCs) were separated with a PLOT column (Al2O3/KCl, 15 m × 0.32 mm ID) and quantified by FID, whereas the C5-C12 hydrocarbons were separated on a semipolar column (DB-624, 30 m × 0.25 mm ID, J&W Scientific, Palo Alto, CA, USA) and quantified with a quadrupole MS detector. The entire process took ~41.6 min. Pure helium (purity > 99.999%) was used as the carrier gas. The MSD was operated in scan mode with a mass range of 29-350 amu, using electron impact ionization (EI, 70 eV). Standard gases, namely the Photochemical Assessment Monitoring Stations (PAMS) standard mixture (57 NMHCs) and the TO-15 standard mixture (65 compounds), purchased from Linde Spectra Environment Gases (Danbury, CT, USA), were used to calibrate the C2-C12 VOCs. Calibration was performed at five concentrations from 0.5 to 8 ppbv with the 57-component PAMS gas standard and the 65-component TO-15 gas standard. Bromochloromethane, 1,4-difluorobenzene, chlorobenzene-d5 and 4-bromofluorobenzene were used as the internal standards for each sample to calibrate the system. The precision for each species was within 10%. A gas standard (diluted from 1 ppm to 2 ppbv) was measured daily to check the stability of the system. The method detection limits of the various compounds ranged from 2 to 50 pptv.

PMF Model PMF (V5.0, US EPA, Washington, DC, USA) is a widely used receptor model for the source apportionment of air pollution. The PMF model decomposes a matrix of speciated sample data into two matrices, the factor contributions and the factor profiles (Equation (1)), which must then be interpreted by an analyst as specific source types using measured source profile information, wind direction analysis, and emission inventories. The method is briefly introduced here; more detail can be found elsewhere [22,23]:

$x_{ij} = \sum_{k=1}^{p} g_{ik} f_{kj} + e_{ij}$ (1)

where $x_{ij}$ is an element of the data matrix X of i by j dimensions, in which i samples and j chemical species were measured; p is the number of sources; $f_{kj}$ is the species profile of each source; $g_{ik}$ is the amount of mass contributed by each source to each sample; and $e_{ij}$ is the residual for each sample and species [22]. Other details of applying PMF to VOC data to obtain source profiles and the contributions of each VOC species have been introduced in previous studies [24][25][26]. In this study, 28 VOC species were input into the PMF model. The equation-based uncertainty file provides species-specific parameters used to calculate the uncertainty of each sample.
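To make Equation (1) concrete, the sketch below factorizes a small synthetic concentration matrix into non-negative factor contributions (G) and factor profiles (F) by minimizing an uncertainty-weighted squared residual, which is the core idea behind PMF; in the actual analysis, those uncertainties come from the species-specific parameters and the below-detection-limit rules described in the surrounding text. This is a simplified, hypothetical illustration only: it uses plain multiplicative updates on synthetic data with an assumed factor count, not EPA PMF 5.0's solver, robust weighting, or rotational controls.

```python
# Minimal PMF-style factorization sketch: X ≈ G F with non-negative G and F,
# minimizing an uncertainty-weighted squared residual (Lee-Seung type updates).
# The data, uncertainties, and factor count are hypothetical; EPA PMF 5.0 adds
# robust weighting, rotational controls, and error estimation not shown here.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_species, n_factors = 50, 8, 3          # i, j, p in Equation (1)
true_G = rng.uniform(0.0, 5.0, (n_samples, n_factors))
true_F = rng.uniform(0.0, 1.0, (n_factors, n_species))
X = true_G @ true_F + rng.normal(0.0, 0.05, (n_samples, n_species))
X = np.clip(X, 1e-6, None)                           # mixing ratios must stay positive

sigma = 0.05 + 0.1 * X                               # assumed per-value uncertainties
W = 1.0 / sigma**2                                   # weights = 1 / uncertainty^2

# Multiplicative updates for weighted non-negative least squares.
G = rng.uniform(0.1, 1.0, (n_samples, n_factors))
F = rng.uniform(0.1, 1.0, (n_factors, n_species))
eps = 1e-12
for _ in range(500):
    GF = G @ F
    G *= ((W * X) @ F.T) / (((W * GF) @ F.T) + eps)
    GF = G @ F
    F *= (G.T @ (W * X)) / ((G.T @ (W * GF)) + eps)

Q = np.sum(((X - G @ F) / sigma) ** 2)               # PMF's objective function Q
print(f"Q (sum of scaled squared residuals) = {Q:.1f}")

# Relative contribution of each factor to the total reconstructed mass.
factor_mass = np.einsum('ik,kj->k', G, F)
print("relative contributions:", np.round(factor_mass / factor_mass.sum(), 3))
```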
Values below the method detection limit (MDL) were replaced by half of the MDL values and their uncertainties were set as 5/6 of the MDL values. Mixing Ratios of VOC Species at Different Sites Overall, the highest mixing ratios of total VOCs were measured at the ZK point (industrial area), followed by the center city site (ZY). The background point of ML had the lowest ratio. The mixing ratios of the total VOCs in the ZK, ZY, and ML sites were 63.56 and 61.91 ppbv, 52.90 and 39.66 ppbv, and 28.02, and 30.11 ppbv at 9:00 am. and 15:00 pm., respectively. Figure 2 also shows that the high values of alkanes in ZY and ML account for over 35% of the total, followed by oxy-organic VOCs (OVOCs) and alkenes (15-25%).The values of aromatics and halohydrocarbons were less than 15%. By contrast, the highest values of OVOCs (more than 30% of the total VOCs) characterized the ZK (industrial site). The ZK OVOCs likely came from the emissions of automobile spraying and solvent coating given that the site was located in the development zone of Wuhan, which was surrounded by a large number of automobile manufacturing, repair, and other supporting industries, to form a relatively complete automobile and supporting industrial park. The 10 most abundant VOC species measured at the three sampling sites and other cities of China are listed in Table 2. The results showed that most VOC species concentrations measured at the ZY and ZK sites were slightly higher than those measured at the ML site, except for isoprene. Among the 10 VOC species with the highest concentrations, propane was the most abundant in ZY and ZK, this funding is consistent with that of previous research [21]. The measured concentrations of ethylene, and benzene measured were lower than previous study in Wuhan [19], whereas most of the abundant species were higher than those of a previous study. The propane concentration level-of different cities was significantly higher than that of other cities, especially at the ZY and ZK sites, probably due to vehicular consumption of LPG. The i-pentane concentration level at the ZY and ZK sites was higher than that of Shanghai, Jinan, and Hong Kong, thereby indicating serious motor vehicle emissions in Wuhan in recent years. [27]; c [28]; d [29]; e [24]. Temporal Variationin VOC Speciation The seasonal variations in VOC species are affected by different emission sources and photochemical and mixing processes. In this section, the seasonal variations in VOC species and five representative VOC ratios are analyzed to obtain the seasonal variation-in the VOC sources and photochemistry shown in Figure 3. The mixing ratio of total VOC was measured in autumn, followed by measurements in summer and winter, this approach was different from many other cities [30][31][32][33]. In addition, the significantly high error range of the mixing ratios the total VOC and five main VOC species in autumn, indicated that the chemical composition of ambient VOCs could be affected by the relative contributions of emission sources and photochemical and mixing processes. Among the five VOC species, alkanes were the dominant species from 8.36 ppbv to 14.67 ppbv during each season throughout the year, and its concentration was higher in autumn and winter while lower in summer. The seasonal variation of alkenes and aromatics were similar to that of alkanes. However, the seasonal variation of oxy-organics was higher in summer but lower in winter. 
Meanwhile, the concentration difference of halohydrocarbons was weak throughout the year, exhibited the difference-in emission sources in the five VOC groups. The main source of alkanes is direct emission, of which the increase in the concentration in summer and autumn is mainly due to the significant volatilization-effect under high temperature. Thus, during July and August, the high temperature of Wuhan (exceeding 30 • C) implied high alkane productivity. Combustion in urban regions would emit large amount of olefin, especially in autumn and winter. In 2018, the comprehensive industrial energy consumption of Wuhan reached 24.1904 million tons of standard coal, excluding bulk coal, This finding indicated the important influence of combustion on the chemical composition of VOCs. The concentration of aromatic hydrocarbon was higher in autumn than in the three other seasons, mainly due to the paint and solvent source. No significant seasonal variation existed in halogenated hydrocarbon-because of the influence from environmental temperature and local industrial activity. The 72 h backward transport trajectory of-airflow over Wuhan at 9:00 a.m. and 15:00 p.m. during the sampling period was generated by using the backward trajectory model HYSPLIT4 ( Figure 4). As shown in Figure 4, the major trajectories can be grouped into three directions. Trajectory (1) showed a short transport pattern-starting from Henan, and passing over northern Hubei. Trajectory (2) originated from Xinjiang and showed extremely long transport patterns, across-Shanxi and Henan. Trajectory (3) began in Jiangsu, and passed through Anhui before arriving at Wuhan. In November, the trajectory was mainly from southwest Hubei, which was the shortest distance path of air masses among the months. Overall, the air mass transportation mainly originated in Henan, Hunan and Jiangsu, and showed small-scale and short-distance features. The large-scale and long-distance air transport mainly started from Xinjiang. The short-distance transportation was mainly used in autumn. To certain extent, this finding could explain the significantly higher VOC concentration levels in autumn those that of other seasons (Figure 4). Correlations between VOC Species The ratios between the mixing ratios of pairs of ambient VOC species are useful indictors of the major emission sources, photochemical process and the influence of the generating functions. The ratio of ambient mixing ratios of two hydrocarbons with similar chemical reactivity are theoretically equal to those of their relative emission rates from sources [26,34]. Thus, we examined the monthly variations in several groups of VOC species ratio to reveal the typical characteristics of different emissions. Figure 5 shows the monthly variations in the average mixing ratios of isopentane versus acetylene, toluene versus ethene and isoprene versus 1,3-butadiene. The reactivity for these two compounds of hydrocarbon pairs was similar even with different emission sources. Vehicular exhaust is the main source of acetylene, ethene and 1,3-butadiene [35], but gasoline evaporation is also one of the most important contribution sources to isopentane [36]. Meanwhile, toluene is usually applied in shoemaking, furniture, adhesives, printing and other solvent and paint usage [32].Isoprene influenced by biogenic emissions is mainly determined by ambient temperature and solar irradiation [35]. Figure 5a shows that the ratios of isopentane versus acetylene are higher in June and August than that of other months. 
This finding was highly correlated with the average ambient temperature, given that high temperature accelerates the evaporation of VOCs from paint and gasoline. The variation of toluene versus ethene is similar to that of isopentane versus acetylene, being higher from March to November than in winter (December to February). The ratio of isoprene to 1,3-butadiene (0.95 ppbv ppbv−1) was lowest in January, closest to the isoprene/1,3-butadiene ratio of motor vehicle exhaust emissions (0.3-0.5) measured in Beijing [35], indicating that motor vehicle exhaust was the major emission source of VOCs in January. However, the isoprene to 1,3-butadiene ratio ranged from 5 to 35 from May to September, over 10 times higher than in January. This finding demonstrated the clear emission signature of the biogenic source.

Performance of PMF Modeling Only some VOC species should be subjected to source identification in PMF. The general principles in choosing the particular species were as follows: (1) Species with a signal-to-noise ratio (which indicates whether the variability in the measurements is real or within the noise of the data) of less than 0.2 must be eliminated (US EPA, 2014). (2) Significant source tracer species must be included, even those with low concentrations. (3) Highly reactive species should be excluded, because they react with other short-lived airborne substances, except for source identification species [37,38]. (4) Calculating the source characteristics of species and high-concentration species was unnecessary [33]. Therefore, 28 VOC species were selected for PMF to resolve the relative contributions of all kinds of emission sources to the atmospheric VOC concentration, including nine C2-C8 alkanes, seven C2-C5 olefins, acetylene, five aromatic hydrocarbons, five halogenated hydrocarbons, and methyl tert-butyl ether (MTBE).

Pollution Factor Identification Using the PMF Model Figure 6 shows the factor profiles (% of species) of each pollution source for the entire year. Six factors, namely solvent usage, gasoline evaporation, industrial process, motor vehicle exhaust emissions, combustion and biogenic emission, were resolved via the PMF analysis. The factor profiles were identified based on previous studies of source apportionment and VOC source emissions [24,35,38,39]. Factor 1 was closely associated with solvent usage because of the high loadings of C6-C7 aromatic hydrocarbons, especially toluene, ethylbenzene, m/p-xylene, and o-xylene.
Isopentane is a tracer in gasoline volatilization, and ethylene, propylene, acetylene and1,3-butadiene are gasoline combustion products [14,42,43]. Therefore, Factor 4 was identified as motor vehicles exhaust. In recent years, vehicle increased by more than 15% per year in Wuhan at 2.6 million in 2016 (Wuhan Statistical Yearbook, 2016). Consequently, vehicle exhaust emission has been a major VOC source. High loadings of typical combustion emission species in Factor 5, included ethane, propane, acetylene, benzene, and ethylene, with average fractions up to 71.6%, 57.3%, 53.1%, 25.9%, and 25.4%. Therefore, Factor 5 was identified as combustion. Factor 6 was identified as biogenic emission, because isoprene, which is a tracer for biogenic emission, demonstrated had the highest factor loading (40.5%) in the entire year [44][45][46]. Figure 7 shows the average relative contribution of VOC sources in different months. The monthly variation pattern, of vehicle exhaust was unclear with relative contributions ranging from 20% to 32%. The relative contribution of combustion was significantly higher during winter and spring (January to April) at 43-52%likeydue to the significant increase in burning activities, such as straw and civil coal burning in winter. However, the relative contribution of solvent usage, gasoline evaporation, and biogenic emission were large from May to August. Especially in July, the relative contributions of biogenic emission exhibited high values (>20%) because of the high light intensities and ambient temperatures. Figure 8 illustrates the spatial distributions of relative contributions (%) in VOC sources at the three sites. The extremely high levers of VOCs were combustion, vehicle exhaust, and solvent usage in the ZK site, whereas combustion, vehicle exhaust, and industrial process were the dominant emission sources of ambient air in the ML and ZY sites. Combustion was the largest contributor (>30%) of VOCs in Wuhan. In 2014, the standard coal consumption was 2.307 million tons, which was-higher than those of other cities in China. Combustion, including industrial, civil coal burning, LPG, and biomass burning, was an important VOC source in Wuhan. Meanwhile, the relative contributions of solvent usage exhibited higher values (22.3%) in the ZK site, but lower values (6.5%) in the ML site likely due to the high density of industries in the different sites. As mentioned before, a large number of auto manufacturing and vehicle maintenance companies surrounding the ZK site used a considerable amount of solvents, such as toluene, xylene, and trimethylbenzene, in the car-spraying process. Vehicle exhaust exhibited high (30.6%) contributions in the ZY site likely due to a larger number of vehicles in central Wuhan. The largest contributor (5.0%)of biogenic emissions was found in the ML site (suburban north) with high vegetation coverage. The relative contributions from the two other kinds of sources, including gasoline evaporation, industrial process and combustion, did not show clear spatial distribution characteristics. Conclusions Field measurements of VOCs were conducted in the morning and afternoon at the ZK(residential site), ML (industrial site) and ZY (control site) in Wuhan in central China, from July 2016 to June 2017.We quantified up to 101 VOC species, and the highest mixing ratios of the total VOCs were measured at ZK, followed by ZY, and reasonably the lowest at the background point ML. 
ZK obtained a very high level of oxy-organics, whereas ZY and ML achieved high mixing ratios of alkanes. The varying composition of VOCs with different seasons reflected the heterogeneous distribution of VOC sources in the region. The monthly variations in VOC ratios (e.g., i-pentane/acetylene, toluene/ethane and isoprene/1,3-butadiene) with similar chemical reactivity indicated the typical characteristics of different emissions. The PMF model was applied to identify the possible sources and evaluate the contribution of each emission source to the total VOC concentrations. Combustion was identified as the largest contributor (>30%) of VOCs in different types of points. The relative contributions of solvent usage exhibited higher values (22.3%) in ZK site but lower values (6.5%) in the ML site. Vehicle exhaust exhibited high (30.6%) contributions in the ZY site. The largest contribution (5.0%) of biogenic emission was found in the ML site (suburban north) with high vegetation coverage. The relative contributions from the two other kinds of sources, including gasoline evaporation, industrial process, and combustion, demonstrated unclear characteristics of spatial distribution. Overall, gasoline-related emission (the combination of gasoline exhaust and gas vapor) and combustion were the two most important VOC sources. Conflicts of Interest: The authors declare no conflict of interest.
The role of an elastic interphase in suppressing gas evolution and promoting uniform electroplating in sodium metal anodes , -while This journal is © The Royal Society of Chemistry 2023 being more resource abundant than lithium. 8,9A sodium metal anode, with its high theoretical capacity (1166 mA h g À1 ) and low electrochemical potential (À2.71 V versus standard hydrogen electrode), is an ideal negative electrode for these future sodium batteries. 10,113][14] There has been a great deal of study into the similar failure processes that occur with the lithium metal anode, 15,16 and as sodium is a fellow monovalent alkali metal it is tempting to conclude that it will exhibit consistent behaviors with lithium.This is not always the case. 10,17Important differences include sodium being more reactive to carbonate solvents, 18,19 not showing the same degree of performance improvement with known lithium electrolyte additives, 20,21 and the differing mechanical properties of the sodium metal dendrites. 19Dedicated study of the failure mechanisms of the sodium metal anode is necessary to understand these differences and to develop strategies tailored to improving sodium-ion battery performance. The SEI forms on the metal anode due to electrolyte decomposition and electrochemical reactions with the metal.Its structure and composition are centrally important in governing the metal electroplating and stripping performance, 22 as it is the interphase that mediates ion diffusion between electrolyte and electrode.An ideal SEI should be uniform and compact, as heterogeneity will favor irregular metal deposition and lead to dendrite growth. 10,23The high instability of the sodium metal SEI, more unstable than that of lithium, 24 is a major contributor to the dendrite formation and SEI overgrowth issues that lead to capacity fade in sodium metal batteries.Carbonate ester solventbased electrolytes, widely used in lithium battery chemistries, have been found to form particularly unstable interphases on sodium metal anodes, yielding much lower coulombic efficiencies than their lithium anode counterparts. 18,25Several strategies have been employed to construct a more stable SEI and improve Na metal anode cycling performance, including adjusting the sodium salt and its concentration, [26][27][28] using different solvents, 25,29 and tailoring electrolyte additives. 20,30Employing ether solvent electrolytes has shown particularly promising results, with excellent electroplating and stripping performance; Cui et al. reported that simply changing the electrolyte from a typical carbonate solvent to ether glyme solvents yielded high-reversibility and non-dendritic sodium metal cycling. 25hanging the electrolyte alters the formed SEI, and thus can influence the anode performance.2][33] In their work, Cui et al. similarly suggested that the improved performance of the glyme based sodium electrolyte was due to it forming a more stable, uniform, and dominantly inorganic SEI. 25 To develop an accurate understanding of how manipulating the electrolyte and SEI composition improves the electrochemical performance measured at the cell level, we need to diagnose precisely how it influences and changes the processes that are occurring at the electrode interface.5][36][37] In situ optical microscopy has been employed to study the sodium metal deposition structure from carbonate-based electrolyte, with needle-like sodium dendrites and significant levels of gas evolution discovered. 
30,38owever, finer structural and interfacial dynamics require the higher spatial resolution afforded by in situ liquid-cell transmission electron microscopy (TEM). 39ere, we employ operando electrochemical TEM to reveal the differences in electroplating and stripping behavior when using carbonate or ether solvent sodium electrolytes.The high resolution imaging reveals significant bubble formation with carbonate solvent electrolytes, with particularly intense gas evolution localized at the metal-electrolyte interface during electrostripping, leading to delamination of the SEI from the metal.No bubble formation at the interface was observed when cycling from an ether solvent electrolyte.Atomic force microscopy (AFM) characterization suggests the ether electrolyte forms an SEI that is better able to maintain a conformal coating of the electrode during cycling, limiting opportunities for gas producing side reactions and minimizing sodium loss to SEI reformation. Results and discussion 1 M NaPF 6 dissolved in ethylene carbonate (EC) and dimethyl carbonate (DMC), EC : DMC = 1 : 1, and 1 M NaPF 6 in monoglyme (dimethoxyethane, DME), were employed as representative carbonate and ether electrolytes.In order to better understand the Na degradation mechanisms in carbonate-based electrolyte, the 1 M NaPF 6 in EC:DMC was also studied with the commonly used fluoroethylene carbonate (FEC) and vinylene carbonate (VC) additives (10%), as these carbonate additives are known to facilitate better performance 30,40,41 and thus represent the best of the candidate carbonate solvent Na electrolytes.The electrochemical performance and impedance behavior were measured in Na8Cu cells.In the first cycle (Fig. 1a), the carbonate electrolyte cell yielded a poor coulombic efficiency (CE) of 24%.As expected, the inclusion of VC or FEC additives improved the first cycle efficiency significantly, but still only reaching modest CEs of 70% and 74%, respectively.In contrast, the glyme electrolyte cell yielded a CE of 94%, surpassing all the carbonate electrolytes.Following ten cycles (Fig. 1b), the performance of the carbonate electrolyte cells did not change significantly, with measured CEs of 22%, 73%, and 75% for the no additive, VC additive, and FEC additive cells, respectively, suggesting that the low CE of the first cycles were not simply Na loss to initial SEI formation, but instead represent a continued loss mechanism occurring over repeated cycling.After ten cycles the ether electrolyte stabilized at a high CE of over 99.8% (Fig. 1b) and maintained it over 50 cycles (Fig. S1, ESI †), in good agreement with previous findings, 25 indicating the lower CE of the first cycle was due to initial interphase formation losses, and that once formed the interphase remained stable over View Article Online the subsequent cycles.Cells cycled at various rates all exhibited excellent stability with ether electrolyte (Fig. S2, ESI †), and similar resilience to capacity decay was seen in Na 3 V 2 (PO 4 ) 3 /Na full cells (Fig. S3, ESI †).Electrochemical impedance spectroscopy (EIS) measurements (Fig. 1c) showed a significantly improved ionic conductivity for the ether electrolyte.Together, the cycling performance and EIS measurements suggest a critical role for the SEI in discriminating the performances of the two electrolytes. 
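Coulombic efficiency in these Na-on-Cu half cells is simply the stripping capacity recovered divided by the plating capacity deposited in each cycle, and small per-cycle differences compound quickly in any cell with a limited Na inventory. The sketch below illustrates that arithmetic using the first-cycle and steady-state efficiencies quoted above; the plating capacity, cycle count, and the limited-inventory assumption are illustrative simplifications, not measured values or the authors' analysis.

```python
# Illustration of how per-cycle coulombic efficiency (CE) compounds over cycling.
# CE values echo the text (24%/22% carbonate, 74%/75% with FEC, 94%/99.8% ether);
# the plating capacity, cycle count, and limited-inventory picture are assumptions.
def remaining_inventory(first_cycle_ce: float, steady_ce: float, cycles: int) -> float:
    """Fraction of a fixed Na inventory still cycled after `cycles` cycles,
    assuming each cycle irreversibly loses (1 - CE) of the cycled capacity."""
    fraction = first_cycle_ce
    for _ in range(cycles - 1):
        fraction *= steady_ce
    return fraction

plating_capacity = 0.5  # mA h cm^-2 per cycle (assumed test condition)

for label, ce1, ce_ss in [("carbonate (no additive)", 0.24, 0.22),
                          ("carbonate + FEC",         0.74, 0.75),
                          ("ether (glyme)",           0.94, 0.998)]:
    frac = remaining_inventory(ce1, ce_ss, cycles=50)
    print(f"{label:24s} CE1={ce1:.0%}  CEss={ce_ss:.1%}  "
          f"inventory retained after 50 cycles ~ {frac:.2%}")
```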
To diagnose the contributions of initial SEI formation to Na loss, and to better quantify the overall size and composition of the SEI, we performed online mass spectrometry (online MS) on the cycled electrolyte cells.The coin cells were plated at 0.5 mA cm À2 for 0.5 mA h cm À2 , stripped at 0.5 mA cm À2 to 1 V (vs.Na + /Na) for one cycle, and then transferred to our online MS setup where they were titrated with deuterated water (D 2 O).By measuring the resultant gas evolution (D 2 , HD and CO 2 ), the electrically isolated 'dead' Na and some SEI components were able to be quantitatively identified (Fig. 1d-g).The quantity of reversible Na is known from the coin cell cycling performance, while the Na consumed by SEI formation is equal to the amount of non-reversible Na minus the quantity of 'dead' Na.Following the first cycle, a significant quantity of Na was lost to SEI formation for the carbonate electrolytes (Fig. 1d); 34%, 21%, and 19% for the no additive, VC additive, and FEC additive electrolytes, respectively.Employing an ether solvent electrolyte instead reduced this loss to SEI formation to just 3%, less than a tenth of the relative Na loss compared to the standard carbonate electrolyte.These MS results, along with X-ray photoelectron spectroscopy (XPS) showing a rapid decrease in the intensity of the F 1s peak (indicative of NaF 42 ) with depth for the ether electrolyte electrode (Fig. S4, ESI †), evidence a thinner SEI formed on the electrodes cycled in ether electrolyte.This is supported by the EIS (Fig. 1c), with the carbonate electrolyte resistance measurement roughly seven times higher, as the formation of a thicker SEI layer would be expected to impede transport. Along with the thinner SEI, the ether electrolyte largely prevented Na loss due to 'dead' Na detachment and isolation, limiting it to 3% (Fig. 1d), significantly outperforming even the additive-including electrolytes.'Dead' metal formation is understood to largely arise due to the uneven dissolution of dendritic electroplated structures, leading to metal detachment from the electrode and subsequent electrical isolation. 43Our online MS results suggest the ether electrolyte suppresses dendritic Na deposition, and thus limits possible Na detachment during electrostripping. Analysis of the products measured, both after one cycle (Fig. 1e-g) and three cycles (Fig. S5, ESI †), allows the quantification of the inorganic SEI product NaH, and organic SEI products (CH 2 OCO 2 Na) 2 and NaOCO 2 R (see methods).As per the collated data in Fig. 1d, the relative mass of SEI products was greater in the carbonate solvent electrolyte cells.Interphase NaH forms by reaction of sodium metal with hydrogen gas (or some sodiumbased hydrated compounds), 44 and has been suggested to hinder Na + ion transport at the electrode interface. 12The measured organic SEI products can further decompose into inorganic Na 2 CO 3 , 45 which has been shown to be unstable when exposed to sodium electrolytes. 46Thus the comparatively high quantities of NaH and Na organics found in the carbonate electrolyte SEIs support the poor electrochemical performances measured for them; the NaH will contribute to the insulating character of the SEI (Fig. 1c), and the organics will lead to continued SEI dissolution and replenishment and thus contribute to the sustained low CE (Fig. 1b) and high cumulative SEI product formation (Fig. 
S5, ESI †).In contrast, the ether electrolyte SEI remains chemically stable, with negligible increase in the quantity of SEI products measured following three cycles compared to one cycle. The mass spectrometry measurements demonstrated that ether electrolyte limits 'dead' Na formation, implying that it inhibits dendritic Na morphologies that are more prone to detachment and isolation.To verify this, we employed operando TEM to directly capture the electrodeposition and dissolution processes from the ether and carbonate electrolytes.Operando electrochemical TEM with a liquid-cell permits the nanoscale visualization of the plating and stripping dynamics from the electrolytes of interest on to a Pt working electrode (WE). 35alvanostatic electroplating and stripping at 10 mA cm À2 was performed from both DME and EC:DMC electrolytes (Fig. 2).The contrast in TEM is dependent on the relative densities of the imaged structures; 47 unfortunately, the density of Na metal is very close to the density of the surrounding solvent (Table S1, ESI †), making the delineation of deposited metal morphologies challenging due to the low contrast, particularly in the case of the ether electrolyte.However, careful examination of the complete electroplating and stripping process from the ether electrolyte reveals a largely uniform deposition and dissolution of sodium (Fig. 2a).This is more clearly seen in the captured real time movie (Movie S1, ESI †).The change in intensity due to sodium plating and stripping is confirmed by box-averaged intensity profile measurements extracted from the same location across the three frames (Fig. 2b).Alongside the largely uniform plating over the WE surface, two micron sized Na structures can be seen to deposit and dissolve, as indicated with arrows in Fig. 2a.The largely flat morphology of the plated Na supports the mass spectrometry measurements, as flat plating and uniform stripping will preclude any 'dead' Na detachment.Cycling from the EC:DMC carbonate electrolyte yielded a more disperse deposition morphology (Fig. 2c and Movie S2, ESI †).As opposed to the darker contrast of Na plated from the ether electrolyte, Na metal from the EC:DMC electrolyte appears slightly lighter than the surrounding electrolyte due to the relative density of Na being less than the EC:DMC electrolyte (Table S1, ESI †).To highlight the areas where plating occurred, background subtraction and a false-color look up table were applied to frame 2 (Fig. 2d), revealing the irregular bushy Na deposition morphology.Postmortem scanning electron microscopy (SEM) imaging of coin-cell electrodes electroplated with Na confirmed the looser and more porous Na morphology for deposition from carbonate solvent electrolyte (Fig. S6, ESI †), with the surface roughness difference of the electrodes also observable by eye (Fig. S7, ESI †). Cycling from the carbonate electrolyte also revealed many localized bubbles formed across the Na metal deposit (white contrast).Bubble formation started once the bias polarity switched from galvanostatic plating to stripping.These bubbles are not a result of beam damage, as evidenced both by a control experiment (Fig. 
S8, ESI †), and by their immediate formation upon switching the bias polarity, which instead strongly suggest an electrochemical cause.These small gas bubbles rapidly nucleated on the Na metal surfaces, grew, and eventually dissolved into the electrolyte.Following the completion of the galvanostatic cycle, moving the sample revealed a large bubble (Movie S2, ESI †).This indicates that gas evolution from the cycling process had been sufficient to overcome the saturation limit of the electrolyte in the liquid-cell, leading to degassing/gas accumulation into the large bubble.The excessive bubble formation during stripping, localized directly on the plated Na, while no such effervescence was observed for the ether electrolyte, presents a possible mechanism for the difference in their cycling performances; bubble formation at the stripping interface physically displaces the electrolyte away from the electrode, thus will impede uniform dissolution of the plated Na and lead to the lower CEs exhibited by the carbonate electrolytes. In order to further explore the localized bubble formation seen in the carbonate electrolyte system, operando electrochemical high angle annular dark field (HAADF) scanning mode TEM (STEM) was performed on cyclic voltammetry cycled NaPF 6 in EC:DMC electrolyte with 10% FEC additive (Fig. 3).We studied the carbonate electrolyte with FEC additive as it has been shown to limit gas formation, and so represents the 'best case' carbonate electrolyte.HAADF-STEM is a dark field imaging technique, as such the contrast is inverted compared to TEM imaging; i.e., low density features like gas bubbles are dark and high density features like the Pt WE are bright.Faint Na deposits can be initially distinguished in Fig. 3b (white arrows), which increase in contrast with plating time (Fig. 3c).The contrast between metallic Na and the surrounding solvent is low due to the close relative densities, yet the dynamic deposition is observable in the real time movie (Movie S3, ESI †).A bubble can also be seen to have started forming away from the working electrode (Movie S3 and Fig. S9, ESI †), showing that gas formation occurs throughout the cycle and not just during stripping, in agreement with in situ optical microscopy observations.However, once the electrostripping stage of the cyclic voltammetry sweep starts, bubble formation localized at the Na metal surfaces was once more observed (Fig. 3d, yellow arrows).Interestingly, these bubbles are seen to grow toward the working electrode, expanding to fully occupy the spaces enclosed by the SEI shells (Fig. 3g and h).The electrostripping reveals apparent SEI shells left behind, delaminated from the retreating sodium metal, within which the bubbles continue to grow and are contained.It is important to note that it is not possible to reliably distinguish between Na metal and electrolyte inside the SEI shell (darker blue in Fig. 3h) due to their similar contrast, thus liquid electrolyte may be entering the SEI shell following Na dissolution, with the growing gas bubbles subsequently displacing it.These localized bubbles eventually dissipate (Fig. 3e and f, green arrows), presumably through gaps in the SEI shell. 
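One rough way to put numbers on the bubble growth visible in such dark-field frames is to threshold the images and track the low-intensity area fraction over time. The sketch below is a hypothetical illustration on synthetic frames; the threshold, frame size and bubble geometry are stand-ins, not values from the actual image stack:

```python
# Sketch: tracking bubble growth in HAADF-STEM frames by thresholding. In dark-field imaging
# low-density features (gas bubbles) appear dark, so the area fraction below a threshold gives
# a rough measure of bubble coverage per frame. The frames here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(5)

def make_frame(bubble_radius_px: int, size: int = 256) -> np.ndarray:
    """Synthetic HAADF frame: bright background with one dark circular 'bubble'."""
    yy, xx = np.mgrid[:size, :size]
    frame = rng.normal(150.0, 5.0, size=(size, size))
    mask = (yy - size // 2) ** 2 + (xx - size // 2) ** 2 < bubble_radius_px ** 2
    frame[mask] = rng.normal(30.0, 5.0, size=int(mask.sum()))
    return frame

def bubble_area_fraction(frame: np.ndarray, threshold: float = 80.0) -> float:
    """Fraction of pixels darker than the threshold (taken as gas-filled)."""
    return float((frame < threshold).mean())

# Frames during electrostripping, with the bubble expanding inside the SEI shell.
for t, radius in enumerate([10, 25, 40]):
    frac = bubble_area_fraction(make_frame(radius))
    print(f"frame {t}: bubble area fraction = {frac:.3f}")
```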
That the SEI shells remain protruding from the surface following stripping suggests a SEI with less elasticity forms from carbonate electrolyte.Such mechanical properties mean that the SEI cannot maintain conformality with the retreating metal; the SEI does not follow the electrode's expansion and contraction through the electroplating and stripping cycle, but rather remains at the maximum extent.This may be aided by interceding gas bubbles delaminating the SEI away from the Na metal as it is stripped.The observed detachment would necessitate the SEI to be reformed every cycle, as new Na deposition would be left exposed to the electrolyte, which would explain the sustained capacity loss to SEI formation over repeated cycling experienced by carbonate electrolytes (Fig. 1d). To further characterize the differences in electroplated morphology, and to evaluate the mechanical properties of the SEI formed, we performed AFM imaging and nanoindentation measurement (Fig. 4 and Fig. S10, S11, ESI †).The morphology of the working electrode electroplated from ether electrolyte was much smoother compared to that plated from carbonate electrolyte (Fig. 4a and c), with a surface roughness of 26 AE surface after the first cycle with ether electrolyte exhibited a smooth texture, with an even smaller roughness than that of the pristine Cu foil (roughness: 12.4 AE 2.1 nm, Fig. S12, ESI †), evidencing the excellent flat plating morphology achievable from ether Na electrolyte.This flat morphology will be robust toward detachment, and thus supports the low quantities of measured 'dead' Na lost with ether electrolyte (Fig. 1d).AFM indentation measurements reveal the mechanical properties of the SEI formed from the two electrolytes on the Na metal, with stiffer and good elastic performance found for the ether electrolyte SEI and more brittle character for the carbonate electrolyte SEI (Fig. S10 and S11, ESI †), which is in good agreement with similar experiments performed recently on Sn anodes. 48The less elastic SEI formed from the EC:DMC electrolyte may be attributed to the larger amount of carbonate ester (NaO-CO 2 R) and brittle inorganic components (NaH and Na 2 CO 3 ) formed, as seen in our mass spectrometry measurements (Fig. 1).These measured mechanical properties support our liquid-cell STEM observations, where the non-conformal SEI was seen to rigidly remain following Na electrostripping. In situ liquid-cell AFM imaging captured the roughness evolution over the course of plating, and revealed the dynamics behind the exceptionally flat plating morphology achieved with the ether solvent electrolyte (Fig. 4e-h).After 600 s of plating, Na metal was evenly distributed on the Cu working electrode.With further plating to 1200 s, Na metal was observed to grow at lower sites, highlighted by dashed yellow regions, which were further away from the counter electrode.The same phenomenon was also observed toward the end of plating (1800 s), where the regions labeled by dashed blue curves again demonstrated growth of Na metal at lower sites.The tendency of Na deposition to preferentially occur at lower sites is reflected in the decreasing surface roughness measured over the course of the plating cycle (Fig. 4g), suggesting that the deposition dynamics from ether electrolyte maintains the smoothness of the electrode surface.This electroplating behavior is unexpected, as deposition should typically occur at protrusions rather than recesses due to the higher local electric field, leading to dendrite growth. 
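The roughness values quoted in this discussion are root-mean-square statistics over the AFM height map. A minimal sketch of that calculation, using synthetic height data in place of the exported topography grids:

```python
# Sketch: root-mean-square (RMS) roughness of an AFM height map, the metric used above to
# compare electrodes plated from ether and carbonate electrolytes. The height maps are
# synthetic; real data would come from the exported AFM topography grid (heights in nm).
import numpy as np

def rms_roughness(height_nm: np.ndarray) -> float:
    """RMS deviation of heights about their mean, in the same units as the input."""
    z = height_nm - height_nm.mean()
    return float(np.sqrt(np.mean(z ** 2)))

rng = np.random.default_rng(1)
smooth_surface = rng.normal(scale=8.0, size=(256, 256))   # smooth, ether-like deposit (illustrative)
rough_surface = rng.normal(scale=60.0, size=(256, 256))   # rough, carbonate-like deposit (illustrative)

# Averaging over several scan regions, as done in the paper, reduces sensitivity to local features.
print(f"smooth sample  Rq = {rms_roughness(smooth_surface):5.1f} nm")
print(f"rough sample   Rq = {rms_roughness(rough_surface):5.1f} nm")
```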
49,50The smoothness of the electrode was found to be maintained after stripping (Fig. 4h), with a roughness of 7.1 AE 2.1 nm, confirming the good stripping performance with ether electrolyte. A possible explanation for this unusual recess deposition mechanism may be the combination of the ether SEI's mechanical properties and its good ionic conductivity.With a higher Young's modulus and a wider elastic region of the SEI formed from the ether electrolyte, a greater compression on Na can be maintained without breakage of the SEI.The compressive stress has been modelled to have a suppression effect on the deposition kinetics by shifting the electrochemical potential at the interface. 51Such suppression of deposition kinetics would be the strongest at the tip of any deposited Na, where the elastic deformation from compressive stress would be the largest, leading to Na deposition at the valleys, as seen in our in situ AFM (Fig. 4e).The SEI layer formed from the ether electrolyte was also significantly thinner, as evidenced by our online-MS, which identified that the Na consumed to form SEI from the EC:DMC electrolyte was ten times more than for the ether solvent in the first cycle.This thinner SEI ensured faster Na + transport across it compared with that formed from the carbonate electrolyte (Fig. 1c), facilitating a less diffusion-limited process of Na deposition at the interface along with a lower overpotential of Na deposition.This lower overpotential of Na deposition will have aided a denser and more homogeneous growth of Na.The low deposition overpotential from an ionically conductive SEI, together with a strong suppression of deposition kinetics at Na tips, might contribute to a self-regulating Na interface that maintains a flat morphology, as observed in our experiments. Confirmation by AFM of the enhanced elastic properties of the ether derived SEI formed on Na presents a potential explanation for why interfacial bubble formation was not observed on electrostripping with the ether electrolyte.For the carbonate electrolyte derived SEI, the dissolution of Na metal during stripping led to the rigid SEI delaminating, losing conformal attachment to the anode.This leads to the exposure of previously secluded NaOCO 2 R and Na 2 CO 3 SEI components to the electrolyte, where they may subsequently react with NaPF 6 and release CO 2 gas. 46The more elastic SEI derived from the ether electrolyte prevents this, due to it maintaining an intimate conformal contact with the retreating sodium metal during stripping, and it also being composed of fewer NaOCO 2 R (Fig. 1g) and Na 2 CO 3 components. 48To confirm this mechanism we performed differential electrochemical mass spectrometry (DEMS) on sodium battery cells cycled with ether and carbonate electrolytes, and compared the measured CO 2 evolved from the respective cells.The cells were sealed during operation, allowing the evolved gases to accumulate, and then following cycling the gases were released and carried to the MS instrument (see methods).The results show significantly more CO 2 evolution following cycling from the carbonate electrolyte cell then from the ether electrolyte (Fig. 5).The greater levels of CO 2 evolved from carbonate electrolyte cycled cells support our proposed model of its SEI being more susceptible to CO 2 producing side reactions. 
We performed further coin cell studies to illustrate how the electrolyte solvent consideration remains relevant for cells beyond that of the metal anode half-cell (i.e., the Na/Cu coin cell). Na cells cycled with hard carbon anodes demonstrate that the use of ether electrolyte drastically improves cell cycle life, with less capacity fade in comparison to an equivalent carbonate electrolyte cell (Fig. S13, ESI†). Recent works using an ether- rather than carbonate-based Na electrolyte with hard carbon anodes have shown this performance improvement as well, and have attributed it to the distinctly thin and conformal character of the formed interphase layer.45,52,53 Similar performance improvements with ether solvent electrolyte have been found with Sn anodes. This suggests that the beneficial role of an ether solvent is agnostic to the anode material. The thinner SEI layer formed, with more accommodating mechanical properties that favor conformality during cycling, appears to offer generic utility for any Na anode.

Conclusion

We have explored the mechanisms behind the improved sodium anode performance when cycling in ether rather than carbonate based electrolyte. Operando electrochemical TEM revealed extensive gas bubble formation during stripping along the electrode interface when operated in carbonate electrolyte, while no such interfacial effervescence was observed when cycled with ether electrolyte. This gas formation at the interface will displace the electrolyte, and thus impede complete dissolution of plated sodium. The TEM imaging, alongside AFM, also highlighted the far smoother surface morphology for Na electroplated from ether electrolyte, which in situ AFM showed to be the result of the more elastic and robust SEI. We propose that these favorable mechanical properties also prevent gas formation from SEI reactions that occur during stripping, with the more inflexible, non-conformal, and brittle SEI derived from carbonate electrolyte more prone to delaminating from the Na and undergoing CO2-producing side reactions. The SEI was imaged by operando TEM to remain fixed in place, losing conformality with the electrode during electrostripping, and thus exposing fresh areas to side reactions with the electrolyte.

Our work shows the critical importance of designing electrolytes such that they yield an elastic and robust SEI layer, as these properties promote uniform flat electroplating and inhibit gas-producing side reactions. Realizing this beneficial SEI by employing ether electrolytes will still require accommodation for their relatively poor high-voltage stability,54,55 which currently prevents their use with high-voltage cathode materials.56 Continued research into strategies that overcome their limited oxidation stability is ongoing,[57][58][59] including employing a high salt concentration,60,61 forming a localized high concentration via use of a cosolvent,62 or including stabilizing additives.63,64

Methods

Electrochemistry

The cells were plated at 0.5 mA cm⁻² for 0.5 mA h cm⁻² and then stripped at 0.5 mA cm⁻² to 1 V. The electrochemical measurements were performed on a Biologic VMP3 system.

Real-time liquid-cell (S)TEM

We used a Protochips Poseidon 510 TEM holder to flow electrolyte into the liquid-cell with a syringe pump, as per our previous work.32
A continually replenished thin layer of electrolyte was thus confined between two Si-SiN chips inside the vacuum of the TEM. The flow rate was 120 mL h⁻¹ during (S)TEM imaging and 240 mL h⁻¹ for electrolyte replenishment after each cycle. Before we introduced the electrolyte we flowed dried DMC or DME for 40 minutes at a 240 mL h⁻¹ flow rate. A Gamry Reference 600 was used for cyclic voltammetry and galvanostatic measurements between the reference, working and counter Pt electrodes patterned on the electrochemical chip. The TEM and high angle annular dark field (HAADF) STEM imaging were performed with a JEOL 3000F (300 kV) using a 50-micron condenser aperture. The beam effect is shown in Fig. S6 (ESI†), suggesting the beam dose used was acceptable. The STEM beam current was calibrated by a Faraday cup (10 pA). All of the STEM images were recorded with a pixel dwell time of 3 µs pixel⁻¹ and at 512 × 512 pixels (calculated pixel size of 1.2 × 10⁴ Å²). These imaging conditions correspond to a radiation dose of ≈1.6 × 10⁻² e⁻ Å⁻². All the electrolyte preparation was performed in an argon glovebox or sealed systems. Cyclic voltammetry used a scan rate of 20 mV s⁻¹.

XPS

The Na was deposited from glyme or EC:DMC electrolytes in Na||Cu coin cells at a current density of 0.5 mA cm⁻² for 1 hour. XPS measurements were conducted using a PHI5000 VersaProbe III instrument (Ulvac-PHI, Inc.). A monochromatic Al source was used to generate X-rays using a power of 25 W, a voltage of 15 kV, and a beam spot size of 100 µm. A pass energy of 55 eV was set for the analyzer. An electron neutralizer gun was used to prevent any surface charge build-up. Depth profiling of samples was done with an Ar ion source at 2 kV and 1.8 mA over an area of 3 × 3 mm² for 60 seconds and 660 seconds, respectively. The spectra were calibrated according to the signal of adventitious carbon at 284.8 eV. The results were analyzed and fitted via CASAXPS software. Samples were transferred from the glovebox into the chamber via a sealed Ar-filled vessel without exposure to air.

Online MS

The procedure for online MS (built in-house) is illustrated in our previous work (Fig. S14, ESI†).32 First, the cycled Na||Cu CR2032 coin cells, assembled with two glass microfiber separators (Whatman GF/D, dried in a vacuum oven) and plated at 0.5 mA cm⁻² for 0.5 mA h cm⁻² and then stripped at 0.5 mA cm⁻² to 1 V (vs. Na+/Na) for one cycle, were disassembled in an argon glovebox. Then the copper foil and the glass fiber separators on the Cu side were transferred into a well-sealed vial (along with a magnetic stirrer) without further treatment. After the vial was connected to the MS, the gas composition inside the vial was analyzed and recorded until the gas content was stable. The carrier gas flow rate (r) is controlled at 1 mL min⁻¹ by a digital flow meter (Bronkhorst), so the total amount n (in moles) of the target gas (e.g. D2, HD) can be calculated from the recorded trace as n = (r/V_m) ∫ P dt. Here, V_m is the molar volume of gas (24.79 L mol⁻¹ at 25 °C) and P is the fraction (measured as a percentage) of the target gas in the carrier gas stream. More setup details can be found in our previous work.65,66 We ran a 'blank' experiment to control for any potential gas emission from other components of the cell, e.g. the separator, electrodes, etc. For this we prepared an identical coin cell but did not subject it to cycling. This was then disassembled and characterised by online MS as per the standard experiments. The results showed essentially no emission of HD or D2 for the control cell.
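As a worked illustration of the titration quantification, the sketch below integrates a synthetic MS trace using the relation implied by the definitions above, n = (r/V_m)·∫P dt, and converts the detected D2 into 'dead' Na via the 2 Na : 1 D2 stoichiometry of the Na + D2O reaction. The trace shape and decay constant are assumptions, not recorded data, and the exact working equation of the in-house setup may differ:

```python
# Sketch: converting an online-MS trace into moles of evolved gas with n = (r / V_m) * integral(P dt),
# where r is the carrier-gas flow rate, V_m the molar volume, and P the (fractional) share of the
# target gas in the carrier stream. The trace below is synthetic, not measured data.
import numpy as np

r_flow = 1.0e-3          # carrier gas flow rate, L min^-1 (1 mL min^-1, as quoted above)
V_m = 24.79              # molar volume at 25 degC, L mol^-1

t_min = np.linspace(0.0, 30.0, 301)                   # time axis, minutes
P_frac = 0.002 * np.exp(-t_min / 8.0)                 # hypothetical D2 fraction decaying as the vial is flushed

# n = (r / V_m) * integral of P dt  (trapezoidal integration over the recorded trace)
n_mol = (r_flow / V_m) * np.trapz(P_frac, t_min)
print(f"evolved D2: {n_mol * 1e6:.2f} micromol")

# For the D2O titration, D2 counts metallic ('dead') Na via 2 Na + 2 D2O -> 2 NaOD + D2,
# so moles of dead Na = 2 * moles of D2 detected (NaH is counted analogously through HD).
print(f"implied dead Na: {2 * n_mol * 1e6:.2f} micromol")
```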
DEMS The sealed vial cell was assembled in a glove box, using Na metal attached to a stainless steel mesh as the working electrode and a Cu foil as the counter electrode.Input and output gas channels were supplied into the vial, and an isolation valve allowed carrier gas to flow to the MS while bypassing the cell.This valve was shut during cycling experiments (of 1 cycle, 5 cycles, and 10 cycles, in that order), with gas produced by the cycling thus sealed inside the vial.Following cycling the valve was opened and the evolved gases, conveyed by the Ar carrier and across a cold-trap to condense any solvent vapor, were detected and recorded by MS.The cell was cycled at a rate of 0.5 mA cm À2 , electroplated for 10 min, and electrostripped to a cut-off voltage of +1 V vs. Na. AFM The Bruker Dimension Icon AFM was used to characterise both ex situ and in situ samples in a glovebox filling with argon (o0.1 ppm H 2 O, o0.1 ppm O 2 ).All AFM probes used were calibrated according to standard samples (Sapphire and Ti roughness sample from Bruker), interpolating the actual spring constant and the tip radius.The PeakForce QNM model was conducted to capture the topography of cycled electrodes corresponding mechanical data.The 3D topography was reconstructed by NanoScope Analysis 2.0 software with calculated root-meansquare roughness values from at least three different regions.The mechanical nanoindentation experiments were conducted in a 5 mm  5 mm region with over 100 points evenly disturbed in the field of view.The AFM probes used have a spring constant of around 20 N m À1 with a tip radius of 10 nm.To measure the Young's modulus of the SEI on the cycled electrode without any plastic deformation, the PeakForce setpoint was controlled to 20 nN within the elastic region.The higher 350 nN setpoint was deliberately applied to penetrate and measure the elastic limit of the SEI on the cycled electrode.The force response curves were fitted by Derjaguin-Muller-Toporov (DMT) model to calculate the Young's modulus.All the Young's modulus and elastic region results were statistically summarised to be representative for the samples.This methodology has been widely used to characterise the mechanical of the SEI. 48or the in situ AFM study on the DME electrolyte, a closed electrochemical cell with three electrodes was used, as illustrated in Fig. S15 (ESI †).Cu foil was used as the working electrode, a concentric ring of Na metal was used as the counter electrode, and a flake of Na metal on the Cu wire was used as the reference electrode.After the AFM probe approached the cell and immersed into the electrolyte, the cell was closed by a rubber ring around the probe holder.The cell was connected to a Gamry potentiostat to electrochemically plate Na onto the Cu working electrode.The PeakForce QNM in fluid mode was used with fluidcompatible probes with a spring constant around 15 N m À1 .After the pristine scan on the Cu surface, the topography of the working electrode was captured during plating from the ether electrolyte at 0.5 mA cm À2 with different capacities of Na plated.The pristine morphology of Cu foil under liquid electrolyte was captured and compared to an identical scan performed in the air, verifying the morphology and capture resolution excluding any side effect from liquid electrolyte (Fig. S12, ESI †). 
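For the nanoindentation analysis described above, the Young's modulus comes from fitting force-indentation curves with the DMT contact model. The sketch below fits synthetic data with scipy; the tip radius matches the nominal value quoted above, while the modulus, adhesion force and noise level are illustrative assumptions:

```python
# Sketch: extracting a modulus from an AFM force-indentation curve with the Derjaguin-Muller-Toporov
# (DMT) contact model, F = (4/3) * E_r * sqrt(R) * d^(3/2) - F_adh, as used for the nanoindentation
# analysis above. The synthetic data stand in for the calibrated experimental force curves.
import numpy as np
from scipy.optimize import curve_fit

R_TIP = 10e-9  # tip radius, m (nominal value quoted above)

def dmt_force(indent_m, E_reduced_Pa, F_adh_N):
    """DMT force (N) versus indentation depth (m)."""
    return (4.0 / 3.0) * E_reduced_Pa * np.sqrt(R_TIP) * indent_m ** 1.5 - F_adh_N

# Synthetic 'measured' curve: 1 GPa reduced modulus, 2 nN adhesion, plus noise.
depth = np.linspace(0, 5e-9, 50)
force = dmt_force(depth, 1.0e9, 2e-9) + np.random.default_rng(2).normal(0, 5e-11, depth.size)

popt, _ = curve_fit(dmt_force, depth, force, p0=(1e8, 1e-9))
E_reduced, F_adh = popt
print(f"fitted reduced modulus: {E_reduced / 1e9:.2f} GPa, adhesion: {F_adh * 1e9:.2f} nN")
# The sample Young's modulus then follows from E_r after the tip/sample Poisson-ratio correction.
```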
SEM and FIB Electroplated Na (plating at 0.5 mA cm À2 for 0.5 mA h cm À2 ) were characterized by a Carl Zeiss Merlin SEM.The crosssectioning of plated Na (plating at 0.5 mA cm À2 for 3 mA h cm À2 ) was examined by Thermo Scientific Helios G4 Plasma FIB DualBeam (or PFIB) system.The cells were disassembled in a glove box, and the sample transfer process was performed by an air-tight holder, making sure the samples were not contaminated by air. Fig. 1 Fig. 1 Electrochemical performance comparison of sodium electrolytes with carbonate or ether solvents.Deposition capacity at 0.5 mA cm À2 for 0.5 mA h cm À2 after the (a) first and (b) tenth charge-discharge cycle.(c) EIS characterization of the ether and carbonate Na electrolytes.Magnified view of the EIS of NaPF 6 in DME shown in the insert.(d) Relative distribution of Na after the first charge-discharge cycle from titration online MS.Quantitative online MS measurements, after one cycle, of (e) 'dead' Na, (f) NaH, and (g) CO 2 , the signature of organic SEI components such as (CH 2 OCO 2 Na) 2 and NaOCO 2 R. Fig. 4 Fig. 4 AFM surface profiling of cycled electrodes and in situ AFM imaging of electroplating and stripping.(a-d) AFM images of Cu electrodes galvanostatically cycled (plating at 0.5 mA cm À2 for 0.5 mA h cm À2 ) in ether and carbonate solvent Na electrolytes, imaged following (a and c) electroplating and (b and d) stripping.The height is indicated by the colour scale.(e) In situ liquid-cell AFM topography time-series of the Cu WE during Na electroplating at 0.5 mA cm À2 .(f) Perspective views of the images in (a).(g) Evolution in electrode roughness with electroplating time.(h) The Cu WE after galvanostatic stripping to 1 V (vs.Na + /Na). Fig. 5 Fig. 5 Differential electrochemical mass spectrometry (DEMS) comparison of quantified gas evolution from carbonate and ether electrolyte cells.Measured CO 2 evolution following cycling with (a) carbonate (EC:DMC) and (b) ether (DME) solvent electrolytes.Shaded areas are the periods where the cell is sealed and cycled, un-shaded areas are where Ar carrier gas is flowed through the cell to the MS.The cells were cycled cumulatively for 1, 5, and finally 10 cycles at a current density of 0.5 mA cm À2 . All the electrolyte and coin cell preparation were performed in an argon-filled glove-box (H 2 O o 0.1 ppm, O 2 o 0.1 ppm).The prepared electrolytes were 1 M NaPF 6 in EC : DMC = 1 : 1 (v/v) (battery grade, Kishida Chemical) with and without 10 wt% FEC and VC (anhydrous, Z99%, Sigma-Aldrich), and 1 M NaPF 6 in DME (anhydrous, Z99.5%, Sigma-Aldrich).The hard carbon and Na 3 V 2 (PO 4 ) 3 cathode were both purchased from MTI Kejing Corporation (Shenzhen, China), and were used without any further processing.The active mass loading of hard carbon and/or Na 3 V 2 (PO 4 ) 3 used was around 3 mg cm À2 .The water content of prepared electrolytes was measured by Karl Fischer titration (C30S Coulometric KF Titrator, Mettler Toledo) with methanol-free reagents three times, each showing a water content value of o5 ppm.During the electrolyte preparation process, all the tools we used (which include syringes, vials, tweezers, etc.) were dried in vacuum oven for over 12 hours prior to bringing into the glovebox.The coin-cells for online MS were Na8Cu CR2032 coin cells with two pieces of glass microfiber separators (Whatman GF/D, dried under vacuum oven). 
When the gas composition in the vial was stable (showing no D2, O2, or CO2 and an essentially 100% Ar carrier gas), excess degassed D2O (>99.96 atom% D, Sigma-Aldrich) was injected into the vial, and the released D2, HD and CO2 were detected by MS (Prima BT, Thermo Fisher Scientific). Finally, the quantities of metallic Na, NaH and some organic SEI species formed during cycling were calculated from the amounts of D2, HD and newly evolved CO2 gas detected by MS.
2023-01-12T17:35:30.826Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "3e0a757bd72f21403018241e17e961515937da7f", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2023/ee/d2ee02606f", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "932bb3a3e3c9499846283f43d57ebe17e02e4ba1", "s2fieldsofstudy": [ "Chemistry", "Materials Science", "Environmental Science", "Engineering" ], "extfieldsofstudy": [] }
119225966
pes2o/s2orc
v3-fos-license
Cosmological constraints on the Hu-Sawicki modified gravity scenario

In this paper we place new constraints on an f(R) modified gravity model recently proposed by Hu and Sawicki. After checking that the Hu and Sawicki model produces a viable cosmology, i.e. a matter dominated epoch followed by a late-time acceleration, we constrain some of its parameters by using recent observations from the UNION compilation of luminosity distances of Supernovae type Ia, including complementary information from Baryonic Acoustic Oscillations, Hubble expansion, and age data. We find that the data considered are unable to place significant constraints on the model parameters, and we discuss the impact of a different assumption about the background model on cosmological parameter inference.

I. INTRODUCTION

The recent cosmological data from Cosmic Microwave Background Anisotropies, galaxy surveys and luminosity distances of type Ia supernovae are all providing supporting evidence for a dark energy component, responsible for more than 70% of the total energy budget in our universe (see e.g. [1]). Several candidates have been suggested for explaining this component, as, for example, minimally coupled scalar fields (see e.g. [2] and references therein). However, it may also be that the cosmological evidence for acceleration comes from the wrong assumption of general relativity, i.e. that no dark component is present but actually a modification to gravity is at work. In this respect, f(R) theories seem to provide a quite large number of viable models (for a recent review see [3] and [4]). A particular f(R) model that evades solar system tests has been proposed by Hu and Sawicki ([5], HS hereafter). The model has a modified Einstein-Hilbert action

S = ∫ d⁴x √(−g) [ (R + f(R)) / (2k²) + L_m ],   (1)

f(R) = −m² c₁ (R/m²)ⁿ / [c₂ (R/m²)ⁿ + 1],   (2)

where L_m is the matter lagrangian, k² = 8πG, m² = k²ρ/3, and c₁, c₂ and n are free parameters. As shown in [5] this model is able to reproduce the late time accelerated universe but with distinctive deviations from a cosmological constant. In this paper we investigate the cosmological viability of the HS model in more detail. After a brief description of the model, in the next section we show that the HS model indeed satisfies the general conditions presented by [6] for a viable f(R) model. In Sec. III we compare the HS model with current data from SN-Ia luminosity distances from the UNION catalog [8], Baryonic Acoustic Oscillation data [11] and age constraints from the analysis of Simon, Verde and Jimenez [9]. We show that the current data are fully compatible with the HS model and that, unless a prior on the matter density is used, the parameters of the model are unconstrained. In particular, we analyze the impact of the HS model on the determination of the current matter density. As we will show, assuming the HS model instead of the standard cosmological model could relax the constraints on the effective matter density. A future incompatibility between the values of the matter density Ω_M determined from different datasets under the assumption of the standard ΛCDM model could therefore provide a hint for a modified gravity scenario.

II. THE HU-SAWICKI MODEL

Let us briefly review in this section the basic equations and results of the HS model. Varying the action in Eq. 1 with respect to the metric g_µν, one obtains the modified Einstein equations, where f_R = df/dR and f_RR = d²f/dR². Assuming a flat FRW metric, one obtains the modified Friedmann equation, with ′ ≡ d/d ln a and ρ the matter density at present time.
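To make the functional form concrete, the sketch below evaluates the Hu-Sawicki f(R), in the form quoted from the original HS paper, together with f_R and f_RR in units of m². The parameter values are illustrative only, not the constrained values discussed later:

```python
# Sketch: numerical evaluation of the Hu-Sawicki f(R) and its derivatives f_R = df/dR and
# f_RR = d^2f/dR^2, working in units of m^2 so that x = R / m^2. Parameters are illustrative.
import numpy as np

def hs_f(x, c1, c2, n):
    """f(R)/m^2 for the Hu-Sawicki model, with x = R/m^2."""
    return -c1 * x**n / (c2 * x**n + 1.0)

def hs_fR(x, c1, c2, n):
    """df/dR (dimensionless), from differentiating hs_f with respect to R = m^2 * x."""
    return -c1 * n * x**(n - 1) / (c2 * x**n + 1.0)**2

def hs_fRR(x, c1, c2, n):
    """m^2 * d^2f/dR^2, from a second differentiation of hs_f."""
    num = -c1 * n * x**(n - 2) * ((n - 1.0) - (n + 1.0) * c2 * x**n)
    return num / (c2 * x**n + 1.0)**3

c1, c2, n = 2.0, 1.0, 1.0          # illustrative parameters
for xi in np.logspace(0, 3, 4):    # R/m^2 from 1 to 1000
    print(f"R/m^2 = {xi:7.1f}  f/m^2 = {hs_f(xi, c1, c2, n):8.4f}  "
          f"f_R = {hs_fR(xi, c1, c2, n):9.5f}  m^2 f_RR = {hs_fRR(xi, c1, c2, n):10.6f}")

# In the high-curvature limit (c2 * x^n >> 1), f tends to the constant -c1/c2, so the model
# approaches cosmological-constant-like behaviour, consistent with the w_x -> -1 trend noted below.
```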
Defining the new variables y H = (H 2 /m 2 ) − a −3 and y R = (R/m 2 ) − 3a −3 the Friedmann equations can be expanded in a system of two ordinary differential equations: In order to compare the HS model with the cosmological constraints usually derived under the assumption of dark energy, it is useful to introduce an effective dark energy component with present energy densityΩ x = 1 −Ω m and equation of state w x , whereΩ m is the effective matter energy density at present time. Of course, in reality no dark energy component is present, the only component present is matter and modified gravity gives the acceleration. Considering the Friedmann equation: the effective equation of state parameter w x for the dark energy component is given by The free parameters c 1 and c 2 that appear in Eq. 2 can be expressed in function of the effective density parameters by: These relations show that the free parameters of the model areΩ m , n, and f R0 . The latter is constrained to |f R0 | 0.1 by solar system tests [5] and we will not investigate larger values in the next sections. Solving the differential equations system for different values of n, f R0 andΩ m , it is possible to obtain various evolution trends for the equation of state parameter w x . In Fig.1 and Fig.2 we plot the behavior of w x in function of As we can see in Fig.1 and Fig.2 the equation of state parameter w x follows a peculiar behavior in function of the redshift. At the present time (z = 0) w x has always a value higher than the one predicted by the ΛCDM model (w = −1) and, moving towards higher redshifts, it decreases crossing into the phantom region, i.e., assuming values lower than −1. For even higher redshifts, w x moves asymptotically towards −1. The same behavior is shown for any value of n and f R0 , moreover decreasing the absolute value of f R0 brings w x closer to −1, while increasing n shifts the phantom crossing at lower redshift. The predicted expansion must be consistent with standard cosmological results, i.e., should produce an accelerated era after radiation and matter dominance. Modified gravity models consistent with current observations, for example, should not change the scale factor evolution during the matter era. Hence, it is possible to derive general conditions for the cosmological viability of f (R) theories. Introducing the parameters it is possible to show [6] that for f (R) theories the following conditions apply: • The model has a standard matter era with a following accelerated phase if m(r) ≈ 0 and m ′ (r) > −1 with r ≈ −1 • The accelerated phase goes asymptotically towards the one produced by a dark energy with equation of state • The expansion is not of the phantom type (w < −1) if It is possible to calculate m(r) for the Hu and Sawicki model and to show the cosmological viability of this model. In Fig.3 we show that, for example, setting n = 1,Ω m = 0.3 andΩ Λ = 0.7, one obtains two solutions for m(r), one living outside the viability region and the other corresponding to an acceptable expansion. A. Method In order to constrain the free parameters of the Hu and Sawicki model (Ω m , n and f R0 ), we predicted the expected theoretical values for a set of observables. As now common in the literature, we considered the luminosity distance, defined by: and the Hubble parameter: Moreover, we also considered the quantity: where z * = 0.35, Γ(z * ) = z * 0 dz/ǫ(z) and ǫ(z) = H(z)/H 0 . The value of this parameter can be obtained from observations of Baryon Acoustic Oscillations (BAO) [7]. 
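A minimal numerical implementation of the observables just defined is sketched below. A flat ΛCDM-like E(z) is used as a stand-in for the numerically integrated HS expansion history, and the expression for A follows the standard Eisenstein et al. (2005) form, which is assumed here since the formula itself is not reproduced in the text:

```python
# Sketch: luminosity distance d_L(z) and the BAO parameter A for a given expansion history
# E(z) = H(z)/H0. A flat LCDM-like E(z) stands in for the numerical HS solution.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458   # speed of light, km s^-1
OMEGA_M = 0.3
H0 = 72.0             # km s^-1 Mpc^-1 (HST prior quoted in the text)

def E(z):
    """Dimensionless expansion rate for a flat LCDM-like background (placeholder for HS)."""
    return np.sqrt(OMEGA_M * (1 + z)**3 + (1 - OMEGA_M))

def luminosity_distance(z):
    """d_L(z) in Mpc for a flat universe: d_L = (1+z) * (c/H0) * int_0^z dz'/E(z')."""
    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (1 + z) * (C_KM_S / H0) * integral

def bao_A(z_star=0.35):
    """A = sqrt(Omega_m) * E(z*)^(-1/3) * [Gamma(z*)/z*]^(2/3), with Gamma(z*) = int_0^z* dz/E(z)."""
    gamma, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z_star)
    return np.sqrt(OMEGA_M) * E(z_star)**(-1.0 / 3.0) * (gamma / z_star)**(2.0 / 3.0)

print(f"d_L(z=1.0) = {luminosity_distance(1.0):.0f} Mpc")
print(f"A(z*=0.35) = {bao_A():.3f}   (Eisenstein et al. measured 0.469 +/- 0.017)")
```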
Hence, we have another way to compare model predictions with data. We used the supernovae data from Kowalski et al. [8] to obtain the observational trend of d_L(z), and we considered H(z) values obtained by Simon, Verde and Jimenez [9] and a prior on the Hubble parameter H_0 = 0.72 ± 0.08 derived from measurements from the Hubble Space Telescope (HST) [10]. Finally, we used the value of A from Eisenstein et al. [11]. We compute a χ² variable for each observational quantity and then combine the results in a single variable χ² = χ²_SN + χ²_BAO + χ²_H + χ²_HST. Once the theoretical evolution of the observational quantities is defined, we can define a likelihood function of n and f_R0 as L ∝ exp[−(χ² − χ²_min)/2], where χ²_min is the minimum value in the considered range of n and f_R0.

B. Results

Combining the results obtained from the comparison between the experimental data for H(z), A and d_L(z) and their theoretical values, we can constrain the free parameters n and f_R0 for different values of Ω_m and Ω_x. Setting Ω_m = 0.2 and Ω_x = 0.8, it is possible to find an upper limit on n and on f_R0, n < 1.6 and f_R0 < −0.03 at 2σ, while performing the same analysis with different values of Ω_m and Ω_Λ shows that raising Ω_m leads to more loosely constrained parameters. We note, however, that for higher values of Ω_m higher n are preferred, while smaller values of n are more in agreement with the data for smaller Ω_m. As we can see from Figure 4, for Ω_m = 0.3 both parameters are almost totally unconstrained; this points to the need for an independent measurement of the effective matter content (Ω_m) in order to obtain some constraints on n and f_R0. It is interesting to compare the best fit values of χ² obtained in the modified gravity framework with the χ² of a cosmological constant model. In order to quantify the goodness-of-fit of the two models we use the Akaike information criterion (AIC) [12] and the Bayesian information criterion (BIC) [13], defined as AIC = −2 ln L + 2k and BIC = −2 ln L + k ln N, where L is the maximum likelihood, k the number of parameters and N the number of data points. In Fig. 5 we plot the best fit values of the AIC and BIC tests as a function of Ω_m for the standard model based on a cosmological constant and for the HS model, respectively. As we can see, while the cosmological constant gives slightly better values for the overall best fit, when larger or smaller values of Ω_m are considered the AIC and BIC tests give definitely better values for the HS model. In short, the observables considered depend more weakly on Ω_m in the HS scenario. It is therefore important to quantify the impact of a different choice of the theoretical background model on the derived constraints on Ω_m. In Fig. 6 we compare the constraints on the Ω_m parameter derived under the assumption of the HS scenario with the similar constraints obtained assuming general relativity and dark energy. As we can see, the Ω_m parameter is less constrained than in the ΛCDM scenario. This is certainly due to the larger number of parameters present in the HS model. In the future, with increasing experimental accuracy, if a discrepancy between independent constraints on the matter density is found, then a modified gravity scenario could be suggested as a possible explanation. In any case, this result shows that one should be extremely careful in interpreting current cosmological constraints because of their model dependence.

IV.
CONCLUSIONS In this paper we have compared a modified gravity scenario, the HS model, with several current cosmological datasets. We have found that the model is in excellent agreement with recent SN-Ia, BAO and H(z) data. Moreover, the parameters of the model are substantially unconstrained by the data considered. This has important effect on the current constraints on some parameters as the matter density. We have shown that the assumption of the HS model enlarges the current constraints on this parameter by ∼ 30%. If a discrepancy between two experimental determinations of the matter density will be found in the framework of general relativity, then a possible solution could be the introduction of a modified gravity scenario. It will be duty of future experiments to scrutinize this interesting possibility.
2009-06-12T14:43:40.000Z
2009-06-12T00:00:00.000
{ "year": 2009, "sha1": "2b6225a7fdc5a7d04437b20d19ef79af204f7e13", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "2b6225a7fdc5a7d04437b20d19ef79af204f7e13", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
1264568
pes2o/s2orc
v3-fos-license
NMR Metabolomics in Ionizing Radiation.

Ionizing radiation is an invisible threat that cannot be seen, touched or smelled, and exists either as particles or as waves. Particle radiation can take the form of alpha, beta or neutron radiation, as well as high energy space particle radiation such as high energy iron, carbon and proton radiation, etc. [1]. Non-particle radiation includes gamma- and X-rays. Publicly, there is growing concern about the adverse health effects of ionizing radiation, mainly because of the following facts. (a) X-ray diagnostic images are taken routinely on patients. Even though the overall dose from a single X-ray image, such as a chest X-ray or a CT scan (X-ray computed tomography, X-ray CT), is low, repeated use can cause serious health consequences, in particular the possibility of developing cancer [2,3]. (b) Human space exploration has gone beyond the Moon, and there are plans to send humans to the orbit of Mars by the mid-2030s, with a landing on Mars to follow ("Obama Promises Renewed Space Program". The New York Times. Retrieved April 15, 2010). Completely shielding the high energy space radiation in outer space is a big challenge [4,5]. (c) The impacts of past nuclear disasters such as the Chernobyl disaster (1986/4/26) and the Fukushima Daiichi nuclear disaster (2011/3/12) are long lasting, including leaving behind radiation-contaminated sites that are very difficult to clean [6,7]. And (d) radiological hazards are likely to be employed by terrorists via nuclear detonation, radiological dispersion devices, and covert placement/distribution of radioactive substances [8]. The worst case scenario for a radiation incident would involve a nuclear detonation, either from an improvised nuclear device or from an actual warhead. All cells can be damaged by ionizing radiation, but actively dividing cells are far more radiosensitive than cells that are neither meiotically nor mitotically active. The most radiosensitive cells in the human body include the bone marrow stem cells, gastrointestinal villi cells, and the gametes in the ovaries and testes. Acute Radiation Syndrome (ARS) is an illness caused by partial or whole-body exposure to high doses of ionizing radiation over a short period of time (usually a few minutes or less). According to American military radiologists, the dependence of the pathophysiological effects on the irradiation dose is summarized in Table 1 [9]. Although the manifestations of radiation injury vary depending on the total absorbed radiation dose and the preexisting health of the victim, it is clear from Table 1 that in most radiation scenarios, injury to the hematopoietic system and GI tract are the main determinants of survival. If left untreated, a victim exposed to a total dose of 3.5 Gy (the LD50 is about 4.0 Gy) or above is unlikely to survive. The classical model of molecular injury involves immediate cellular damage following irradiation, which can result in membrane and intracellular injury, i.e., inflammation and DNA single- and double-strand breaks that subsequently turn on various genes and lead to cell proliferation, fibrosis, cancer or cell death [10][11][12].
Significant investigations at molecular level have been done at the genetic and protein levels by studying changes associated with DNA, RNA and proteins extracted from cells and animal tissues using genomic [13,14] and proteomic [15,16] methods. Although expensive and labor intensive, genomic and proteomic methods, may have potential as powerful tools for studying different levels of the biological response to radiation-induced injury, including searching for radiation specific molecular biomarkers. However, careful studies have generally shown a low correlation between the pattern of gene expression and the pattern of protein expression [17,18]. Moreover, even in combination, genomic and proteomic methods still do not provide the range of information needed for understanding integrated cellular function in a living system, since both ignore the dynamic metabolic status of the whole organism. It is well-known that alterations in DNA, RNA and protein are associated with changes in metabolic profiles. Metabolites are chemical compounds that participate as reactants, intermediates, or byproducts in a cellular metabolic pathway, and include carbon compounds with a molecular weight typically in the range of 100-1000 Da. Radiation exposure will disturb the ratios and concentrations of endogenous metabolites, either by direct chemical reaction or by binding to key enzymes or nucleic acids that control metabolism. If these disturbances are of sufficient magnitude, toxic effects will result. Therefore, metabolomics, defined as a comprehensive and quantitative analysis of all metabolites in a biological system [19][20][21], will be an important new systems biology tool for elucidating the molecular mechanisms of radiation. Metabolomics is a new technique and has only been recently applied in the field of radiation, emerging as a field of great significance for both translational and basic research [22][23][24][25]. Unlike approaches in which biomolecules/metabolites are selected and analyzed one or a few at a time, metabolomics focuses on broad identification and analysis of multiple metabolites simultaneously. The state of metabolome cumulatively reflects the stages of gene expression, protein expression, and the cellular environment as well as multidirectional interactions among these elements. Metabolomic information is complementary, yet distinct, from that generated by genomic and proteomic approaches. Moreover, metabolic changes are among the earliest cellular responses to environmental or physiological changes. It is well-known that there are estimated 30,000-40,000 genes (genome) associated with DNA, more than 100,000 transcripts (transcriptome) associated with RNA, and more than 1,000,000 proteins (proteome) yet there are only approximately 5000 metabolites (metabolome) in human cells [26,27]. It is clear that complexity is greatly simplified with metabolomics which, although in its infancy, has already proven capable of detecting and diagnosing a disease and evaluating the efficacy of therapy in an early stage [22,23,25,28]. Therefore, it is highly likely that metabolomics will provide valuable new information about the impact of radiation on human health. Nuclear Magnetic Resonance (NMR) spectroscopy is a quantitative, non-destructive method that requires no or minimal sample preparation, and is one of the leading analytical tools for metabonomic research [19,[29][30][31][32][33]. 
Unlike mass spectrometry based methods, where the peak intensity depends on the ionization efficiency of the molecules, which differs between different types of molecules, and where ion suppression issues arise when multiple species coelute, the peak intensity in an NMR spectrum is directly proportional to the number or concentration of molecules. The easy quantification associated with NMR is a big advantage over other techniques. 1H NMR is especially attractive because protons are present in virtually all metabolites and its NMR sensitivity is high, enabling the simultaneous identification and monitoring of a wide range of low molecular weight metabolites, thus providing a biochemical fingerprint of an organism "without prejudice". It is expected that NMR metabolomics will play an important role in understanding the damage at the molecular level caused by ionizing radiation, as we have demonstrated recently [34,35]. Figure 1 shows an example [35] of applying 1H NMR metabolomics to study the changes in the metabolic profile in the spleen of the C57BL/6 mouse after 4 days whole body exposure to 3.0 Gy and 7.8 Gy gamma radiation. As an integrated part of NMR metabolomics, principal component analysis (PCA) [36], an unsupervised statistical method, and orthogonal projection to latent structures analysis (OPLS) [37], a supervised statistical method, are employed for classification and for identification of potential biomarkers associated with gamma irradiation. The results from the PCA and OPLS analysis have shown [35] that the exposed groups can be well separated from the control group. Leucine, 2-aminobutyrate, valine, lactate, arginine, glutathione, 2-oxoglutarate, creatine, tyrosine, phenylalanine, π-methylhistidine, taurine, myoinositol, glycerol and uracil are significantly elevated, while ADP is decreased significantly. These significantly changed metabolites are associated with multiple metabolic pathways and may be considered as potential biomarkers in the spleen exposed to gamma irradiation.

Figure 1. Example of applying 1H NMR metabolomics to study the changes in metabolic profile in the spleen of the C57BL/6 mouse after 4 days whole body exposure to 3.0 Gy and 7.8 Gy gamma radiations.

Table 1. The phases of acute radiation syndrome and prognosis varying by dose.
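The unsupervised PCA step described above reduces each binned 1H NMR spectrum to a few scores that can be inspected for group separation. A minimal sketch with scikit-learn, using synthetic spectra in place of the real control and irradiated data:

```python
# Sketch: the unsupervised PCA step described above, applied to binned 1H NMR spectra
# (rows = animals, columns = spectral bins). The data here are synthetic; in practice the
# matrix would come from bucketed, normalised spectra of control and irradiated groups.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_bins = 200

control = rng.normal(0.0, 1.0, size=(8, n_bins))
exposed = rng.normal(0.0, 1.0, size=(8, n_bins))
exposed[:, :20] += 2.5          # hypothetical bins (e.g. taurine or lactate regions) elevated by irradiation

X = np.vstack([control, exposed])
labels = ["control"] * 8 + ["exposed"] * 8

# Unit-variance scaling is one common choice; Pareto scaling is also widely used for NMR data.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))

for label, (pc1, pc2) in zip(labels, scores):
    print(f"{label:8s}  PC1 = {pc1:6.2f}  PC2 = {pc2:6.2f}")
# Group separation along PC1/PC2 is then followed by supervised OPLS(-DA) to rank the
# discriminating bins and identify candidate metabolite biomarkers.
```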
2018-04-03T03:34:46.803Z
2016-09-08T00:00:00.000
{ "year": 2016, "sha1": "251a7fd33e573f80e999850fef12197db7e3c718", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "e34b6d0a60ea475585a0a7270fe23c76fa676e42", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
244341243
pes2o/s2orc
v3-fos-license
The Relationship Between Senior High School Students’ Motivation in Learning English and Their Writing Ability : Numerous researches conducted research on learners’ motivation in learning English. In this research, researcher only studied intrinsic motivation and extrinsic motivation. This study aimed to describe students’ motivation in learning English at two state senior high schools; SMAN 1 Gambut and SMAN 1 Martapura in South Kalimantan. The research design used was correlational research design. There were 129 eleventh graders as research participants in this study. The researcher used a questionnaire and a writing test as instruments to collect data. The finding of the study showed a strong, significant, and positive correlation between motivation in learning English and writing ability. The following figure presents motivational classification which is taken from Yuan-bing (2011). Yuan-bing summarizes Brown's dichotomies by providing examples. Table 1.Motivational Dichotomies According to Ellis (2008), there are various kinds of motivations which have been identified; instrumental, integrative, resultative, and intrinsic. Instrumental motivation is the motivation that encourages learners doing some efforts to learn an Second Language (L2) for some functional reason such as to pass examination, to get a better job, or to get a place at university. Next, integrative motivation can be defined motivation which involves integrating oneself with target culture. Resultative motivation means that the success the students obtain in learning affects their motivation; either it will increase or decrease the motivation. The example of resultative motivation is if someone wants to get a job, so the target is getting job. Intrinsic motivation relates to students' personal feeling and interest when it comes to learning activities. Motivation as told before is important to learn English. English is an international language that has four skills to be mastered. The four skills which are required to be mastered by L2 learner are listening, speaking, reading, and writing. Listening and reading are famous to be called as receptive skills, meanwhile speaking and writing are regarded as productive or active skills. The most difficult skill that should be mastered by L2 learners is writing skill (Richard & Renandya, 2002). Some researches has showed that students face problems in engaging a writing activity e.g. producing a writing text (Ardila, 2016;Fareed, et al, 2016). There are several text types in writing such as narrative, argumentative, recount, and report. Narrative text is a text type used to tell a story to amuse readers. Next, descriptive text is usually used to described people or things. Argumentative text is a text type used to give argument of the topics meanwhile recount text is a text type which is used to tell experience in the past time. Last is report text which is used to report some specific thing such as news item. One of previous studies about motivation has been done by Saheb (2014) at Stockholm's upper secondary schools for adults (KOMVUX). The study showed that there was no significant correlation between adult students' level of English and their degree of socially oriented motivation. Another study was also conducted by Al-Tamimi and Shuib (2009). 
The finding shows the subjects' greater support for instrumental reasons for learning the English language, and it also shows that most of the students have a positive attitude and orientation towards learning English. In addition, Yuan-bing (2011) states that motivation, especially intrinsic motivation, will encourage better performance in the language learning process. Another previous study came from Nanyang (2018). The research found a significant correlation between the initial motivation of writing tutors and the students' attitude in class activities (r = 0.57, p < 0.05). This research revealed that the more the students were given motivation by the writing tutors, such as compliments, feedback, asking questions, and interesting group work, the more active the students were in class activities. Another previous study is a master's thesis about motivation and literacy skills across gender conducted with university students (Agustrianti, 2016). The research revealed a significant correlation between motivation and writing skill. It also revealed no significant correlation between literacy skills and gender. Similar findings (Nasihah and Cahyono, 2017; Lo and Hyland, 2007) stated that writing achievement increases significantly when motivation increases. This present research was conducted to examine the relationship between motivation and the writing ability of students, especially in writing a recount paragraph, in two public senior high schools, namely SMAN 1 Martapura and SMAN 1 Gambut, Banjar Regency, South Kalimantan. This study is different from the previous studies since it was done in senior high schools, not in a university. Both schools were chosen because they held the first and second ranks among schools in the regency. The recount paragraph was chosen because the students had learned it before, according to the syllabus.

The motivational dichotomies summarized in Table 1 (from Yuan-bing, 2011) can be laid out as follows:
- Integrative, intrinsic: the L2 learner wishes to integrate with the culture (e.g., for immigration or marriage).
- Integrative, extrinsic: someone else wishes the L2 learner to know the L2 for integrative reasons (e.g., Japanese parents send their kids to a Japanese language school).
- Instrumental, intrinsic: the L2 learner wishes to achieve goals utilizing the L2 (e.g., for a career).
- Instrumental, extrinsic: an external power wants the L2 learner to learn the L2 (e.g., a corporation sends a Japanese businessman to the US for language training).

METHOD

This study aimed to find out the relationship between eleventh grade students' motivation in learning English and the students' writing ability, especially in writing a recount paragraph. The design used in this study was a correlational research design. It was used to measure the degree of relationship between two or more continuous variables (Latief, 2010), such as the correlation between students' vocabulary and their writing skills. The researcher measured two variables and assessed the statistical relationship between them with no influence from any extraneous variable. Correlational research is different from an experimental study. In correlational research, researchers do not (or at least try not to) influence any variables; they only measure the variables and look for relations (correlations) between some set of variables. In addition, the variables are not manipulated or controlled by the researcher in correlational research (Latief, 2012). The two or more variables can be said to show a positive correlation if they vary directly or a negative correlation if they vary inversely (Ary, et al., 2010).
There were two variables in this research: motivation as the X variable and writing ability as the Y variable. The relationship between the two variables can be seen in Figure 1 (Figure 1. The Relationship of Variables X and Y). Based on Figure 1, the term R denotes the relationship between the X variable (motivation in learning English) and the Y variable (writing ability). Motivation in learning English is called the X variable or predictor variable, while writing ability is called the Y variable or criterion variable. The term R was analyzed using the Pearson product-moment correlation formula in the SPSS 22 program. There is no independent or dependent variable in this study because of its correlational design. In order to conduct this research, the researcher needed to find an appropriate population and sample. A population is a larger group to which results can be generalized, while a sample is a smaller group to be studied, taken from the population (Ary et al., 2010). The population of this research was 578 second-year (eleventh grade) students from two state senior high schools, namely SMAN 1 Martapura and SMAN 1 Gambut, Banjar Regency, South Kalimantan. There were 129 eleventh grade students chosen as the participants of this study. They were taken from two classes of eleventh grade students in the first semester. The composition was as follows: the first sample, two classes of SMAN 1 Gambut (64 students), came from XI MIPA 2 and XI IPS 1, and the second sample, two classes of SMAN 1 Martapura (65 students), came from XI MIPA 4 and XI IPS 3.

Instruments
There were two instruments in this research, namely a motivation questionnaire and a writing test. The motivation questionnaire was adapted from Gardner's AMTB Questionnaire (2004), and the writing test used an analytic scoring rubric adapted from Jacob et al. (1981).

Questionnaire
The first instrument used in data gathering for this study was a questionnaire. The questionnaire used closed-ended questions since the research had a large number of participants; closed-ended questions are faster and easier to analyze than open-ended questions (Cohen et al., 2007). A 4-point Likert scale ranging from "not at all like me (1)" and "not me (2)" to "me (3)" and "just like me (4)" was used in the questionnaire. The questionnaire was translated into Indonesian to make it easier for the students to understand the questions. For collecting data, the researcher asked the participants to fill out the questionnaire. The questionnaire was adapted from Gardner (2004), namely the AMTB (Attitude Motivation Test Battery) questionnaire in the English version. This test was originally used by Gardner (1959) to measure attitude and motivation in studying the French language. There were 50 questionnaire items for students, consisting of 25 positive statements and 25 negative statements. The questionnaire was then translated into Indonesian to help students understand it well. Finally, the obtained data were calculated using the SPSS 22 program. For the data analysis, the researcher set the students' questionnaire scores as follows: the minimum score was 50 and the maximum score was 200. If a student obtained the minimum score (50), it meant the student tended to feel unmotivated in learning English, whereas a maximum score of 200 meant the student had high motivation in learning English.

Writing Test
The other instrument used in this study was a writing test. The analytic scoring rubric was adapted from Jacob et al. (1981).
It was chosen in order to obtain more detailed scoring of the writing test than a holistic scoring rubric would provide. The scoring rubric was divided into five sections, namely content, organization, grammar, vocabulary, and mechanics. The rating for each section ranged from 1 to 4. To get reliable results, the researcher asked two different raters to grade the students' writing tests. The raters were selected by considering their ability and knowledge in teaching English, especially writing, and they had previously attended an Assessment course. To ensure that both raters applied the same standard in their scoring, the raters first had to undergo training. The researcher gave the raters two model writing tests representing two criteria, low-ability and high-ability writing results, and asked them to score these based on the standard recount scoring rubric. The raters were trained by the researcher to avoid inconsistent scores due to rater subjectivity (Latief, 2012). The raters were then chosen based on their consistency in scoring during training. This is referred to as inter-rater reliability.

Data Collection
The data were collected by the researcher through several steps. First, the researcher collected data by distributing the Motivation Questionnaire to measure students' motivation, which took 40 minutes. This is called a self-administered questionnaire in the presence of the researcher (Cohen et al., 2007). Students were asked to tick the statements in the questionnaire according to their motivation in learning English. Then, the researcher asked the students to write a 100-word recount paragraph in no more than 40 minutes about a memorable past experience as the topic of the writing test. The data collection was conducted on two separate days at the two state senior high schools in Banjar Regency. The first data collection was conducted on October 10th, 2018 at SMAN 1 Gambut, in two classes of eleventh grade students; 64 students from XI MIPA 2 and XI IPS 1 participated. The second data collection was held on October 16th, 2018, in two classes of eleventh grade students at SMAN 1 Martapura; the total number of participants was 65 students from XI MIPA 4 and XI IPS 3.

Data Analysis
The primary data in this research were the Motivation Questionnaire results and the writing test results in the form of recount paragraph essays. The questionnaire was used to measure learners' motivation in learning English. Once the learners completed the questionnaires, the researcher scored them. Next, the raters were asked to rate the students' writing tests from 1 to 4. After the raters assigned their ratings, the researcher applied the weighting given in the scoring rubric to obtain the students' final writing scores. After that, the raw scores of the motivation questionnaire and the writing test were computed in the SPSS 22 program. The program was used to help the researcher produce the descriptive statistics of the data and answer the research question.

RESULT
Before answering the problem of the study, the researcher had to compute descriptive statistics of the data. This step is essential in order to know the minimum and maximum scores obtained by the students.
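To illustrate the kind of analysis described in the Data Analysis section, the following Python sketch shows how descriptive statistics and a Pearson product-moment correlation between two score columns could be computed outside SPSS. The file name, column names, and decision rule shown here are hypothetical placeholders, not data or output from this study.

```python
# Illustrative sketch only: hypothetical column names and file; the study used SPSS 22.
import pandas as pd
from scipy.stats import pearsonr

# Each row is one student: total questionnaire score (50-200) and final writing score (0-100).
df = pd.read_csv("scores.csv")  # hypothetical file with columns "motivation" and "writing"

# Descriptive statistics (minimum, maximum, mean, standard deviation).
print(df[["motivation", "writing"]].describe())

# Pearson product-moment correlation between motivation and writing ability.
r, p_value = pearsonr(df["motivation"], df["writing"])
print(f"r = {r:.3f}, p = {p_value:.3f}")

# Interpretation mirroring the study's decision rule: reject H0 if p < 0.05.
if p_value < 0.05:
    print("Significant correlation: reject the null hypothesis.")
else:
    print("No significant correlation: fail to reject the null hypothesis.")
```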
It can be seen that the motivation scores have a minimum of 82 and a maximum of 200 (on a scale from 50 to 200) with a mean of 75.08 and a standard deviation (SD) of 33.107, while the writing scores have a minimum of 50 and a maximum of 100 (on a scale from 0 to 100) with a mean of 75.08 and a standard deviation (SD) of 15.306 (Table 2. Descriptive Data of Minimum Scores and Maximum Scores). After calculating the descriptive statistics of the data, the researcher was required to run a normality test. This test is needed to check the normality of the distribution of the data (Ghozalli, 2007). To check normality, the researcher used the normal P-Plot of Regression Standardized Residual. The plot showed that the points spread normally, which indicated that the data were normally distributed. Since the data were normal, the correlation analysis could be continued. This study was primarily intended to answer the research question: does a higher level of motivation correspond to a higher level of students' writing ability? The data obtained from this research were tabulated and presented quantitatively to see the relationship between students' motivation in learning English and their writing ability. The data displayed in this part are the findings from the SPSS 22 program output. In order to analyze the data, the researcher continued by running the Pearson correlation formula. The output showed that the correlation (R) is strong and significant. The correlation coefficient (R) was .744, which was higher than the R table value. The significance value was .036, which was less than .05. This means that the null hypothesis (H0) was rejected, while the alternative hypothesis (Ha) was accepted.

DISCUSSION
Based on the positive and significant correlation between motivation and writing ability reported in this research, it can be assumed that students with a higher level of motivation tend to show better writing ability. The finding is supported by Agustrianti's research (2016), which was conducted with EFL students at Tadulako University in Palu and examined the correlation between motivation and literacy skills (reading skill and writing skill); it revealed that the correlation was significantly positive. Similar findings were also delivered by Nasihah and Cahyono (2017) and Lo and Hyland (2007). Both studies showed that there is a significant increase in writing achievement when students' motivation increases, and that writing achievement can be predicted by students' motivation in learning. Another finding in line with the present research is the study conducted by Yuan-bing (2011), which stated that intrinsic motivation supports better performance in the language learning process. Another previous study came from Nanyang (2018), which revealed a significant correlation between initial motivation and the students' performance in writing class. The result of the present research also confirms Self-Determination Theory by Ryan and Deci (1985). The motivation that comes from inside and outside of the learner tended to be the most dominant predictor in English learning activity. This statement is supported by other experts, who state that motivation is the most influential component in the learning process (Brown, 2007; Ellis, 1997; William and Burden, 1997).
Moreover, Gardner's theory about instrumental and integrative motivation in foreign language learning is also strengthened by the present research. In addition, there is another result from a study conducted on motivation in writing activity (William & Alden, 1983). Their finding revealed that students tended to view writing as an unimportant thing to do, that they would not join a writing class if it were not required, and that they did not enjoy writing. In a nutshell, such students are only motivated to write for their grade in class, not for self-determination, discovery, or pleasure.

CONCLUSIONS
Based on the findings of the research, there is a significant, strong, and positive correlation between motivation in learning English and writing ability, shown by an R value of .744 with a significance value of .036. The finding is classified as a strong positive correlation. The research also strengthens motivation as a predictor in English learning, especially for writing ability; motivation is the most dominant or important factor that can predict or contribute to writing ability. It is highly suggested that teachers frequently deliver more interesting English teaching and learning for the students. This strategy is expected to motivate students better in learning English and improve their English achievement in class, especially their writing ability. Future researchers are also suggested to include more variables besides motivation when studying students' writing ability.
2021-11-19T00:32:41.137Z
2021-05-01T00:00:00.000
{ "year": 2021, "sha1": "9cf43dce107a0960d5a08a21ec045a0dfd655be4", "oa_license": "CCBYSA", "oa_url": "http://journal.um.ac.id/index.php/jptpp/article/download/14781/6411", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f61a33ab617ce226fbf892b34b8f591ed3c212f7", "s2fieldsofstudy": [], "extfieldsofstudy": [] }
2457859
pes2o/s2orc
v3-fos-license
Comparison of Nevirapine Plasma Concentrations between Lead-In and Steady-State Periods in Chinese HIV-Infected Patients Objectives To investigate the potential of a nevirapine 200 mg once-daily regimen and evaluate the influence of patient characteristics on nevirapine concentrations. Methods This was a prospective, multicentre cohort study with 532 HIV-infected patients receiving nevirapine as a part of their initial antiretroviral therapy. Plasma samples were collected at trough or peak time at the end of week 2 (lead-in period) and week 4, 12, 24, 36, and 48 (steady-state period), and nevirapine concentrations were determined using a validated HPLC method. Potential influencing factors associated with nevirapine concentrations were evaluated using univariate and multivariate logistic regression. Results A total of 2348 nevirapine plasma concentrations were collected, including 1510 trough and 838 peak values. The median nevirapine trough and peak concentrations during the lead-in period were 4.26 µg/mL (IQR 3.05–5.61) and 5.07 µg/mL (IQR 3.92–6.44) respectively, which both exceeded the recommended thresholds of nevirapine plasma concentrations. Baseline hepatic function had a moderate effect on median nevirapine trough concentrations at week 2 (4.25 µg/mL vs. 4.86 µg/mL, for ALT <1.5×ULN and ≥1.5×ULN, respectively, P = 0.045). No significant difference was observed in median nevirapine trough concentration between lead-in and steady-state periods in patients with baseline ALT and AST levels ≥1.5×ULN (P = 0.171, P = 0.769), which was different from the patients with ALT/AST levels <1.5×ULN. The median trough concentrations were significantly higher in HIV/HCV co-infected patients than in those without HCV at week 48 (8.16 µg/mL vs. 6.15 µg/mL, P = 0.004). Conclusions The 200 mg once-daily regimen of nevirapine might be comparable to twice-daily dosing in plasma pharmacokinetics in the Chinese population. Hepatic function prior to nevirapine treatment and HIV/HCV coinfection were significantly associated with nevirapine concentrations. Registration: ClinicalTrials.gov ID NCT00872417.

Introduction
Nevirapine is a human immunodeficiency virus type 1 (HIV-1) specific non-nucleoside reverse transcriptase inhibitor that binds directly to the viral reverse transcriptase of HIV-1 to block polymerase activity by causing disruption of the enzyme's catalytic site [1]. Combination antiretroviral therapy (cART) with nevirapine has been proven safe and effective in HIV-infected individuals. As a result, nevirapine is frequently used as a part of first-line regimens for the management of treatment-naive patients in resource-limited countries. It is typically dosed at 200 mg once daily during the first 2 weeks (lead-in period) and 200 mg twice daily thereafter (steady-state period), due to metabolic autoinduction of cytochrome P450 isoenzymes [1]. The substantial benefits conferred by cART, however, require strict patient adherence to the prescribed medication because poor adherence will lead to virologic failure. Adherence might be improved with a more convenient dosing regimen. A possible measure for simplifying antiretroviral therapy is the use of a once-daily dosing regimen. The long plasma elimination half-life of nevirapine (approximately 25 to 30 h) after multiple dosing may justify once-daily dosing throughout the therapy with equivalent efficacy to that of the twice-daily regimen [2][3][4].
However, several clinical trials [3,5] investigating 400 mg once-daily dosing resulted in significantly lower trough (C trough) and higher peak concentrations (C max). Because high nevirapine concentrations are associated with increased adverse events [6][7][8][9][10][11][12] and low plasma concentrations may lead to virologic failure and drug resistance [13,14], adoption of a once-daily dosing regimen in clinical practice remains controversial. Our group has found that the pharmacokinetic profiles of nevirapine in Chinese patients are different from those in Caucasians and blacks [15]. It was demonstrated that patients treated with the standard nevirapine dosing regimen, i.e. 200 mg twice daily, had excessively high plasma concentrations, which may contribute to the higher prevalence of hepatotoxicity in these patients. Another prospective study [16] further confirmed a significant positive correlation between nevirapine C trough and liver toxicity among Chinese HIV-infected patients, especially in males. To date, the Food and Drug Administration has approved one-pill, once-daily nevirapine extended-release tablets for use in combination with other antiretroviral agents for treatment of HIV-1 infection in adults. In the context of attempts to simplify the treatment regimen while securing efficacy, reducing toxicity and enhancing adherence, there is great interest in nevirapine 200 mg once-daily dosing for the treatment of Chinese HIV-infected patients in the long run. The present study aims to investigate the potential of the nevirapine 200 mg once-daily regimen in Chinese HIV-infected patients by comparing nevirapine plasma concentrations during the lead-in period with the steady-state concentrations. The influence of patient characteristics on nevirapine concentrations was also evaluated.

Baseline characteristics
Five hundred and thirty-two (n = 532) eligible HIV-infected patients were included and completed this study (Figure 1). Demographic characteristics of these patients are summarized in Table 1. There were 265 males and 95 females in the d4T group and 132 males and 40 females in the AZT group. Forty-three patients with ALT level ≥1.5×ULN and 23 with AST level ≥1.5×ULN at baseline were enrolled. Seventy-four (14%) patients were HBV co-infected and 64 (12%) were HCV co-infected.

Influence of patient characteristics on nevirapine concentrations
In the univariate stratification analysis, the median C trough of nevirapine at week 2 in patients with baseline ALT level <1.5×ULN was significantly lower than in those with ALT level ≥1.5×ULN at baseline (4.25 vs. 4.86 µg/mL, P = 0.045) (Figure 4A). There was no significant association between C trough at week 2 and gender, weight, HBV or HCV coinfection, baseline AST level, CD4 cell counts and viral load. No significant difference was observed in median C trough of nevirapine between lead-in and steady-state periods in patients with baseline ALT level ≥1.5×ULN (4.86 vs. 6.12 µg/mL, P = 0.171) and baseline AST level ≥1.5×ULN (4.86 vs. 5.94 µg/mL, P = 0.769), whereas the median C trough during the lead-in period was significantly lower than the steady-state levels in patients with ALT/AST <1.5×ULN prior to initial treatment (Figure 4). Among the HIV and HCV co-infected patients, the median C trough of nevirapine at week 4 was significantly lower than the values at week 24, 36 and 48 (5.60 vs. 7.28, 7.75, 8.16 µg/mL, respectively, P < 0.05).
The median C trough of nevirapine at week 48 in patients with HCV coinfection was significantly higher than in those without HCV coinfection (8.16 vs. 6.15 µg/mL, P = 0.004). This difference was not observed in the patients with HBV coinfection (Figure 5). In the univariate and multivariate logistic regression models, gender, age, weight, HBV or HCV coinfection, baseline dosing regimen, ALT or AST level, CD4 cell counts and viral load appeared to have no significant association with nevirapine trough concentrations <3.0 µg/mL and <3.9 µg/mL at week 2, respectively (Table 3).

Discussion
In the present study, we compared the nevirapine plasma concentrations between lead-in and steady-state periods. The median C trough and C max of nevirapine during the lead-in period were 4.26 µg/mL and 5.07 µg/mL respectively, which both exceeded the current recommended thresholds of nevirapine plasma concentrations, i.e. 3.0 µg/mL [17] and 3.9 µg/mL [16]. From this perspective, the 200 mg once-daily dosing regimen of nevirapine is worthy of further evaluation for its role in the Chinese population. In addition, hepatic function prior to nevirapine treatment and HIV/HCV coinfection were significantly associated with nevirapine plasma concentrations. The efficacy and safety of nevirapine 400 mg once daily in the treatment of HIV-infected patients has been assessed in several studies [4,18,19]. No significant difference was shown between 400 mg once-daily and 200 mg twice-daily dosing. However, van Heeswijk et al. [5] reported that C min and C max for the nevirapine 400 mg once-daily regimen were significantly lower (2.88 versus 3.73 µg/mL) and higher (6.69 versus 5.74 µg/mL), respectively, compared with 200 mg twice daily. The 2NN sub-study confirmed these findings and showed that C min (3.26 versus 4.44 µg/mL) was lower and C max (7.88 versus 6.55 µg/mL) was higher with the 400 mg once-daily dosing [3]. The 2NN study also demonstrated that a high C max might result in a higher incidence of toxicity in patients with nevirapine once-daily dosing than in those assigned twice-daily administration [20]. The increased drug-related adverse events, especially liver toxicity, remain significant obstacles to routine use of the nevirapine 400 mg once-daily dosing strategy. Liver toxicity is one of the most common hypersensitivity reactions to nevirapine and may be associated with its plasma concentrations. Gonzalez de Requena et al. [8] investigated the effect of nevirapine plasma exposure on liver enzyme elevations and observed that, among patients with chronic HCV coinfection, nevirapine concentrations >6 µg/mL were associated with a 92% risk of liver toxicity. Our previous studies [16,21,22] reported that a high frequency of liver toxicity was observed in Chinese HIV-infected patients when administered the nevirapine twice-daily standard dosing, including approximately 23% of patients with severe liver toxicity within 12 weeks of initial therapy. Most importantly, Wang J et al. [16] found a significant positive association between nevirapine C trough and liver toxicity among Chinese HIV-infected patients, especially in males (P = 0.015). The steady-state pharmacokinetic study in 15 Chinese patients with HIV infection [15] confirmed that the standard therapeutic regimen of nevirapine 200 mg twice daily led to a longer half-life, higher concentrations and lower clearance of nevirapine than the values in Caucasians [5].
A non-compartmental model was used to describe the pharmacokinetic parameters of nevirapine as medians, including t1/2 (30.94 h), AUC0–12 h (92.82 µg·h/mL), Cl/F (0.71 L/h), C max (10.09 µg/mL) and C trough (7.88 µg/mL), respectively [15]. The mean C trough in Chinese males and females (8.95 µg/mL and 6.59 µg/mL) was significantly higher than the levels in Caucasians and blacks [23] (3.34 µg/mL and 3.46 µg/mL). These findings indicated that nevirapine 200 mg twice-daily dosing produced an excessive drug load in plasma and might contribute to the higher prevalence of hepatotoxicity in Chinese patients. Considering the unfavorable safety associated with 400 mg once-daily dosing in the literature, a lower dosage, the nevirapine 200 mg once-daily dosing regimen, would be worthy of evaluation for clinical application. The present study showed that the median C trough of nevirapine 200 mg once daily at the end of week 2 was significantly lower than that of the twice-daily dosing in later weeks (4.26 versus 6.15 µg/mL, P < 0.001). Similarly, the median C max of nevirapine during the lead-in period was also lower than the steady-state levels (5.07 versus 6.51 µg/mL, P < 0.001). The decreased nevirapine concentrations with 200 mg once-daily dosing may suggest a lower incidence of liver toxicity than with twice-daily dosing. To test this assumption, a small trial in seven treatment-naïve Chinese HIV-infected patients was carried out by our team. The seven HIV-positive adult patients were administered nevirapine 200 mg once daily as a part of initiating antiretroviral therapy. To date, they have maintained a high level of efficacy and comparable tolerability during a 2-month follow-up compared with the twice-daily dosing regimen. On the other hand, there are some concerns about virologic failure related to low nevirapine plasma levels in clinical practice. The relationship between nevirapine concentration and virologic response has been explored in previous studies. The INCAS trial [24] suggested that a nevirapine plasma concentration range of 3.45–3.88 µg/mL at week 12 was predictive of virologic success after 52 weeks of therapy. Vries-Sluijs et al. [13] demonstrated that a nevirapine plasma concentration ≤3.0 µg/mL was directly associated with risk of treatment failure. The guidelines from the Department of Health and Human Services in the United States proposed a minimum target nevirapine C trough of 3.0 µg/mL [17]. Our previous study [16] suggested a target cut-off value of nevirapine C trough at 3.9 µg/mL for Chinese patients with HIV infection, higher than the commonly recommended 3.0 µg/mL. The current study demonstrated that the median C trough (4.26 µg/mL, IQR 3.05–5.61) and C max (5.07 µg/mL, IQR 3.92–6.44) of nevirapine in Chinese HIV-infected patients receiving 200 mg once daily were above the recommended thresholds of nevirapine concentrations, suggesting that the nevirapine 200 mg once-daily regimen may produce adequate viral inhibition in Chinese HIV-infected patients. The ongoing pilot trial in Chinese patients confirmed this assumption. Certainly, we cannot rule out the possibility that some patients will have nevirapine levels lower than the target thresholds due to inter-individual variability and unpredictable features, which may result in virologic treatment failure and even drug resistance. So routine therapeutic drug monitoring should be carried out.
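As a simple illustration of the threshold logic behind the therapeutic drug monitoring just mentioned, the short Python sketch below flags hypothetical trough values against the two cut-offs discussed above (3.0 µg/mL from the DHHS guidelines and 3.9 µg/mL from the Chinese cohort study). The trough values shown are invented placeholders, not patient data from this study.

```python
# Illustrative only: flag hypothetical trough values against the thresholds discussed above.
thresholds = {"DHHS": 3.0, "Chinese cohort (Wang et al.)": 3.9}  # ug/mL
troughs = [2.4, 4.1, 5.6, 3.2]  # hypothetical patient trough concentrations, ug/mL

for name, cutoff in thresholds.items():
    below = [c for c in troughs if c < cutoff]
    print(f"{name} cutoff {cutoff} ug/mL: {len(below)}/{len(troughs)} samples below target")
```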
Previous studies have found ethnicity, gender, weight and underlying hepatic disease to be predictive of nevirapine plasma concentrations [25][26][27][28][29][30][31][32]. Our data confirmed that hepatic function prior to antiretroviral treatment was significantly associated with the nevirapine lead-in trough concentrations (4.86 µg/mL vs. 4.25 µg/mL, for ALT level ≥1.5×ULN vs. <1.5×ULN, P = 0.045) and might exert an influence on the metabolism and clearance of nevirapine at steady state, because no significant difference was observed in median C trough of nevirapine between lead-in and steady-state periods in patients with baseline ALT and AST levels ≥1.5×ULN (P = 0.171, P = 0.769), which was different from the patients with ALT/AST levels <1.5×ULN. Gender, weight, HBV or HCV coinfection, baseline CD4 cell counts and viral load appeared to have no significant influence on nevirapine plasma concentrations. Consistently, the univariate and multivariate logistic regression models showed that none of the examined factors was found to predict a nevirapine trough concentration at week 2 lower than 3.0 µg/mL or 3.9 µg/mL. The nevirapine trough concentration increased gradually in HIV/HCV co-infected patients during the follow-up periods and finally was significantly higher than in patients without HCV coinfection at week 48 (8.16 vs. 6.15 µg/mL, P = 0.004). We assumed that the high incidence of liver toxicity in Chinese patients, and particularly the great risk of severe hepatotoxicity in HIV/HCV co-infected patients, were both significantly associated with plasma nevirapine exposure. This finding was consistent with our previous study [16] and further confirmed that nevirapine 200 mg twice-daily dosing produced an excessive drug load in plasma, and that dosage adjustment based on therapeutic drug monitoring would be a necessity in Chinese HIV-infected patients, especially in those co-infected with HCV. To date, increasing evidence has demonstrated that host genetic polymorphisms may in part explain the observed inter-individual variability of drug disposition and response [29][30][31][32]. In an ethnically diverse population, both non-Caucasian ethnicity and carriage of the variant allele of the CYP2B6 G516T single nucleotide polymorphism, which is linked to altered enzyme function, were significant predictors of nevirapine C trough. This suggests that Chinese patients carrying the CYP2B6 G516T polymorphism may have reduced enzyme function, leading to a greater plasma exposure of nevirapine. The impact of weight should also be considered when explaining the differences in nevirapine drug concentration. A previous study found that higher body weight was significantly associated with lower nevirapine concentrations [27]; this suggests that Chinese patients, with relatively lower body weight, are more likely to achieve an adequate drug level with once-daily nevirapine dosing compared with Caucasians. Several limitations of our study must be addressed. Firstly, due to the inter-patient variability of nevirapine C max, i.e. the exact peak time for nevirapine differs from patient to patient, the factors influencing C max were not evaluated using univariate and multivariate analyses. Secondly, unrecognized confounders may have affected nevirapine concentrations, e.g. dosing in relation to food, concurrent medications, or genetic polymorphisms of CYP2B6, which may have significantly influenced nevirapine metabolism and clearance.
Lastly, all these patients were administered nevirapine according to the international treatment guidelines, i.e. 200 mg once daily for 14 days followed by 200 mg twice daily. So the plasma nevirapine concentration for 200 mg once daily could only be obtained for as long as 2 weeks. It was impossible to obtain long-term efficacy and safety data for this dosing regimen. Although the pilot clinical trial mentioned above suggested that this regimen was safe and effective during an 8-week period, the cohort was small and the follow-up was short. A large and long-term prospective clinical study is necessary to fully evaluate the efficacy and safety of this regimen. In conclusion, this is the first report demonstrating that the 200 mg once-daily dosing regimen might produce adequate plasma nevirapine concentrations for both inhibiting HIV and reducing hepatic toxicity in the Chinese population, which is worthy of further evaluation in a prospective randomized study. Hepatic function prior to antiretroviral treatment and HIV/HCV coinfection were found to be significantly associated with the nevirapine concentrations. Dosage adjustment based on therapeutic drug monitoring among Chinese HIV-infected patients would optimize nevirapine-containing antiretroviral therapy.

Patients
Patients were recruited from January 2009 to December 2010. Male and female antiretroviral-naïve patients with documented HIV-1 infection were eligible for inclusion if they were between the ages of 18 and 65 years with a CD4+ T cell count <350 cells/mm³ for more than 4 weeks. Main exclusion criteria were acute HIV infection, AIDS-defining illness within 2 weeks of entry, alcohol and injection drug use, acute or chronic pancreatitis, severe peptic ulcers, severe psychiatric and neurologic diseases; and, if female, pregnant, breastfeeding, or of child-bearing potential and not using adequate contraception. Laboratory exclusion criteria included white blood cell count <2.0×10^9/L, absolute neutrophil count <1.0×10^9/L, hemoglobin level <90 g/L, or platelet count <75×10^9/L, transaminase and alkaline phosphatase levels >3 times the upper limit of normal value (×ULN), and serum creatinine level >1.5×ULN. Another important exclusion criterion was nonadherence to the study treatment regimen, which was defined as less than 95% adherence.

Study design
The cohort study was approved by institutional review boards and carried out in accordance with the Declaration of Helsinki and the principles of Good Clinical Practice. Written informed consent was obtained from each patient and the study protocol was approved by the ethics committee of Peking Union Medical College Hospital. The protocol for this trial and supporting CONSORT checklist are available as supporting information; see Checklist S1 and Protocol S1. All the patients received a standard ART regimen based on nevirapine together with two nucleoside reverse transcriptase inhibitors, including stavudine plus lamivudine or zidovudine plus lamivudine. Nevirapine (Desano Pharma, Shanghai, China) was administered at 200 mg once daily for 2 weeks and 200 mg twice daily thereafter. The bioequivalence of nevirapine in reference to Viramune® from Boehringer Ingelheim was demonstrated in a previous study [33].
During the treatment period, patients were monitored at baseline and weeks 2, 4, 12, 24, 36, and 48 for clinical features (particularly severe adverse events), plasma nevirapine concentrations and laboratory values including routine blood examination, hepatic and renal function, and hepatitis B (HBV) or hepatitis C (HCV) serological state.

Sampling and bioanalysis
Sample collection. Blood samples were drawn prior to the next drug administration for C trough and/or 2 h post-ingestion for C max. All samples were collected in spray-dried powdered EDTA tubes and centrifuged in real time to obtain plasma, which was stored at −80°C and thawed on the day of analysis. The exact time of nevirapine dosing and blood sampling was recorded.
Plasma nevirapine concentration determination. The nevirapine concentration in plasma was determined by a validated HPLC assay modified from a previous study [34]. Nevirapine concentrations were analyzed on a Shim-pack CLC-ODS column (6 mm ID × 15 cm, 5 µm) with a mobile phase consisting of water–acetonitrile (23:77) at a flow rate of 1 mL/min, and the detection wavelength was 260 nm. Tegafur was used as an internal standard. The calibration curve of nevirapine was linear in the range of 0.05–10 µg/mL (r = 0.9999), and the limit of detection was 0.05 µg/mL. The RSDs of intra- and inter-run validations were less than 7%. The mean recoveries fell in the range of 90–110% for the high, middle and low concentrations. The nevirapine plasma samples demonstrated satisfactory stability.

Statistical analysis
Statistical analyses were performed with Statistical Product and Service Solutions for Windows (SPSS, version 13.0). Mean (± standard deviation, SD), median (interquartile range at the 25th and 75th percentiles, IQR) and frequencies (%) were used to describe characteristics of patients as appropriate. Normal distribution of values was examined by the Kolmogorov–Smirnov method. Categorical variables were tested with the Chi-square or Fisher's exact test, and continuous variables were tested with the Kruskal–Wallis test, Student's t test or one-way ANOVA. The differences between lead-in and steady-state periods for nevirapine plasma concentrations were assessed by the Kruskal–Wallis H test. ANOVA was used to determine whether there were significant differences in nevirapine C trough at the 6 follow-up visits in the 105 patients. An independent t test or Mann–Whitney U test was used when two groups or two variables were compared. Factors affecting nevirapine plasma exposure were estimated using univariate stratification analysis. Risk factors predicting a nevirapine trough concentration at week 2 lower than the recommended cut-off thresholds were evaluated using univariate and multivariate logistic regression. Odds ratios (OR) and 95% confidence intervals (95% CI) were also obtained. For all tests, P < 0.05 was considered statistically significant.

Supporting Information
Checklist S1 CONSORT Checklist.
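As a rough illustration of the univariate and multivariate logistic regression described in the statistical analysis above, the following Python sketch (using pandas and statsmodels) shows how odds ratios and 95% confidence intervals for predictors of a subtherapeutic week-2 trough concentration could be obtained. The input file and variable names are hypothetical placeholders, not the study data, and the study itself used SPSS 13.0.

```python
# Illustrative sketch only; variable names and the input file are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("nvp_week2.csv")  # hypothetical: one row per patient

# Binary outcome: trough concentration below the 3.0 ug/mL threshold at week 2.
df["below_3_0"] = (df["ctrough_week2"] < 3.0).astype(int)

# Candidate predictors (coded numerically, e.g. sex as 0/1, coinfection flags as 0/1).
predictors = ["sex", "age", "weight", "hbv", "hcv", "baseline_alt_high", "cd4", "log_viral_load"]

X = sm.add_constant(df[predictors])          # multivariate model with intercept
model = sm.Logit(df["below_3_0"], X).fit()

# Odds ratios with 95% confidence intervals, as reported in the statistical analysis.
or_ci = np.exp(model.conf_int())
or_ci["OR"] = np.exp(model.params)
print(or_ci.rename(columns={0: "2.5%", 1: "97.5%"}))
```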
2017-07-09T10:54:46.356Z
2013-01-24T00:00:00.000
{ "year": 2013, "sha1": "5af95fb57191563fbc005dac8821d1741d5fbcde", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0052950&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5af95fb57191563fbc005dac8821d1741d5fbcde", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
268670217
pes2o/s2orc
v3-fos-license
High-and Low-Frequency Waveform Analysis of the Marsquake of Sol 1222: Focal Mechanism, Centroid Moment Tensor Inversion and Source Time Function The seismometer onboard NASA's InSight Mars mission discovered a seismically active planet. We focused on the strongest event, named S1222a (4 May 2022, Mw ∼ 4.7), which was recorded by the Very Broad Band sensors and associated channel ELYSE and is located 37.2° away from InSight. We use two different methods based on a point-source approach for an elastic, horizontally layered medium to retrieve the source parameters of S1222a. In the first case, the seismic moment tensor inversion of high-frequency seismogram data is calculated using a matrix method for the direct waves. The process includes the generation of records in displacement using the frequency-wavenumber integration technique. A method of inversion of the moment tensor from direct P- and S-waves, which are less sensitive to path effects than reflected and transformed waves, is presented, and this significantly increases the accuracy and reliability of the method. In the second case, tensors were calculated using a common low-frequency full-waveform inversion; tests were performed to verify the plausibility of the solution obtained from the single-station calculation, and the uncertainty estimations for the inversions can be useful in future research.

Introduction
Seismic data recorded by the SEIS experiment (Lognonné et al., 2019, 2020) onboard the InSight mission (Banerdt et al., 2020) have shown that Mars is seismically active, with more than 1,300 events detected and cataloged by the InSight Marsquake Service (MQS) (Ceylan et al., 2022; Clinton et al., 2020; Giardini et al., 2020). The global seismic event rate sets Mars as moderately active, between the weak lunar activity and the terrestrial intraplate seismicity (Banerdt et al., 2020), and has provided enough seismic records to perform the first structure inversions of the planet (see Lognonné et al., 2023 for a review of the results). On 4 May 2022, NASA's InSight Mars lander detected the largest "extraterrestrial" quake ever observed on another planet: an estimated magnitude Mw 4.7 temblor that occurred on the 1,222nd Martian day, or Sol, of the mission (Kawamura et al., 2022). We can state that the double couple is very likely a valid approximation due to the small magnitude of the quake and the fact that only relatively long-period (LP) waves are used, while a 1D model is an assumption we know to be far from reality, especially due to the large scattering we do see on seismic waves (Karakostas et al., 2021; Lognonné et al., 2020; Menina et al., 2021).
The basis for quantifying the sources of seismic events is the seismic moment tensor (MT). The inversion of full waveforms for MT parameters is a widely used approach applicable to all scales: from small to large earthquakes. The accuracy and reliability of MT inversions depend on whether two major assumptions hold. First, it is assumed that the point-source approximation is valid and, second, that the effect of Mars's structure on seismic waves is modeled correctly. If either of these assumptions does not hold, the resulting MT may contain a large non-double-couple component, even if the source mechanism is a double couple. To comply with the point-source approximation in the MT inversion presented here, only seismic waves with wavelengths longer than the fault plane dimensions are used. Also, analytical expressions are drawn out relating the components of the MT to the components of displacement in the immediate vicinity of the source. The second assumption presents challenges because the Green's functions used in MT inversions depend on the structure between the source and receiver, which is not precisely known. Obviously, until more seismic stations constrain Mars, these two limitations will remain for SEIS data analysis. The first study of the focal mechanisms of three well-recorded events on Mars (S0173a, S0183a, S0235b) was presented by Brinkman et al. (2021). They showed a method that is adapted to the case of a single, multicomponent receiver and based on fitting waveforms of P and S waves against synthetic seismograms computed for the initial crustal velocity model. Later, Jacob et al. (2022) developed a method of seismic moment tensor solution and applied it to nine tectonic marsquakes. For this purpose, P and S body-wave waveforms and the amplitudes of secondary phases (PP, SS, PPP and SSS) were inverted. More information about marsquakes detected during the InSight mission can be found in, for example, Giardini et al. (2020), Clinton et al. (2020), Stähler et al. (2021), and Ceylan et al. (2022). Therefore, the aim of this study is to determine the focal mechanism of the Marsquake of Sol 1222 through high- and low-frequency waveform analysis. To achieve this goal, the authors consider two different methods. One of them uses only the direct waves, which are less sensitive to path effects than reflected and transformed waves (Malytskyy, 2010, 2016; Malytskyy & Mikesell, 2021). The second method (low-frequency full-waveform MT inversion) is based on the ISOLA software (Sokos & Zahradník, 2008). Here is an opportunity to show the advantages and disadvantages of each of them.

Theory: Waveform Inversion
Two different approaches were chosen to address the problem of the unavoidable inaccuracy of seismic wave modeling: (i) focusing only on direct waves and (ii) using low-frequency inversion. The choice of these methods corresponds to the high-frequency and low-frequency analysis of waveforms used to determine the focal mechanism of the Marsquake of Sol 1222.
Theory-High Frequency Analysis (i)
In the first case we propose to invert only the direct waves instead of the full wave field. An advantage of inverting only the direct P- and S-waves is that, compared to reflected and converted waves, they are less sensitive to the structural model used in the inversion. For example, waveforms of converted and reflected waves depend strongly on velocity contrasts below the source and receiver, and thus imprecise knowledge of subsurface structure will lead to inaccurate modeling. Waveforms of direct phases are less sensitive to subsurface layering and scattering and may carry a less distorted imprint of the source. The advantage of choosing a matrix method for calculating synthetic seismograms is its ability to analytically isolate the direct waves from the full wave field. In the earlier version of our method, as well as in most other MT inversions, waveforms at several seismic stations are simultaneously inverted (Malytskyy & D'Amico, 2015; Malytskyy, 2016). Although much more information on the source should be contained in the waveforms from several stations, we show in our study that all the components of the seismic moment tensor contribute to the waveforms at only one station and, at least theoretically, can be retrieved from them. We use a point-source approximation, assuming the location and origin time proposed by Kawamura et al. (2022). Based on forward modeling, a numerical technique is developed for the inversion of observed waveforms for the components of the seismic moment tensor M(t), obtained by generalized inversion (Malytskyy & Mikesell, 2021). Our method enables us to obtain the solution of the focal mechanism using records at only one station. The moment tensor solution by waveform inversion consists of two parts: forward modeling (mathematical modeling of seismic waves in layered media by the matrix method) and inverse modeling (the spectra of the moment tensor components are calculated using a solution of generalized inversion). Thus, based on the Thomson and Haskell matrix method (Haskell, 1951; Thomson, 1950), we develop a new analytical approach to calculate synthetic seismograms on the upper surface of layered inhomogeneous media and present an approach for determining the seismic MT and source time function (STF) from observed waveforms. We use the point-source model. The source is located inside a layer and is represented by a seismic moment tensor (Malytskyy, 2016).

Forward Modeling
We consider the equation of motion in an elastic layered medium. The displacement vector is represented through vector and scalar potentials (Malytskyy, 2016). The solution of the equation of motion for the scalar and vector potentials is represented in the form of FBM (Fourier-Bessel-Mellin) integrals. As a result, we obtain the components of displacement u_z^(0)(t, r, φ), u_r^(0)(t, r, φ) and u_φ^(0)(t, r, φ) on the free surface in the matrix form of Equation 1 (Malytskyy, 2010; Malytskyy & Mikesell, 2021), where the source is located at r = 0 and is represented by six independent seismic moment tensor components; φ and k are the station's azimuth and horizontal wavenumber, respectively (Figure 1). The functions g_i = (g_zi, g_ri)^T and g_φi contain the parameters of the structure model and the propagation effects between the source and the receiver.
Equation 1 is then expressed for the direct P- and S-waves at one station in the spectral domain (Equation 2; Malytskyy, 2016), where the displacement vector contains the components of displacement of the direct P- and S-waves, the moment tensor vector consists of the six independent components of the MT, and the matrix K contains the parameters of the structure model. Mathematical expressions for the parameters of the matrix K can be found in the article by Malytskyy and Mikesell (2021).

Inverse Modeling
Following Aki and Richards (2002), the vector M can be obtained from Equation 2 in matrix form as a generalized inversion (Equation 3), where the tilde denotes complex conjugation and transposition and the superscript −1 denotes matrix inversion. The seismic moment tensor M(ω) is calculated using Equation 3 and can be transformed into M(t) by applying the inverse Fourier transform (Equation 4), where STF(t) is the source time function and M_ij is the seismic moment tensor. Vavryčuk and Kuhn (2012) showed that the focal mechanism depends on the frequency range of the studied seismic waves. We obtained the seismic moment tensor solution (Equation 3) and a time-independent focal mechanism using records at one station for only the direct P- and S-waves. Thus, the inverse problem consists of determining the six independent components of the moment tensor M under the condition that the source location and structure model are known.

Low-Frequency Full-Waveform Moment Tensor Inversion (ii)
In this case a standard MT inversion was performed. This method (low-frequency full-waveform MT inversion) is based on the ISOLA software (Sokos & Zahradník, 2008), and it is commonly used for local seismic networks. The method closely follows the description outlined in previous work (e.g., Křížová et al., 2016). We consider a seismic source at a point (Aki & Richards, 2002; Equation 5), where u stands for the time-dependent displacement, which can be represented as the convolution (*) of the moment tensor M and Green's tensor G, written using Cartesian coordinates p, q. The MT can then be represented by six elementary moment tensors (Equation 6), following the convention of the AXITRA code (Bouchon, 1981; Coutant, 1989), and the final MT can be written as a linear combination of the elementary tensors (Equation 7). The M1 to M5 tensors in Equation 7 represent five DC (double-couple) focal mechanisms; M6 is a purely isotropic MT, so only the parameter a6 carries information about the isotropic part of the MT (a6 = Tr(M)/3). When we convolve each elementary MT from Equation 7 with the Green's function, we get an elementary seismogram E_i for every elementary MT. We obtain an overdetermined linear inverse problem for the displacements, which can be solved by the least-squares method (Tarantola, 2005). The comparison between real (u) and synthetic (s) seismograms during moment tensor inversion is also standard: we try to maximize the waveform fit between real (observed) and synthetic seismograms, represented here by the global variance reduction. We calculate both the full MT and the deviatoric MT. The deviatoric inversion is just a simplification in which the parameter a6 is set to zero. The deviatoric MT is an approximation and can be used only when a negligible isotropic component of the MT is assumed.
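The linear least-squares step described above can be illustrated with a short numerical sketch. The following Python code is only a schematic, self-contained illustration (random numbers stand in for the elementary seismograms and the observed data; it is not ISOLA and is not tied to the S1222a records): it solves for the coefficients a_i of the elementary seismograms by least squares and evaluates the variance reduction between observed and synthetic traces.

```python
# Schematic illustration of the linear part of a full-waveform MT inversion:
# observed data d are modeled as a linear combination of elementary seismograms E_i.
import numpy as np

rng = np.random.default_rng(0)

n_samples = 2000                                   # concatenated waveform samples
E = rng.standard_normal((n_samples, 6))            # columns: elementary seismograms E_1..E_6 (placeholders)
a_true = np.array([1.2, -0.4, 0.7, 0.1, -0.9, 0.05])
d = E @ a_true + 0.05 * rng.standard_normal(n_samples)  # "observed" data with noise

# Least-squares solution for the coefficients a_i (full MT; set a_6 = 0 for a deviatoric MT).
a_hat, *_ = np.linalg.lstsq(E, d, rcond=None)

# Synthetic seismogram and global variance reduction VR = 1 - ||d - s||^2 / ||d||^2.
s = E @ a_hat
vr = 1.0 - np.sum((d - s) ** 2) / np.sum(d ** 2)
print("estimated coefficients:", np.round(a_hat, 3))
print("variance reduction:", round(vr, 4))
```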
The main uncertainty comes from insufficient knowledge of the Green's function (which is connected with an inaccurate crustal model and errors in event location) and 3D structure. To reduce the influence of input parameter uncertainties, we try to use the lowest frequencies possible. During the inversion, the MT is searched for together with the centroid position and origin time. This inverse problem is linear in the MT and is solved using the least-squares method, but it is non-linear in time and space, so a grid search is used. The low-frequency full-waveform MT inversion is used to minimize the errors of the MT components in accordance with Vavryčuk (2007), who noted that a complete MT cannot be determined only from the P and S wave amplitudes when we have data from a single three-component station. Limitations in determining the focal mechanism due to the use of only one station (Šílený et al., 2022) are taken into account for the low-frequency full-waveform inversion. To show how the obtained solutions are affected by using only one station, we calculated synthetic seismograms for which we added three more fictional stations (forward simulation). The synthetics correspond to the solutions given in Maguire et al. (2023). We made full- and deviatoric-MT inversions using one, two, three or four stations. The tests were made with noise-free data, and additional uncertainty estimations with random noise were also performed.

Inversion Results for S1222a
In this section, we present the inversion results for the S1222a event on Mars (2022-05-04, P-arrival 23:27:45, 3:54 LMST, Mw 4.7, back azimuth 109°; Kawamura et al., 2022), which is located in Aeolis Southeast at 37.2° distance from InSight (station name XB.ELYSE). We used the S1222a waveform data (InSight Mars SEIS Data Service, 2019a, 2019b) provided by the VBB sensors (Lognonné et al., 2019, 2020). The raw data have only been corrected by the instrument transfer function and rotated. As mentioned above, the two different approaches were applied in this study.

Results-High Frequency Analysis (i)
As a validation test, we first present the focal mechanism of the S0235b event on Mars (26 July 2019), located at an epicentral distance of 25° (Brinkman et al., 2021). We compare the two methods: in the first, we propose to invert only the direct waves (Malytskyy, 2016), and in the second we consider direct inversion for the full moment tensor (Brinkman et al., 2021). We tried three different source depths: 17, 32, and 56 km. The TAYAK velocity model was used (Jacob et al., 2022). The focal mechanisms for the source depth of 32 km shown in Figure 2 look very similar to each other. The focal mechanisms at depths of 17 and 56 km were also very similar. (i) We use the direct P- and S-waves. Only the beginning of the P and S waveforms is needed, and these segments were free of glitches (Lognonné et al., 2020; Scholz et al., 2020). The modification of the TAYAK velocity model (Lognonné et al., 2020) used in the inversion of waveforms and for calculation of the seismic tensor is listed in Table 1. The velocity model includes the upper crustal model based on Lognonné et al.
(2020). The original displacement seismograms are filtered correspondingly between 0.2 and 9.0 Hz (see Figure 3a). The durations of the direct P- and S-waves at the station are estimated visually from the records, and the delays of the reflection-conversion phases at the respective epicentral distance and source depth are considered. For the event S1222a, we estimate them to vary between 1.1 and 4.5 s for the P-wave (Figure 3b), and between 1.4 s and 3 s for the S-wave (Figure 3c) at the station XB.ELYSE. The highest frequency, on the other hand, is controlled by the assumption of the point source and corresponds to a wavelength larger than the linear dimensions of the fault (often less than 1 km in small earthquakes). We know the time when the direct wave (P and S) arrives. The duration of the direct wave segment in the record corresponds to the duration of the process in the source. After the end of the direct wave, the record should be zero, unless other phases have arrived. Since we do not know the duration of the source, we cut off the segment of the direct wave at the places where the amplitude of the direct wave is zero. The components of the moment tensor resulting from the inversion of the direct P- and S-waveforms at only the station ELYSE (at an epicentral distance of 37.2°) using Equation 3, and the corresponding focal mechanism, are shown in Figure 4. Figure 5 shows the fit of synthetic waveforms to the observed ones for direct waves with Qmu = 600 in the crust and mantle. Synthetics are calculated using the moment tensor inversion results for the direct P- and S-waves at a source depth of 22 km (see Figure 4b). They were computed in displacement for the E, N, and Z components using the modified TAYAK velocity model. We also used Instaseis (van Driel et al., 2015) and AxiSEM (Nissen-Meyer et al., 2014). Prior to waveform fitting, we obtained the vertical Z, transverse T, and radial R components of the synthetic and observed waveforms. Direct P- and S-waves were filtered between 3 and 12 s. We manually select the windows to fit the seismic waveforms: the vertical Z and radial R components (P_Z, P_R) for the direct P waves, and the radial R and transverse T components (S_R, S_T) for the direct S waves. The alignment was done based on cross-correlation between the vertical components (for the P wave) and the transverse components (for the S wave). The estimated scalar moment of S1222a is M0 = 3 × 10^15 N·m.

Results-Low-Frequency Full-Waveform Moment Tensor Inversion (ii)
In this case we obtained the origin time and centroid depth using a grid search during the MT inversion. As an initial condition, the origin time 23:23:07 and epicenter 171°E, 5°S (Panning et al., 2022) were chosen, and the depth was 22 km (the same as in the previous case, High frequency analysis (i)). Although we also performed calculations in the 1D layered velocity model mentioned above in several frequency ranges, the solutions for the interval 0.028–0.072 Hz were chosen to illustrate the outcome of the full-waveform MT inversion. This range was used in Maguire et al. (2023).
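The grid search over centroid depth and origin-time shift described above can be sketched as follows. This Python fragment is purely illustrative (the invert_mt helper, the list of trial depths, and the data array are hypothetical placeholders, not the actual ISOLA workflow): for each trial source depth and time shift it solves the linear MT problem and keeps the solution with the highest variance reduction.

```python
# Illustrative grid search over trial depths and origin-time shifts.
# invert_mt(...) is a hypothetical helper that stands in for building elementary
# seismograms for the given depth/shift and solving E a = d by least squares.
import numpy as np

def invert_mt(depth_km: float, time_shift_s: float, observed: np.ndarray):
    # Placeholder: in a real workflow this would compute Green's functions
    # (e.g., with AXITRA), filter them, and solve the linear problem.
    E = np.random.default_rng(int(depth_km * 10 + time_shift_s)).standard_normal((observed.size, 6))
    a, *_ = np.linalg.lstsq(E, observed, rcond=None)
    synthetic = E @ a
    vr = 1.0 - np.sum((observed - synthetic) ** 2) / np.sum(observed ** 2)
    return a, vr

observed = np.random.default_rng(1).standard_normal(2000)  # placeholder for filtered data

best = None
for depth in np.arange(5.0, 31.0, 1.0):          # trial centroid depths, km
    for shift in np.arange(-10.0, 10.5, 0.5):    # trial origin-time shifts, s
        a, vr = invert_mt(depth, shift, observed)
        if best is None or vr > best[0]:
            best = (vr, depth, shift, a)

print(f"best VR = {best[0]:.3f} at depth {best[1]:.0f} km, time shift {best[2]:+.1f} s")
```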
For the S1222a event, a faulting origin is most likely. Data from the single station do not contain enough information to constrain an isotropic signature, and we use a6 from the MT in Equation 8 only as a parameter that stabilizes the inversion, with no physical meaning except its contribution to the final value of the scalar seismic moment M0. The isotropic component of the full MT solution is most likely associated with differences between the selected model and the real averaged structure along the source-station path, and might furthermore include the signature of scattering and other 3D seismic structures not modeled. Despite this, the tests with synthetic data showed that for single-station MT inversion it is preferable to calculate the full MT instead of the commonly used deviatoric MT. This is clearly visible in Figures 6a and 6b, where we first calculated input data in a forward simulation (for a depth of 6 km and a focal mechanism with strike = 292°, dip = 36°, rake = 61°) and then obtained the solution from the low-frequency MT inversion. In Figure 6a, for the full MT inversion, the focal mechanisms change less for different depths than in the case of the deviatoric MT inversion (Figure 6b). Because the input data were noise-free, it is obvious that the best solution with the highest correlation is correct in both cases, and it is at the depth of 6 km. This is the reason why we prefer to show the results for the full MT inversion and consider them more realistic. As written above, the calculation of the MT could be simplified by setting the parameter a6 = 0 in Equations 9-11 only when a negligible isotropic component is assumed. In such cases the solutions of the full MT and deviatoric MT inversions should be considered almost the same, provided that an accurate crustal model and a large data set are available. Although deviatoric MT inversion tends to be more stable and is widely used for tectonic events on Earth, in this special case, where only a single station was available and due to glitches in the seismograms, we concluded that the deviatoric MT inversion is not suitable for this data set. The focal mechanism for the full MT resulting from the inversion at a depth of 20 km is shown in Figure 6c (strike, dip, rake: 63°, 82°, 5°); the focal mechanisms close to the best solution are stable and differ by less than 5° in each of the strike, dip and rake parameters. Figure 6c is only a cutout of the more extensive grid search over depths of 5-30 km and over many origin-time shifts. Even though comparing the results of different methods does not guarantee that identical solutions are correct, we also calculated the inversion with the fixed mechanisms presented in Maguire et al. (2023). That study used the "cut and paste" method, which is less affected by inaccuracies in the velocity model because the P and S waves are separated from the surface waves during the calculation, but the main idea of the calculation is similar to that in this study. Table 3 lists the focal mechanisms for four solutions ordered from the best to the worst, together with the depth found at the maximum of the global and local variance reduction (correlation) for these prescribed mechanisms.
In the uncertainty tests, to account for the real noise, we prescribed a variance of 5.0e15 in the data samples in all cases and performed the calculations for 100 possible solutions across the depths used in the inversions (for more details see Sokos and Zahradník (2013)). The more interesting results are shown in Figure 7 (our best solution in Figure 7a). Figure 7b shows that a solution with the opposite rake angle and a reverse mechanism can easily be obtained. Figure 7c shows that the deviatoric solution is more scattered than the full MT solution (Figure 7b). Synthetic tests with more stations showed that the solution for the single station ELYSE (Figure 7d) is more ambiguous than the results obtained from two or more stations (in this case with azimuths and distances of 129°, 1,195 km; 52°, 1,202 km; 254°, 1,261 km; and 289°, 2,190 km). For illustration (Figure 8), we demonstrate that in the case of large distances between the seismic source and the receiver, and of noisy data, only partial agreement between the real and synthetic seismograms can be obtained; this is shown for the surface waves, which are crucial for our calculations.

Discussion and Conclusions

In our study, we first explored the possibility of retrieving the source parameters of the S1222a event on Mars (2022-05-04, Mw 4.7). To address the problem, we chose to invert only the direct waves instead of the full wavefield. An advantage of the direct P- and S-waves is that they are much less distorted by inaccurate modeling of velocity contrasts than the reflected and converted waves. The corresponding advantage of the matrix method of calculating the wavefield that we have developed is its ability to mathematically isolate the direct waves from the full field.

We presented a method to obtain the moment tensor solution from the direct waveforms at only one station. The method used in this study is based on the inversion approach described in Malytskyy (2016). The inversion scheme consists of two parts: forward modeling, in which the propagation of seismic waves in vertically inhomogeneous media is considered and a version of the matrix method for the calculation of synthetic seismograms at the free surface is developed; and inverse modeling, in which the spectra of the moment tensor components are calculated using a generalized inverse. As a second option, a modified version of the ISOLA software was used and the uncertainty tests were performed, although ISOLA is primarily intended for local and regional distances.
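As a schematic illustration only (the exact formulation, including regularization, is given in Malytskyy (2016)), the inverse step can be viewed as solving, frequency by frequency, a small linear system relating the observed displacement spectra d(omega) to the six moment-tensor component spectra m(omega) through the medium response G(omega):

\mathbf{d}(\omega) = \mathbf{G}(\omega)\,\mathbf{m}(\omega), \qquad \hat{\mathbf{m}}(\omega) = \left[\mathbf{G}^{*}(\omega)\,\mathbf{G}(\omega)\right]^{-1}\mathbf{G}^{*}(\omega)\,\mathbf{d}(\omega),

where the asterisk denotes the conjugate transpose; this least-squares (generalized-inverse) estimate is what is meant by the generalized inversion mentioned above.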
Recently, Maguire et al. (2023) used waveform fitting of body waves and surface waves to estimate the moment tensor solution of S1222a and found that either an E-W to SE-NW striking thrust fault or a normal fault can explain the data. For the same structural model and frequency range used in this study and in Maguire et al. (2023) for the low-frequency MT inversion, similar solutions (within the estimated error) were found. For example, the full moment tensor solution we find here based on long-period waveform inversion (Figure 6c, with the error estimation in Figure 7a) closely resembles the reverse solution of Maguire et al. (2023). Additionally, both studies find a similar optimal source depth of approximately 20 km. Brinkman et al. (2021) used waveform inversion of the polarization of filtered body waves and found that a NW-SE striking thrust fault best explains the data. This is similar to the direct P- and S-wave inversion solution we find here (Figure 4), although we find that the best-fitting fault planes strike NE-SW. Despite the range of possible moment tensor solutions for S1222a, it is encouraging that independent studies based on different methodologies, and using different structural models, point to similar solutions. However, further work should be done to understand the sources of uncertainty in single-station moment tensor inversions of marsquakes, which may help us understand the discrepancies between solutions and provide more robust constraints.

The differences among the solutions obtained in this article are due to the different approaches and frequency ranges. Method (i) does not use the surface waves, in contrast with method (ii), but the absence of this part of the waveform in the former is not, in our view, an obstacle to comparing the two proposed methods for determining the focal mechanism. In each case the best results were found by a grid search over several depths to obtain the best fit between the real and synthetic data.

Figure 2. (a) Focal mechanisms of the S0235b event obtained by inversion of only the direct waves (Malytskyy, 2016) and (b) by direct inversion for the full moment tensor (Brinkman et al., 2021) for a source depth of 32 km.

Figure 3. (a) The waveforms (converted to displacements) of the event S1222a. (b and c) The durations of the direct P- and S-waves at the station. Records were filtered in the frequency range between 0.2 and 9 Hz and are shown for the N, E, and Z components in green, blue, and red lines, respectively.

Figure 5. Waveform fits of the body waves at 22 km depth using the modified TAYAK velocity model. The direct P- and S-waves are filtered between 3 and 12 s. The synthetic direct waves are shown in red and the observed ones in black lines, respectively.

Figure 6. Solutions for the frequency range 0.028-0.072 Hz. (a) The variability of the focal mechanisms with depth for the synthetic test with full MT inversion. (b) The variability of the focal mechanisms with depth for the synthetic test with deviatoric MT inversion. (c) The full MT solution: results of the grid search beneath the epicenter.

Figure 7. Variation of the nodal lines and the P and T axes for the 100 results of the uncertainty tests. (a) Our best low-frequency full-waveform MT solution. (b) Solution "C" from Table 3. (c) Solution "C" from Table 3 when the deviatoric MT is considered. (d) Solution for synthetic data for mechanism "D" for the single station ELYSE. (e) Solution for synthetic data for mechanism "D" for four stations.

Figure 8. The normalized real (black) and synthetic (red) seismograms for the north-south and east-west components, showing a match for the surface-wave part in the frequency range 0.028-0.072 Hz.

Table 1. Modification of the TAYAK Velocity Model Used in the Inversion of Waveforms and for the Calculation of the Seismic Tensor (This Model Differs From Jacob et al., 2022). After testing, the results are presented only for the slightly modified model of Kim et al. (2023) (Table 2) used in the paper of Maguire et al. (2023).

Table 3. The Order of Solutions With Prescribed Focal Mechanism of the Low-Frequency Full-Waveform Inversion for the Frequency Range 0.028-0.072 Hz.