Evaluation of the development of general competences in undergraduate courses in nutrition (Avaliação do desenvolvimento das competências gerais em graduandos de cursos de nutrição)
The National Curricular Guidelines direct education towards changes in the teaching-learning process and seek to value training by establishing general and specific competences in their structure, emphasizing the importance of systematic and permanent follow-up evaluation aimed at improving undergraduate education. This research aimed to evaluate the development of general competences in undergraduate nutrition students. This is an analytical study with a quantitative approach and cross-sectional design, using a questionnaire elaborated and validated by Matia and Coelho (2015), composed of 44 questions, which was applied to students and teachers from four educational institutions through Google Forms. For the descriptive analysis of the data, measures of central tendency and variability were used, together with the Mann-Whitney test, Tukey's test and the t-test, with a significance level of p < 0.05, performed using the free software R, version 3.1.1. A total of 210 responses were obtained, 110 from students of both sexes and 100 from teachers. The mean age of the students was 24.66 ± 7.41 years, and the sample was predominantly female (87.3%). From the teachers' point of view, the evaluated students seem to be developing more competences and abilities in health management and health education, while from the students' point of view there was a slight emphasis only on health education. It is concluded that the students of the nutrition courses involved in the research are not fully developing the general competences required for their training, as set out in the guidelines.
Introduction
In Brazil, National Curricular Guidelines (NCG) for health courses seek to value training, propose changes in the teaching-learning process and envisage curricular changes which orientate professional training according to general and specific competencies, with an emphasis on the principles of the Unified Health System (SUS). They consider the needs of individual and collective health in order to rearrange the care model, value the promotion of health and reinforce the need for systematic evaluation, in order to determine an increase in undergraduate students' knowledge 1-2 . In recent decades, the education of health professionals has been redesigned, a process which has resulted from a diverse range of structural changes to the contemporary world: political, economic, cultural, social and technological. These changes have redirected education and health policies which, in turn, recover fundamental elements, in order to rethink the education of health professionals 3 .
According to the NCGs, a nutritionist is a professional with general, humanistic and critical training, qualified to work in all areas of food and nutrition, focusing on promoting, maintaining and recovering health and preventing diseases, contributing towards improving quality of life, based on ethical principles, and reflecting on the current reality 1 . Thus, professional practice should meet society's demand for nutritionists who adapt to constant changes.
The transition from traditional to competency-based teaching has aroused the interest of researchers on the subject, and different concepts of competency have been postulated, with competency-based education associated with "being capable." Therefore, competency can be understood as a set of attributes and skills developed by individuals [4][5] . In health, competency can be understood as an individual's capacity to take care of another while engaging their knowledge, abilities and attitudes on prevention and promotion in different areas of activity [5][6] .
Assessing the development of competencies is a challenge but this activity is encouraged as a mechanism which enables the follow-up of knowledge construction. Forms of assessment need to be established for continuous improvements in the curricula and so that efficient, conscientious and critical professionals are trained for work and society [7][8] .
This work is justified since the NCGs define health training based on general and specific competencies which include social health needs, with a particular emphasis on SUS. They also highlight that training professionals should include excellence in health promotion and recognizing social factors which trigger the health-disease process.
General competencies may be considered a cornerstone for a professional and, therefore, their evaluation is important, since this will provide a proposal of actions for lecturers in order to resolve any problems which may arise. This study sought to evaluate the development of general competencies in undergraduate nutrition students at public and private higher education institutions in a municipality in the south of Brazil.
Methodology
This is an analytical study with a quantitative, cross-sectional approach. The sample was made up of higher education students and lecturers attached to recognized nutrition courses situated in a municipality in the south of Brazil. Enrolled students and lecturers who taught classes on the courses were included in the research; all of them agreed to take part, signing an Informed Consent Form (TCLE). Public and private institutions were included in the research and identified by the letters A to D; institutions A, B and D had equal participation, but lower than that of institution C. Lecturers and students who did not fully complete the applied instrument were excluded from the research.
For data collection, versions 5A and 5B of an instrument to evaluate the development of general competencies for health courses were used, developed and validated by Matia and Coelho and divided into three dimensions: healthcare, health management and health education. The dimensions were based on the curricular guidelines for medicine courses, which were recently reformulated 10 . The self-applicable data collection instrument comprised 44 questions and a Likert-type measurement scale. Version 5A of the questionnaire was applied to the students, and Version 5B, a similar, mirror questionnaire with the same directives but a separate outlook, was applied to the lecturers. Thus, for each responding student there was a questionnaire completed by a lecturer, who used it to evaluate the student.
The instruments were applied using Google Forms software. The responses were codified, and the Likert scale was presented in numeric form from 1 to 5, where 1 represents "totally disagree" and 5 "fully agree". Measures of central tendency and measures of variability were used for descriptive data analysis. The Mann-Whitney U test, t-test and Tukey's test were applied to verify whether there was reason to believe that the values of one of the groups were higher than those of the other; results with p < 0.05 were considered significant in all tests. Statistical analyses were performed using the R statistics program, version 3.1.1. The project was referred to the Ethical Research Committee (CEP), CAAE: 53978315.4.0000.5580, and approved on 8th April, 2016.
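The statistical analysis was performed in R 3.1.1, as noted above; the short Python sketch below, run on hypothetical score vectors, merely illustrates equivalent Mann-Whitney and t-test calls and the p < 0.05 decision rule, not the study's actual data or R code.

```python
# Illustrative sketch only: the study ran these tests in R 3.1.1.
# The score arrays below are hypothetical Likert-based dimension averages (1-5).
import numpy as np
from scipy import stats

students_scores = np.array([4.2, 3.9, 4.5, 4.0, 3.8, 4.1])
lecturers_scores = np.array([3.4, 3.1, 3.6, 3.2, 3.0, 3.5])

# Mann-Whitney U test (non-parametric comparison of the two groups).
u_stat, p_mw = stats.mannwhitneyu(students_scores, lecturers_scores, alternative="two-sided")

# Independent-samples t-test.
t_stat, p_t = stats.ttest_ind(students_scores, lecturers_scores)

alpha = 0.05  # significance level used in the study
print(f"Mann-Whitney p={p_mw:.4f}, t-test p={p_t:.4f}, significant if < {alpha}")
```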
Results
Two hundred and ten (210) higher education students and lecturers from four different higher education institutions (A-D) completed the questionnaire, comprising 110 students and 100 lecturers. Of the four participating institutions, three were private and one public. They have all been offering a nutrition course for more than 10 years, use a traditional curriculum and their undergraduate courses are completed in 8 semesters. The students had an average age of 24.66 ± 7.41 years, with a minimum age of 17 and maximum of 55. Table 1 presents descriptive data from the main characteristics of the study, expressed in numbers and the percentages of respondents in each category of variables used. It should be noted that not all of the periods were in progress while the research was performed, as some of the institutions had not held a selective process for entry every semester and, therefore, there was a lower participation of students from the second, third and eighth periods.
When evaluating the responses given by the students and lecturers, Tables 2 and 3 present the average values of the scores obtained from the responses given, according to the three evaluated dimensions. It should be noted that the average of the responses given for the health management dimension was the lowest awarded by both students and lecturers, with averages of 4.07±0.53 and 3.28±0.64, respectively, compared to the healthcare and health education dimensions. It is also highlighted that the lecturers' evaluations in all the dimensions were lower than those given by the students.

Table 2. Average, median, minimum, maximum and standard deviation for the three dimensions of the questionnaire applied to students from nutrition courses in the south of Brazil, 2016

Table 3. Average, median, minimum, maximum and standard deviation for the three dimensions of the questionnaire applied to lecturers on nutrition courses in the south of Brazil, 2016

Significant differences can be observed in Table 4, where "Institution A" was identified as the one which received the best scores in the student evaluation in the health education dimension (D3). Responses from the undergraduates who were in the first four semesters (first two years) were statistically compared to those of undergraduates in the final half of the course (last two years). Analyses of the questionnaires answered by the students are expressed in Table 5, demonstrating the results for each question, dimension and general evaluation.
It is observed in the table that there was a significant difference only in some of the questions in the three dimensions.

Table 5. Average and standard deviation for the dimensions and questions, by cycle, of the questionnaires applied to students and lecturers on nutrition courses in the south of Brazil, 2016

For the lecturers, the students in the last periods of the course presented a better score in relation to those in the initial periods for the health management dimension, followed by the health education dimension. The health management dimension considers managerial activities (team, physical resources and materials), teamwork, problem solving (resolving conflicts), decision-making and leadership. This result may indicate that the courses evaluated focus on technical training to the detriment of the knowledge and skills required for a professional's adequate and comprehensive training. Significant results may be observed in the general evaluation, where almost all of the questions present statistically significant results (p<0.05), which may demonstrate that, from the lecturers' point of view, the students gradually develop general competencies throughout the course, since the final score awarded is higher than the initial one.
Discussion
Competency can be noted when the execution of a task is observed and improves day by day as a result of the teaching-learning process and the subjects' experiences. It is not just acquired knowledge; possessing knowledge and skills does not mean being competent, since it is possible to have knowledge without actually knowing how to apply it. Professional competencies are constructed through learning processes which are influenced by sets of human capacities: knowledge (information: knowing what and why), skills (techniques, capacity and knowing how) and attitudes (wanting to do, identity and determination); when developed, these dimensions generate individuals' capacity to act competently [11][12] .
In the health field, developing competencies implies linking different expertise, so that a practice is constructed which is based not only on acquiring knowledge but also on incorporating skills and personal attitudes which can, in practice, transform the reality of health, particularly within SUS, and contribute to raising the quality of the population's health. From this perspective, health management is as important as technical-scientific training 13-14 . Oliveira et al. 13 bring to light the discussion that many health courses are shown to be unrelated to management organization and to a critical debate about the systems which structure healthcare. They also emphasize that educational institutions maintain teaching which is dominated by conservative models directed towards and focusing on technical-scientific mastery of the profession.
Health education is linked to the concept of promoting health and can also be considered a health promotion strategy, aiming to raise individual or collective awareness of the population, contributing towards increasing physical and mental, environmental, personal and social well-being 14 .
Health professionals must be qualified to orientate individuals at all levels and at any cycle of life.
A nutritionist is the health professional responsible for education on healthy eating practices. In this context, a professional's actions require a comparison of new practices with the social representations of food and their meanings for the subjects in the educational process. The challenge also lies in considering health requirements, with an emphasis on the demands of society and of SUS [15][16] . Lecturers are responsible for encouraging students to keep up to date and to apply effective communication, in order to be fully trained in health education; they are then able to competently carry out the duties of a health educator.
Considering the results and reflecting on the need to develop competencies, we believe that profound changes are required, and these should be made progressively. It is highlighted that none of the courses evaluated presented a record of systematically using active methodologies on their curricula as teaching-learning strategies, keeping practice limited to the period of compulsory internships which take place at the end of the course. Therefore, we believe that implementing active methodologies is required, but they are still a challenge for teaching staff who experienced formal methodology in their own training 17 .
Article 9 of the nutrition NCGs highlights that the pedagogical project must be "...centred on the student as the subject of learning and supported by the lecturer as a facilitator and mediator of the teaching-learning process" 1 . This indicates the need to rethink professional training, independent of the institution.
Active methodologies cannot be simply applied without clear learning objectives. For Mitre 11 , when the aim is to attain more proactive students, the adoption of increasingly complex activities is required; in other words, activities which involve the students making decisions; these are always followed up by evaluating the results, and feedback is provided on all of them. Maintaining the curriculum while including active methodologies such as the flipped classroom, problematization and Team-Based Learning (TBL), among others, as teaching-learning strategies motivates students. When presented with a problem, they seek to address bottlenecks and are therefore responsible for their own development. Focusing on professional practice helps to pressure the educational institutions to rethink their curricula and the evaluation processes used, thereby enabling the reorientation of educational planning based on desirable competencies for their undergraduate students 4 .
Theoretically, the construction of competencies is an undefined process which can and should be continuous, on account of the constant changes in the world, with respect to epidemiological, demographic, dietary, nutritional and technological transitions, leading professionals and institutions to tackle new and constant training challenges. Therefore, there cannot be a single method, or just one form of training. This training cannot be understood as solely a "project" which ends with the completion of an undergraduate course but a central component of the training process 12 . It should be borne in mind that a lecturer's competence cannot be transmitted to the students: it must be actively constructed. The proposed training models are only a reference, in order to develop student competencies: every individual leaves their own personal mark on their learning process when they are an active agent of their training.
Students should consider the constant need to search for knowledge, valuing their time in the classroom and practical experience which is more in line with their own learning and permanent education. Permanent education comes from meeting the curricular guidelines of undergraduate health courses, seeking to train a critical professional who is able to learn to learn, work in a team and is mindful of the social reality, thereby enabling humane and quality care. Permanent education considers everyday work or health training, forming a strategy which enables training that is truly in line with the needs of Brazilian public health [13][14][15][16][17][18] .
There is a consensus in scientific literature that the development of competencies should be assessed; in other words, students' practical performance should be observed, in order to examine the "level of doing." Therefore, the lecturer's outlook is important but, as in other courses, it is extremely difficult for lecturers to follow-up practical activities; in other words, when they are in-service. Therefore, other subjective evaluation methods should be introduced, bearing in mind that feedback is an essential part of educational evaluation 19 .
It is important to reflect that attaining a diploma is no guarantee that competencies are fully mastered, since this construction begins on an undergraduate course but they are underpinned during professional activities; they are forged in practice and the experience which everyday activities provides. Understanding the competency development process is in line with the situations, experiences and learning transformed into knowledge and skills which, in turn, are the basis for forming attitudes and constructing the capacity to act.
When considering this proposal, practice in academic life should be emphasized: it favors learning in different environments and often cannot be reproduced in the classroom. Experience provides significant learning.
Educational institutions and course managers need to be aware of changes in teaching scenarios.
Since developing competencies is presented as a new perspective for training health professionals, in addition to encouraging critical reflection, it can meet the requirements imposed by scenarios of social change and favor the development of citizenship 20 . In order to meet the current challenges of working in health, reflection is required on training and on the profile of competencies desired for health professionals, focusing not only on specialized technical knowledge but, primarily, on the skills and attitudes to be developed for the population's health. There is no "template" or single "way" of acting: realities are different and problems are diverse. Being technically qualified is important: recognizing health problems within the biological framework, but also knowing how to intervene when considering individual characteristics and food problems in their social, political, environmental and cultural context. Thus, it is observed that this new scenario has required newly-qualified nutritionists to be fully aware and to reflect more deeply on their role as health professionals committed to promoting healthy practices and eating habits, with a view to achieving the human right to adequate food and the guarantee of sovereignty and food and nutritional security in Brazil 6 . Historically, we have been criticized by employers for the inadequate production of workers qualified to meet the demands of the labor market, as if teaching institutions were placing a product on the labor market 18 . Training competent professionals is far removed from what was done mechanically in the recent past; "complete" professionals with the desired profile need to have skills as well as techniques.
A supervised internship can be considered one of the main experiences academics have during an undergraduate nutrition course. It is a time which encourages students to develop the general competencies proposed, since it is in this environment, and at different times, that they are required to adopt attitudes of leadership, make decisions about real situations, demonstrate proactivity and communicate, among other general competencies the National Curricular Guidelines require. In this learning environment, students need to plan their actions, learn to be flexible in decision-making, work in a team, deal with various situations and learn to tackle the demands of the labor market and its realities 9 . The lecturer's role is extremely relevant: s/he becomes the motivator, someone who leads the students to reflect on their role in society as future professionals and who mediates the learning process 21 .
For students to be considered qualified for professional activities, the development of general competencies must be guaranteed during their training. It is fundamental that training promotes factors related to specific knowledge of the area, which fosters the development of aspects of everyday work in the classroom, highlighting methodologies and contents 22 .
Limited scientific production on competencies when training nutritionists, and a lack of in-depth studies about undergraduate nutrition courses may have limited reflection on the approach undertaken. We consider that this study demonstrates the reality of a percentage of schools belonging to a group with specific characteristics, reflecting the local reality, and it may not correspond to courses in other regions. However, we believe that this study is useful to all of the academic community, and also generates reflection which may assist in altering the current nutrition training scenario, as a way of tackling the challenges of academic training.
Conclusion
It was concluded that the undergraduate nutrition students involved in the research were not developing the general competencies required for their comprehensive training. When considering the different times of the course, the lecturers see students in the final years of the course as more prepared than those in the initial years.
From the lecturers' point of view, the students evaluated develop more health management and health education competencies and skills, while from the students' perspective, there was only a slight emphasis on health education. When comparing the evaluated institutions, one of these, "Institution A", received a better evaluation in developing competencies in the health education dimension.
Nutrition plays a relevant role in maintaining health, preventing diseases, treating and recovering health. Therefore, developing competencies during an undergraduate course must be evaluated, since it may promote discussion and improve these courses.
New studies should be conducted in order to discuss and follow-up the training of undergraduate nutrition students on a state and national level, due to the low number of publications in this particular area.
Author contributions
Ravazzani EDA participated in the project, data collection, analysis and discussion of the results, and writing of the article. Matia G participated in the analysis and discussion of the results and writing of the article. Coelho IM participated in the analysis and discussion of the results and writing of the article. Esteve RZ participated in the project design, analysis and discussion of the results, and writing of the article.
A Software Architecture for Autonomous Vehicles: Team LRM-B Entry in the First CARLA Autonomous Driving Challenge
The objective of the first CARLA Autonomous Driving Challenge was to deploy autonomous driving systems to deal with complex traffic scenarios, where all participants faced the same challenging traffic situations. According to the organizers, this competition emerged as a way to democratize and accelerate the research and development of autonomous vehicles around the world using the CARLA simulator, contributing to the development of the autonomous vehicle area. Therefore, this paper presents the architecture design for the navigation of an autonomous vehicle in a simulated urban environment that attempts to commit the least number of traffic infractions, using the original architecture of the CaRINA 2 autonomous navigation platform as a baseline. Our agent traveled in simulated scenarios for several hours, demonstrating its capabilities, winning three out of the four tracks of the challenge, and ranking second in the remaining track. Our architecture was designed to meet the requirements of the CARLA Autonomous Driving Challenge and has components for obstacle detection using 3D point clouds, traffic sign detection and classification employing Convolutional Neural Networks (CNN) and depth information, risk assessment with collision detection using short-term motion prediction, decision-making with a Markov Decision Process (MDP), and control using Model Predictive Control (MPC).
Introduction
Autonomous Vehicles are intelligent and robotic vehicles that navigate in traffic without human intervention, dealing with all driving scenarios and respecting traffic legislation [1]. They combine a wide range of sensors and software components towards creating a rich representation of a scene to understand the external environment, make decisions, and take actions similarly to, or even better than human drivers [2]. Therefore, they are expected to transform urban traffic improving mobility, safety and accessibility, reducing pollutant emissions, among other benefits offered [3].
The entire workflow of an autonomous system is performed by various software components and algorithms, which use concepts of machine learning, computer vision, decision theory, probability theory, control theory, and other research fields. Such a heterogeneous characteristic increases the complexity of its development and evaluation. According to Kalra and Paddock [4], and Koopman and Wagner [5], reliably assessing an autonomous system requires more than road tests, due to the complexity and variability of the driving scenario and the number of components of the system. Autonomous driving challenges have been one of the major factors contributing to the development and advance of the research field since DARPA's Grand Challenge (2004 and 2005) [6] and DARPA's Urban Challenge (2007) [7]. Other challenges, such as the Intelligent Vehicle Future Challenge (IVFC), organized by the National Natural Science Foundation of China (NSFC) [8], the Autonomous Vehicle Challenge (AVC), sponsored by Hyundai Motor Group in South Korea [9], and the European Truck Platooning Challenge (EU TPC), promoted by The European Commission [10], have also been important for advances in the autonomous vehicle research field.
Although those challenges significantly impact the progress of the research field, the performance evaluation of autonomous system is limited to some particular situations created specifically for competitions or common traffic, since the primary concern in real-world tests is participants' safety, thus limiting the tests to more controlled situations.
According to Huang et al. [11], extensive evaluations of autonomous systems require simulation, which can evaluate single components or the complete system. One of the primary advantages of simulators is that they can create specific scenarios to be repeated as many times as necessary for assessments of the performance of a component, as well as several driving situations with realistic dynamics and environmental feedback, such as weather conditions, sensor malfunctions, traffic violations, hazardous traffic events, traffic jams, crowded streets, among others. Moreover, simulation platforms can benchmark autonomous driving systems, since they can evaluate different system approaches with the same set of events and conditions. Therefore, the CARLA (CAR Learning to Act) simulator team promoted the CARLA Autonomous Driving Challenge (CARLA AD Challenge) in 2019, which comprised two categories and four tracks. Its major purpose was to evaluate the safe driving task for autonomous systems developed by different worldwide research groups, which could also rely on different sets of sensors available on each track. This paper introduces the software architecture designed by the LRM-B team, who participated in the CARLA AD Challenge, won three tracks and was ranked second on the remaining track. The CaRINA 2 (Intelligent Robotic Car for Autonomous Navigation) software architecture was used as a baseline; it is an autonomous vehicle research platform developed by the Mobile Robotics Lab (LRM) at the University of São Paulo, Brazil [12], which conducts research on intelligent and autonomous vehicle technologies.
The main contributions of our study include: • a modular software architecture for autonomous vehicles that works with different sets of sensors; • a risk assessment that uses short-term motion prediction for collision detection and decision-making; • a probabilistic decision-making approach for high-level longitudinal control of the vehicle that handles obstacles and signalized intersections; and • an obstacle detector based on sensor fusion that uses deep learning for image object detection and alignment with segmented objects in a 3D point cloud (stereo and LiDAR).
The remainder of the paper is organized as follows: Section 2 provides an overview of CARLA AD Challenge; Section 3 highlights the concepts and components of a general autonomous vehicle software architecture related to this research; Section 4 describes the software architecture developed by the LRM-B team in the competition; Section 5 discusses the results of the competition and other experiments; finally, Section 6 addresses the final remarks and suggests some future work.
CARLA Autonomous Driving Challenge
The first CARLA Autonomous Driving Challenge (CADC) [13] was a competition for autonomous vehicles in urban environments that used CARLA (CAR Learning to Act) open source simulator, which supports the development, training and validation of several architectures and approaches for autonomous driving. CARLA provides free access to many digital features such as urban layout, buildings, traffic signs, vehicles, and pedestrians, supports various weather conditions (e.g. sunny day, fog, heavy rain), sensors (e.g. cameras, LiDAR, GPS), as well as simulations of many specific driving scenarios [14]. Fig. 1 shows some of the scenarios of the competition.
The major purpose of the competition was to evaluate safe driving task for autonomous systems and settle a benchmark of different approaches of worldwide systems. It was divided into two categories, namely Perception Heavy and Map-based, which were also divided into two tracks differentiated by the set of sensors available for each system design.
The main concern of Perception Heavy (track1 and track2) was the development of perception algorithms for modeling and understanding the surroundings and, therefore, supplying the navigation stack (composed of components for decision making, path planning, and motion planning). The sensors available for track1 were LiDAR, GPS and Cameras. In contrast, only cameras and GPS were available for track2. Both tracks were provided with a topological sparse representation of the route at the beginning of each route.
Tracks in Map-based category (track3 and track4) provided prior knowledge of the environment for the autonomous system. In track3, the sensors available were LiDAR, GPS, Cameras, High Definition Map that used 3D point cloud, and a set of waypoints in a geodesic coordinate system representing the route. Track4 provided all perception information (e.g. position of obstacles and traffic signs, status of traffic lights, and waypoints of lanes along the route), being the navigation stack the main concern for this track.
The evaluation metric of the competition relied on the traffic rules compliance of the autonomous system along the routes within a time limit (estimated from the route length), in which each traffic violation penalized the score, and the stretches of completed route added bonus points to the score. Different driving scenarios were used in each route, according to traffic scenarios selected from the NHTSA pre-crash typology [13], such as control loss with no previous action, avoidance of unexpected obstacles, and crossing negotiation at an unsignalized intersection. Their simulation enabled assessments of the performance of each autonomous system in hazardous and challenging traffic scenarios with which the autonomous vehicle should deal in real-world traffic.
This article presents the overall software architecture design for all tracks, developed by LRM-B team, winner of track1, track3, and track4, and ranked second on track2 1 . Although each track comprised a different set of sensors, the architecture provides a general interface between all components for all tracks, which enabled the team to take advantage of the features of layered architectures, such as modularity, reusability and maintainability.
Related Work
The layered architecture pattern establishes a topological and hierarchical organization of the software components. Each layer gathers software components with similar features and abstraction level. The communication between components can be classified into intra-layer and inter-layer. The former refers to communication of components that belong to the same layer, whereas the latter denotes communication between components of different layers. Both communication styles use a well-defined communication interface, which enables the replacement of specific components or the entire layer, keeping the system's behavior. Such a property reinforces the modularity, scalability, reusability, and maintainability of the architectural style [15] [16], which is suitable for the design of autonomous systems, since the workflow is carried out by several components with correlated functions and can be arranged hierarchically [16].
According to Liu et al. [2] and Kato et al. [17], the standard layers of an autonomous system can be categorized as sensing, perception, path planning, decision-making, control, system management, and human-vehicle interface. The sensing layer makes data from sensors available to other components of the system, which is in line with the architecture design of some autonomous vehicles such as A1 [9], BerthaOne [18], and Autoware [19].
The components of the perception layer use those data to obtain useful information, e.g., the position of pedestrians, vehicles or objects in the image. Another component of the perception layer is localization, which estimates the position of the vehicle in a specific coordinate system (e.g., global or local) [20]. Wei et al. [21] used a filter that integrated GPS (Global Positioning System) with RTK (Real Time Kinematic) correction, wheel speed sensors, and an IMU (Inertial Measurement Unit) for a global localization system with centimeter-level precision. Alternatively, Kato et al. [19] performed localization by scan matching between 3D maps and LiDAR (Light Detection And Ranging) scanners. The perception of both vehicles was also responsible for tasks such as obstacle detection, traffic sign detection, and traffic light detection using algorithms based on image processing (e.g. Convolutional Neural Networks [22,23,24]) and point cloud clustering [25,26,27,28,17].
The information from the perception layer is taken into account by the path planning and decision-making layers, which must find a path from the current position of the vehicle towards its destination, and decide on the behavior of the vehicle according to both driving situations and traffic laws. Jo et al. [9] and Wei et al. [29] claimed such layers can be divided into three stages, namely global planning, behavior generation, and motion planning. The former, also known as route planning, computes the overall route from the current position to the destination using road-network models (e.g. High-Definition Maps [30] and Lane-level Annotated Maps [31] [32]) and heuristic functions (e.g. shortest path, speed limits along the route, and traffic flow).
Behavior generation, also known as behavior reasoning or decision-making, makes tactical decisions on the actions of the vehicle, e.g., maneuvers to be performed (e.g. overtaking, lane-change, lane following, U-turn or emergency-stop), or the way the vehicles should interact with other traffic participants [33,34]. As stated in Paden et al. [33] and Katrakazas et al. [34], some of the algorithms used for the decision process are MDP (Markov Decision Process) [35], POMDP (Partially-Observable Markov Decision Process) [36], State Machines (e.g. Finite State Machines, or Hierarchical Concurrent State Machines) [9][18], Behavior-Tree [37], and Game Theory [38].
Based on the global plan and the decision made by previous components, motion planning creates local, obstacle-free and dynamically feasible paths to be tracked by low-level controllers in the control layer, which generate brake, throttle and steering angle commands. According to Liu et al. [2] and Massera Filho et al. [39], a way to accomplish this task is to divide the movement into lateral and longitudinal controllers, which calculate actions considering the kinematic and dynamic constraints of the movement. In brief, the lateral controller uses Ackermann geometry equations to calculate the steering angle, and the longitudinal controller uses a speed profile of the path to calculate the acceleration and braking required so that the desired speed can be reached. Model Predictive Control (MPC) is a well-known technique applied to this task; it optimizes a cost function based on defined constraints and the prediction of the system behavior using its dynamic model within a time horizon [40].
The system-management layer monitors all components of the system, detects and identifies faults or abnormal behaviors, and launches recovery protocols in case of faults and unexpected conditions. According to Jo et al. [9], when a fault or abnormal condition is detected, the system switches the operation mode of the vehicle to pause, so that a human driver takes control. Wei et al. [21] also used the switch of operation mode as a recovery protocol, and redundancy of hardware components, which can migrate running algorithms from the faulty component to another healthy component.
Finally, the human-vehicle interface layer provides the graphical tools that access the system, visualizing component information and feedback, and also requesting specific missions, such as the destination to be reached by the vehicle.
Software Architecture
As addressed elsewhere, an autonomous system requires several components, and its architectural design provides an abstract view of the system operation and organization. In a layered architecture, the components have public and well-defined communication interfaces through which they exchange information with other components. This characteristic enables the definition of a common architecture for all tracks in this challenge, through adjustments of a few components while maintaining the same communication interface. This strategy reduces the time spent on the development of the agents and enables the evaluation of the autonomous navigation performance with different algorithms for a specific task. Fig. 2 shows the general software architecture designed for all agents of the LRM-B team in the CARLA AD Challenge 2019. The name "CaRINA Agent" is used to refer to this architecture in the rest of the article. The layers of the architecture are sensing, perception, navigation, control, and vehicle. The ROS (Robot Operating System) robotic framework supported the communication interface between components with a publish/subscribe pattern for message passing.
Publish and Subscribe Model
The main idea of the Publisher-Subscriber style is that each component can assume one of two roles, i.e., publisher or subscriber. The former produces information or generates data, which are required and used by components that assume the second role. Therefore, when the publisher modifies the shared resources, it notifies all components that have requested that information [15].
ROS has a centralized process, called master, which indexes each new component started; a component can act as both publisher and subscriber. Topics and types address the messages between publishers and subscribers. After a subscriber requests a topic, the messages are exchanged directly between nodes through a peer-to-peer protocol. Therefore, the communication structure provided by the framework facilitates the development of systems with many components, such as autonomous ones [41].
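The sketch below illustrates this publish/subscribe pattern with rospy; the topic name, message type, rate and constant reading are illustrative placeholders, not the actual CaRINA Agent interfaces, and publisher and subscriber are combined in one node only for brevity.

```python
# Minimal rospy publish/subscribe sketch; topic and message type are illustrative.
import rospy
from std_msgs.msg import Float32

def speed_callback(msg):
    # Subscriber role: react whenever a publisher posts a message on the topic.
    rospy.loginfo("current speed: %.2f m/s", msg.data)

if __name__ == "__main__":
    rospy.init_node("speed_monitor")
    # Subscriber side: register interest in the topic with the ROS master.
    rospy.Subscriber("/carina/vehicle_speed", Float32, speed_callback)
    # Publisher side: normally advertised by another node; combined here for brevity.
    pub = rospy.Publisher("/carina/vehicle_speed", Float32, queue_size=10)
    rate = rospy.Rate(10)  # 10 Hz
    while not rospy.is_shutdown():
        pub.publish(Float32(data=5.0))  # hypothetical constant speed reading
        rate.sleep()
```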
Sensing
The sensing layer comprises a set of sensors classified as exteroceptive and proprioceptive that measure, respectively, data from the external environment and the vehicle itself [42]. The sensing layer components obtain raw data from the sensors to be interpreted and converted into useful information by the perception layer, so that the system creates a representation of the vehicle state and its surroundings with some degree of reliability.
A layer for sensor management is an important resource for the architecture design; it improves both flexibility and scalability, and enables the replacement of one of its components, or even the entire layer, while keeping the architecture working, as long as its communication interface remains. If the set of sensors is changed, the modification in the architecture design is minimal or even non-existent. Fig. 3 shows the sensor layout of CaRINA Agent for all tracks in the competition, with three pairs of cameras for stereo vision, one LiDAR, and two GPS receivers. As addressed elsewhere, each track allows only one specific set of sensors; therefore, CaRINA Agent uses only a subset of the sensors (Fig. 3) in each track. Some pseudo sensors (e.g. CANBus and ObjectFinder) that provide additional information are also available in the competition. Table 1 shows the sensors used in each track.
• GPS: This sensor provides the position of the vehicle in a geodesic coordinate system,

$$p_{geo} = (lat, lon, alt)_k \qquad (1)$$

where lat and lon correspond to the latitude and longitude, alt denotes altitude, and k is the timestamp of the frame.
• Cameras: Six cameras grouped into 3 stereo pairs placed at a height of 1.8m were used (see Fig. 3). The size of the images for the stereo cameras named "Stereo right" and "Stereo left" is 600 × 320 (width × height), and for the "Stereo center" camera it is 1080 × 540 (width × height).
The next section provides detailed information about the camera system.
• CANBus: This proprioceptive pseudo sensor provides information on the internal state of the vehicle, such as speed, steering angle, shape dimensions, among others.
• ObjectFinder: This pseudo sensor provides the position and orientation of all dynamic (e.g. vehicles and walkers) and static (e.g. traffic signs and traffic lights) objects belonging to the simulated environment, as well as information about the state of traffic lights and shape of the objects.
Perception
The components of the perception layer transform sensor data into useful information using algorithms and theoretical definitions from different research fields, such as Computer Vision, Linear Algebra, Probability, and Machine Learning [2]. This information enables the representation of the vehicle state and its surroundings, supplying the components of the navigation layer in the architectural design shown in Fig. 2.
Localization
After the position has been converted from the geodesic to the UTM (Universal Transverse Mercator) coordinate system ($p_{UTM}$), which is a Cartesian coordinate system for localization on Earth, the heading of the vehicle can be estimated from either two sequential timestamps ($k$ and $k+1$) or the readings of two receivers (GPS front and GPS back) at the same timestamp. In both cases, the arctan function is applied as follows:

$$\theta = \arctan\left(\frac{y_{k+1} - y_{k}}{x_{k+1} - x_{k}}\right) \qquad (2)$$

where $\theta$ is the heading, the tuple $(x, y)$ is the position in UTM, and the indices $k+1$ and $k$ correspond to front and back, respectively. The GPS output is defined as

$$p = (x_{back}, y_{back}, alt, \theta) \qquad (3)$$

where $p$ is the position's vector state, $(x_{back}, y_{back})$ are the lateral and longitudinal Cartesian coordinates from the GPS back (see Fig. 3), or from the GPS reading at timestamp $k+1$, $alt$ is the altitude from the sensor, and $\theta$ is the vehicle's orientation (see Eq. 2). An Extended Kalman Filter (EKF) [43] was also applied towards reducing the noise effect and avoiding a large innovation between sequential readings, which can result from either sensor faults or surrounding effects typical of GNSS-based (Global Navigation Satellite System) localization systems, such as multi-path, signal loss and antenna masking [44,45]. Fig. 4 displays a flowchart of the localization system using the EKF filter.
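A minimal sketch of the heading estimate in Eq. 2 is given below, assuming the third-party utm Python package for the geodesic-to-UTM conversion (the paper does not name a specific conversion library); the coordinates in the usage example are hypothetical.

```python
# Sketch of Eq. 2: heading from two GPS fixes converted to UTM coordinates.
import math
import utm  # assumed third-party package for geodesic-to-UTM conversion

def heading_from_two_fixes(lat_back, lon_back, lat_front, lon_front):
    # Convert both geodesic fixes to UTM (Cartesian) coordinates.
    x_b, y_b, _, _ = utm.from_latlon(lat_back, lon_back)
    x_f, y_f, _, _ = utm.from_latlon(lat_front, lon_front)
    # Heading is the arctangent of the displacement between the two receivers
    # (or between timestamps k and k+1 when a single receiver is used).
    return math.atan2(y_f - y_b, x_f - x_b)

theta = heading_from_two_fixes(-22.0001, -47.9001, -22.0000, -47.9000)  # hypothetical fixes
print(f"heading: {math.degrees(theta):.1f} deg")
```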
An observation vector $z$ (Eq. 4) from the GPS output $p$ (Eq. 3), a control input vector $u$ (Eq. 5) from the CAN sensor, and a state vector $\hat{q}$ (Eq. 6), which estimates the position and orientation of the vehicle, were used for CaRINA Agent:

$$z_k = (x^{o}_{k}, y^{o}_{k}, \theta^{o}_{k}) \qquad (4)$$

$$u_k = (v_k, \kappa_k) \qquad (5)$$

$$\hat{q}_k = (\hat{x}_k, \hat{y}_k, \hat{\theta}_k) \qquad (6)$$

where $(x^{o}_{k}, y^{o}_{k}, \theta^{o}_{k})$ is the observation vector from the GPS sensor, equivalent to $p = (x_{back}, y_{back}, \theta)$ (Eq. 3), and $(v_k, \kappa_k)$ are the velocity and curvature from the CANBus sensor. The curvature is estimated using the vehicle baseline, $L = 2.85\,m$, and the current steering angle. In addition, $(\hat{x}_k, \hat{y}_k, \hat{\theta}_k)$ is the final pose estimation, and $k$ the timestamp. Equation 7 shows the system's dynamic equations:

$$\hat{x}_{k+1} = \hat{x}_k + v_k \Delta t \cos(\hat{\theta}_k), \qquad \hat{y}_{k+1} = \hat{y}_k + v_k \Delta t \sin(\hat{\theta}_k), \qquad \hat{\theta}_{k+1} = \hat{\theta}_k + v_k \kappa_k \Delta t \qquad (7)$$

in which $\Delta t$ is the time elapsed between predictions.
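The snippet below sketches only the pose prediction step implied by Eqs. 5-7, with the curvature derived from the steering angle and the wheelbase L = 2.85 m; a complete EKF would also propagate the state covariance and fuse the GPS observation of Eq. 4 in an update step, which is omitted here.

```python
# Sketch of the EKF prediction step using the velocity/curvature input of Eq. 5.
import numpy as np

def predict_pose(q, u, dt):
    """q = (x, y, theta) pose estimate, u = (v, kappa) speed and curvature."""
    x, y, theta = q
    v, kappa = u
    x_next = x + v * dt * np.cos(theta)
    y_next = y + v * dt * np.sin(theta)
    theta_next = theta + v * kappa * dt
    return np.array([x_next, y_next, theta_next])

# Hypothetical values: 5 m/s, curvature from the steering angle and wheelbase L.
L = 2.85
steering_angle = 0.05  # rad
u = (5.0, np.tan(steering_angle) / L)
q = predict_pose(np.array([0.0, 0.0, 0.0]), u, dt=0.1)
print(q)
```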
Obstacle Detection
CaRINA Agent employs two vision systems, namely LiDAR-based and stereo camera-based, with perception capabilities for track1, track2 and track3. They provide three-dimensional point clouds in a Cartesian coordinate system (x, y, z) and perform the obstacle detection task. Fig. 5 shows a general overview of the obstacle detection flowchart using 3D LiDAR or stereo cameras.
LiDAR-Based Obstacle Detection. Unlike a commercial LiDAR, the simulated sensor offers neither intensity nor ring data, since it is an implementation based on ray-casting. Therefore, approaches based on compression of rings and virtual scans, such as those presented in [46] and [47], could not be used during the competition.
Due to such a lack of information in the point clouds, we decided to use a simple obstacle detection algorithm based on a height map approach, where the xy-plane in front of the car is divided into a 500 × 500 grid of 10cm resolution. To construct the height map, each point in the point cloud is compared with the other nearest points contained inside the same grid cell. The greatest height difference between two points in the grid cell is compared with a threshold: if it is greater than 15cm, the cell is considered occupied (obstacle); otherwise, the cell is considered a cleared area. The occupied area is saved as a point cloud where, for each point, (x, y) are the same coordinates as the center of the cell and z is a height value. The system segments the point cloud containing the height map (occupied area) into instances and infers basic shapes for each obstacle, for making decisions on the obstacles found on the road and performing further tasks, such as tracking.
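A simplified numpy sketch of this height-map test (10 cm cells, 15 cm height threshold) is shown below; the cell bookkeeping is reduced to essentials and does not reproduce the exact 500 × 500 grid handling of the real component.

```python
# Sketch of the height-map occupancy test described above.
import numpy as np

def height_map_obstacles(points, cell=0.10, threshold=0.15):
    """points: (N, 3) array of (x, y, z) LiDAR returns in front of the car."""
    # Map each point to a grid cell in the xy-plane.
    ix = np.floor(points[:, 0] / cell).astype(int)
    iy = np.floor(points[:, 1] / cell).astype(int)
    occupied = []
    for cx, cy in set(zip(ix.tolist(), iy.tolist())):
        mask = (ix == cx) & (iy == cy)
        z = points[mask, 2]
        # A cell is an obstacle if the height spread inside it exceeds the threshold.
        if z.max() - z.min() > threshold:
            occupied.append(((cx + 0.5) * cell, (cy + 0.5) * cell, z.max()))
    return np.array(occupied)
```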
In the next step, the system uses the DBSCAN algorithm [48] to cluster and separate points belonging to different objects (e.g., cars, pedestrians, poles, among others). DBSCAN returns clusters for each object, and a bounding box is then fitted by the rotating calipers algorithm implemented in OpenCV, which returns the minimum rectangle fitted to the cluster in the xy-plane. The height of the cluster completes the 3D box around each cluster.
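The clustering and box-fitting step can be sketched with scikit-learn's DBSCAN and OpenCV's minAreaRect (a rotating-calipers implementation); the eps and min_samples values below are illustrative, not the parameters used in the competition.

```python
# Sketch of clustering the occupied cells and fitting an oriented box per object.
import numpy as np
import cv2
from sklearn.cluster import DBSCAN

def fit_boxes(obstacle_points, eps=0.5, min_samples=5):
    """obstacle_points: (N, 3) occupied-cell centers (x, y, z_height)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(obstacle_points[:, :2])
    boxes = []
    for label in set(labels) - {-1}:  # -1 marks DBSCAN noise points
        cluster = obstacle_points[labels == label]
        xy = cluster[:, :2].astype(np.float32)
        (cx, cy), (w, l), angle = cv2.minAreaRect(xy)  # min rectangle in the xy-plane
        height = float(cluster[:, 2].max())             # cluster height closes the 3D box
        boxes.append({"center": (cx, cy), "size": (w, l, height), "yaw_deg": angle})
    return boxes
```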
Stereo-based Obstacle Detection. Driving in urban scenarios requires detailed and precise 3D perception of the environment surrounding the vehicle. LiDAR was an effective sensor for performing this task on track1 and track3. Nevertheless, on track2 the agent must drive to a destination point using cameras only, and the challenge provides no sensors that supply RGB-D images or depth information other than LiDAR.
We used a stereo system to perform scene reconstruction and obstacle detection. The stereo approach was chosen, since it enables 3D reconstruction with the use of only two monocular cameras with the correct calibration matrix.
In track2, we use stereo cameras to create RGB-D images. The intrinsic and extrinsic parameters of the cameras were manually defined in the autonomous sensor settings. For stereo left and stereo right, we set focal length f = 44 and field of view fov = 60 deg, and stereo center was configured with f = 33 and fov = 40 deg. In all cases the baseline distance, which is the distance between the two cameras in the stereo rig, was 24cm. Short baselines are more accurate for stereo estimation and obstacle detection, while large baselines enable the detection of obstacles far from the agent. A medium baseline was used to balance these features. Fig. 3 depicts the poses of the three stereo rigs on the car roof.
ELAS (Efficient LArge-scale Stereo) algorithm extracted disparity maps for depth estimation [49], and, subsequently, using disparity maps and camera calibration, the system built depth images and point clouds similar to those generated by LiDAR, but enhanced with RGB information.
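The agent uses ELAS for disparity estimation; the sketch below uses OpenCV's StereoSGBM as a readily available stand-in to illustrate the same disparity-to-depth step (Z = f·B/d) with the 24 cm baseline, so the matcher and its parameters are assumptions rather than the team's actual pipeline.

```python
# Sketch of the disparity-to-depth step with StereoSGBM standing in for ELAS.
import numpy as np
import cv2

def depth_from_stereo(left_gray, right_gray, focal_px, baseline_m=0.24):
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    # SGBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    # Depth (m) from the standard stereo relation Z = f * B / d.
    with np.errstate(divide="ignore"):
        depth = np.where(disparity > 0, focal_px * baseline_m / disparity, 0.0)
    return depth
```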
The same algorithm used for obstacle detection using Li-DAR's point cloud was applied for obstacle detection using the RGB point cloud from the stereo system. Overall, for track2, the RGB point cloud provided the localization of obstacles, traffic lights and traffic signs.
A method based on a virtual scan from a 3D LiDAR, proposed by Petrovskaya et al. [47], was adapted to detect obstacles using the stereo system. Petrovskaya et al. [47] consider obstacles above the ground and up to two meters in height and build a 3D grid in spherical coordinates for the classification of obstacles into different types. This method is also able to decrease the amount of data in the point cloud. Each spherical grid cell (a cone from the origin to the obstacle point) is called a virtual ray of the virtual scan. Based on this idea, we used the depth image calculated by the ELAS algorithm and considered it a grid in a spherical coordinate system. In this case, the sizes of the spherical grid and of the columns in the depth image are the same. Columns are grid cells (virtual rays) and rows are compared to rings in a 3D LiDAR. Angles between points contained in rays and angles between rays can be calculated according to the camera field of view and image resolution.
The virtual scan method was applied for ground removal and obstacle detection, as suggested in Petrovskaya et al. [47]. The first step is to determine obstacle points by traversing the rays (in our case, the columns in the depth image) from the lowest vertical angle (bottom row) to the highest (top row). In Fig. 7, considering the three points (A, B and C) in the virtual ray, if the two normalized vectors AB and BC are parallel to each other, the angle between them must be close to 0 and the dot product must be close to 1. For perpendicular vectors BC and CD (obstacle), the dot product between them is close to 0. The method also eliminates outliers, as for vectors DE and EF (point E is an outlier): despite their opposite directions, they are approximately parallel (see Fig. 7) and the dot product will also be close to 1, as in the first case. Therefore, the application of a threshold to the dot product is sufficient for the concomitant elimination of ground and outliers in the depth image.
This filter aims at eliminating both the spray noise (outliers) caused by an erroneous calculation of disparity in the depth image and flat areas belonging to the ground; only points of obstacles must be maintained. The first objective was successfully achieved, since the spray noise from erroneous disparities was eliminated (see Figs. 8b and 8d). The filter worked partially in the second task: while the furthest points belonging to the ground were effectively eliminated, the closest points of the ground were kept, probably due to the proximity between pixels in the closest areas (in this case, vectors AB and BC are not completely parallel and the slope between them might be greater than zero). Fig. 8 shows the point cloud generated from the depth image with and without the filter.
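The following sketch applies the dot-product slope test along one virtual ray (one image column), assuming the column has already been back-projected to ordered 3D points; the threshold value is an illustrative choice, not the one used by the team.

```python
# Sketch of the slope test along a single virtual ray (one depth-image column).
import numpy as np

def obstacle_points_in_ray(ray_points, dot_threshold=0.9):
    """ray_points: (M, 3) 3D points of one column, ordered bottom row to top row."""
    keep = []
    for i in range(len(ray_points) - 2):
        a, b, c = ray_points[i], ray_points[i + 1], ray_points[i + 2]
        ab = (b - a) / (np.linalg.norm(b - a) + 1e-9)
        bc = (c - b) / (np.linalg.norm(c - b) + 1e-9)
        # Near-parallel consecutive vectors (|dot| close to 1) indicate ground or an
        # outlier; a dot product near 0 indicates a vertical jump, i.e. an obstacle.
        if abs(np.dot(ab, bc)) < dot_threshold:
            keep.append(ray_points[i + 2])
    return np.array(keep)
```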
Since the ground had not been completely removed and the obstacles had not been totally separated, the same methods used for 3D LiDAR object detection (height map, clustering) were applied to separate the obstacles from the ground (see the flow diagram in Fig. 5). The stereo-based system plays two roles, namely supporting obstacle detection with the RGB-D point cloud and localizing traffic signs and traffic lights with depth enhancement after 2D detection (see section 4.3.3). We used the left camera of stereo center for obstacle detection and classification in an RGB image. A Convolutional Neural Network for object detection and classification was trained with images collected and labeled by our team in several simulated environments under various weather conditions. Our stereo camera system was able to detect obstacles up to 30m away using our trained classifier and the stereo point cloud.
Traffic Light and Sign Detection
A Convolutional Neural Network (CNN) known as YOLOv3 was used for the detection of traffic lights and traffic signs in images [22]. An image dataset collected and labeled throughout the training routes was split into training, validation and test sets. The bounding boxes of the annotated images belonged to eight classes, namely horizontal stop, vertical stop, traffic lights (green, yellow, red), and speed plates (90km/h, 60km/h and 30km/h). The original architecture described by Redmon and Farhadi [22] was trained. Fig. 9 shows inference results from the trained detector in different situations on validation routes of the CADC.
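Inference with a trained YOLOv3 model can be sketched with OpenCV's DNN module, as below; the configuration and weight file names, class ordering and thresholds are placeholders, not the team's actual trained files or settings.

```python
# Sketch of YOLOv3 inference via OpenCV DNN; file names and classes are placeholders.
import cv2

CLASSES = ["stop_h", "stop_v", "tl_green", "tl_yellow", "tl_red",
           "speed_90", "speed_60", "speed_30"]  # the eight annotated classes

net = cv2.dnn.readNetFromDarknet("yolov3_signs.cfg", "yolov3_signs.weights")

def detect(image_bgr, conf_threshold=0.5):
    blob = cv2.dnn.blobFromImage(image_bgr, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    detections = []
    h, w = image_bgr.shape[:2]
    for out in outputs:
        for row in out:  # row = [cx, cy, bw, bh, objectness, class scores...]
            scores = row[5:]
            class_id = int(scores.argmax())
            if scores[class_id] > conf_threshold:
                cx, cy, bw, bh = row[0] * w, row[1] * h, row[2] * w, row[3] * h
                detections.append((CLASSES[class_id], float(scores[class_id]),
                                   (cx - bw / 2, cy - bh / 2, bw, bh)))
    return detections
```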
Hazardous Obstacle Monitor
Besides obstacle detection, risk assessment is another important component of an autonomous system for ensuring safe driving. According to Hamid et al. [50], it estimates potential accident threats, including collisions with other traffic participants. Collisions are classified into three main types, i.e., collisions with static obstacles, dynamic obstacles, and unexpected obstacles (which may occur due to occlusions). Therefore, both quantitative risk evaluation and binary collision prediction can feed decision-making algorithms towards appropriate actions for the avoidance of collisions.
According to Kim and Kum [51], and Mechernene et al. [52], quantitative risk evaluations use quantitative metrics as risk indicators, such as Time-To-Collision, Time-To-React, Distance-To-Collision, among others. In turn, binary collision prediction determines whether or not a collision will occur in a given traffic scene. Li et al. [54] developed a path planning algorithm with sampling trajectories, which removes trajectories colliding with obstacles. Artuñedo, Godoy and Villagra [55] presented driving-corridor path planning and short-term motion prediction for obstacle-aware navigation of autonomous vehicles. Such prediction assumes a constant velocity vector for other traffic participants to reduce computational costs during navigation.
The risk assessment component of CaRINA Agent creates a safe zone around the ego-vehicle trajectory and applies a short-term motion prediction assuming constant velocity and orientation for estimating the risk of collision between traffic participants. Fig. 10 illustrates the way the approach works. The ego-vehicle's trajectory is divided into three attention zones according to their proximity to the ego-vehicle, namely danger, warning, and safe. Such zones are used for the classification of different threat levels, improving decision-making on safety actions, i.e., if a trajectory conflict is detected in the first zone (danger), an emergency stop must be executed; however, if it occurs in the 'warning' or 'safe' zones, the vehicle should manage its velocity using Distance-To-Collision (DTC) as input to the decision-making algorithm (see Section 4.4).
The detection of conflicts between trajectories follows two different rules, depending on the information available about the traffic scene. The minimum information is the position of the obstacles and an estimate of their shape. If their orientation and velocity are also available, a short-term motion prediction estimates their future positions assuming constant velocity and orientation, and uses them to detect conflicts. Otherwise, a conflict is determined by the proximity of the obstacle to the ego-vehicle's trajectory, with a 1 m threshold [50].
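A minimal sketch of this zone-based conflict check is given below: an obstacle, optionally propagated forward under the constant-velocity assumption, is tested against the ego trajectory with the 1 m clearance, and the arc length of the first conflict decides the attention zone. The zone limits, prediction horizon and helper names are assumptions made for illustration.

```python
import numpy as np

DANGER_M, WARNING_M = 8.0, 20.0            # zone limits along the path (m), illustrative

def predict_constant_velocity(pos, vel, horizon=2.0, dt=0.2):
    """Short-term motion prediction with constant velocity and orientation."""
    steps = np.arange(0.0, horizon + dt, dt)[:, None]
    return np.asarray(pos, dtype=float) + steps * np.asarray(vel, dtype=float)

def conflict_zone(path_xy, obstacle_xy, obstacle_vel=None, clearance=1.0):
    """Return 'danger', 'warning', 'safe' or None for the first detected conflict."""
    path_xy = np.asarray(path_xy, dtype=float)
    future = (predict_constant_velocity(obstacle_xy, obstacle_vel)
              if obstacle_vel is not None
              else np.asarray(obstacle_xy, dtype=float)[None, :])
    # arc length of every path point, measured from the ego-vehicle
    arc = np.concatenate([[0.0], np.cumsum(np.linalg.norm(np.diff(path_xy, axis=0), axis=1))])
    for pos in future:
        hits = np.where(np.linalg.norm(path_xy - pos, axis=1) < clearance)[0]
        if hits.size:
            dtc = arc[hits[0]]             # distance-to-collision along the path
            return "danger" if dtc < DANGER_M else "warning" if dtc < WARNING_M else "safe"
    return None
```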
Navigation
The navigation system relies only on waypoints to perform each mission, i. e., CaRINA agent does not use High Definition Maps or other types of maps to navigate in the simulated urban environment.
Planning
The route followed by the agent is composed of waypoints. However, they are generally sparsely spaced, which reduces the smoothness of the path. To overcome this issue, the waypoints are interpolated by cubic splines, so that a dense representation of the path is obtained and smoothness is increased.
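A small sketch of this densification step is shown below, using SciPy cubic splines parameterized by accumulated arc length; the sampling step and function name are illustrative choices rather than the values used by the team.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def densify_waypoints(waypoints, step=0.5):
    """Interpolate sparse (x, y) waypoints with cubic splines over arc length."""
    wp = np.asarray(waypoints, dtype=float)
    s = np.concatenate([[0.0], np.cumsum(np.linalg.norm(np.diff(wp, axis=0), axis=1))])
    spline_x, spline_y = CubicSpline(s, wp[:, 0]), CubicSpline(s, wp[:, 1])
    s_dense = np.arange(0.0, s[-1], step)          # resample every `step` meters
    return np.column_stack([spline_x(s_dense), spline_y(s_dense)])

# Example: four sparse waypoints turned into a dense, smooth path.
dense_path = densify_waypoints([(0, 0), (10, 1), (20, 5), (30, 12)])
```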
Decision Making
During navigation, the ego-vehicle must adopt some tactical behaviors for safely reaching its destination (e.g. avoiding static and dynamic obstacles). Considering only static obstacles, the ego-vehicle must plan a trajectory to avoid them, while guaranteeing some level of comfort for the passengers. This task becomes more complex when other traffic participants are involved. For example, intersection negotiation and overtaking require a higher level of reasoning than a simple deviation from a static obstacle on the road. Therefore, the ego-vehicle must consider its future actions as well as state transition of other agents, such as color changes in traffic light. Since communication between the ego-vehicle and infrastructure is not available, the inference of future states of such agents is uncertain. Towards dealing with the inherent uncertainty, the problem can be modeled as a Markov Decision Process (MDP) [35].
MDP is a mathematical framework that solves problems subject to uncertainty [56].
It is defined by the tuple (S, A, T, R, γ), where S and A are the state and action spaces, respectively. When taking action a ∈ A in state s ∈ S, the agent reaches state s' ∈ S. The conditional probability function T(s, a, s') = Pr(s'|s, a), which describes the probability of reaching s' from s after taking a, models the uncertainty related to state transitions. R(a, s) is the expected reward when action a is taken in s. The MDP aims to compute a policy π* : s → a that maximizes the expected sum of discounted rewards,

π* = arg max_π E[ Σ_{t≥0} γ^t R(a_t, s_t) ],

where γ ∈ [0, 1) is the discount factor, which prioritizes immediate rewards.
The state space s = [v d_ϕ ϕ d_v]^T encompasses the velocity v of the agent, the distance d_ϕ to the next traffic light and its color ϕ, and the distance d_v to a vehicle observed on the agent's route and assumed to be a static obstacle.
The agent action space is composed of three actions, A = {Brake, Stay Constant, Accelerate}, which stand for a negative, a null and a positive speed-rate change, respectively. The agent's velocity is updated as v_{t+1} = v_t + a_0 Δt, where a_0 is the speed-rate change of the agent and Δt is the time step. The traffic light state ϕ changes according to a stochastic model parameterized by the transition probabilities p_1, p_2 and p_3. The reward model considers the agent's speed and a reference speed: the agent receives a negative reward when its speed deviates from the reference speed, which depends on d_ϕ, ϕ and d_v. If d_v < 2 m, or ϕ = RED and d_ϕ < 12 m, the reference speed is chosen to be null; otherwise, v_ref is chosen to be just below the speed limit of the road. The main parameters of the MDP model are shown in Table 2.
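The reference-speed rule of the reward model can be written down directly, as sketched below. The "just below the speed limit" factor and the absolute-deviation penalty are assumptions, since the exact expressions are not reproduced here.

```python
def reference_speed(d_v, d_phi, phi, speed_limit):
    """Reference speed: stop for a close vehicle or a nearby red light, else cruise."""
    if d_v < 2.0 or (phi == "RED" and d_phi < 12.0):
        return 0.0
    return 0.9 * speed_limit        # "just below the speed limit" (factor assumed)

def reward(v, d_v, d_phi, phi, speed_limit):
    """Negative reward growing with the deviation from the reference speed
    (the penalty shape is assumed; only its sign and dependencies are stated in the text)."""
    return -abs(v - reference_speed(d_v, d_phi, phi, speed_limit))
```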
Classical methods applied to MDP, such as the value iteration and policy iteration algorithms [56], perform poorly in large state spaces, which are assumed to be discrete. Such algorithms also require Pr(s'|s, a) to be explicitly defined. Therefore, an approach that can handle large state spaces while ensuring at least near-optimal policies must be employed. The online, continuous MDP-POMDP (Partially Observable MDP) solver ABT (Adaptive Belief Tree) [36] is used in the present study.
Control
The control layer generates the steering, throttle and brake commands for keeping the agent in the planned trajectory. Decision-making and local path planning modules in the navigation layer set a desired vehicle trajectory in terms of agent action space. Two closed control loops are given those reference values and return control actions related to braking, throttle and steering commands, which are directly sent to the simulator interface to be executed.
Lateral Control
The lateral control, which generates the steering signal, is managed by the Model-Based Predictive Control (MPC), in which a cost function is optimized along a predefined time horizon H, thus resulting in a sequence of actions, one for each time step ∆t. The immediate action is executed and the process is restarted in the next time step, leading to a receding horizon optimization.
The formulation of the planner as a convex optimization problem enables the solution to be computed within a short time step. However, the constraints defining the vehicle's motion model are essentially non-holonomic. Car-like robots can assume positions on the 2-D plane, different headings and steering angles, thus adding up to four degrees of freedom. However, they are subject to the following two kinematic constraints: a) the vehicle is allowed to move only forward and backward and b) the steering angle is bounded [34]. Therefore, the actual car motion and the planned trajectory can differ when the planner neglects dynamic factors. Fig. 11 shows a bicycle model used to represent car-like vehicles, which are characterized by the Ackermann steering geometry [57], moving with longitudinal velocity v. The front wheel is able to turn and defines the steering angle φ, whereas the rear wheel is always aligned with the bicycle body. According to Fig. 11, θ represents the heading of the vehicle, and P is the guiding point controlled so as to follow the assigned path. The intersection between the lines that pass through the rear and front wheel axes provides the Instantaneous Center of Curvature (ICC). The distance between ICC and P represents the radius of curvature R, and the curvature of the vehicle is given by κ = 1/R. By considering that the wheels roll without slipping, only the kinematic equations need to be considered and the lateral dynamic effects can be neglected [58]. Therefore, the considerations made so far result in the following kinematic model [59]: dx/dt = v cos θ, dy/dt = v sin θ, dθ/dt = v κ, dκ/dt = τ, where τ = (dφ/dt)/(D_bl cos² φ) and D_bl denotes the distance between the wheel axes. The motion constraints are added to the optimization problem by means of the third power of ∆t on the basis of (12) [40], where v is computed by the decision-making module (considered constant in the optimization). The cost function is defined as the sum of the quadratic differences between the decision variables and the reference path, and also the quadratic of τ, where C_x, C_y, C_θ, C_κ and C_τ are cost weights tuned manually.
The chosen parameters are shown in Table 3: ∆t = 1 s, H = 4 s, C_x = 5, C_y = 5, C_θ = 10, C_κ = 100 and C_τ = 10. The optimization is performed with the Python library scipy.optimize.
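The receding-horizon optimization can be sketched with scipy.optimize as below. This is a simplified, unconstrained illustration combining the kinematic rollout with the quadratic cost weights of Table 3; the actual formulation adds the motion constraints and a convex structure, and the variable names here are ours.

```python
import numpy as np
from scipy.optimize import minimize

DT, H = 1.0, 4                                          # time step (s) and horizon length
CX, CY, CTH, CK, CTAU = 5.0, 5.0, 10.0, 100.0, 10.0     # cost weights (Table 3)

def rollout(state, taus, v):
    """Integrate the kinematic model (x, y, theta, kappa) driven by curvature rates tau."""
    x, y, th, k = state
    traj = []
    for tau in taus:
        x, y = x + v * np.cos(th) * DT, y + v * np.sin(th) * DT
        th, k = th + v * k * DT, k + tau * DT
        traj.append((x, y, th, k))
    return np.array(traj)

def mpc_step(state, reference, v):
    """One receding-horizon step: optimize tau over the horizon, return only the first value.

    reference: (H, 3) array with the x, y and heading of the reference path.
    """
    def cost(taus):
        traj = rollout(state, taus, v)
        dx = traj[:, 0] - reference[:, 0]
        dy = traj[:, 1] - reference[:, 1]
        dth = traj[:, 2] - reference[:, 2]
        return (CX * dx @ dx + CY * dy @ dy + CTH * dth @ dth
                + CK * (traj[:, 3] @ traj[:, 3]) + CTAU * (taus @ taus))
    return minimize(cost, np.zeros(H)).x[0]
```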
Longitudinal Control
The solution of the MDP problem is the speed rate change to be applied to the agent. Given the current agent's velocity, a new velocity to be tracked can be computed. This tracking is performed by a Proportional-Integral (PI) control.
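A minimal discrete PI loop for tracking that velocity is sketched below; the gains, time step and output saturation are illustrative values, not the ones used on the agent.

```python
class PIVelocityController:
    """Discrete PI controller that turns a velocity error into a throttle/brake command."""

    def __init__(self, kp=0.5, ki=0.1, dt=0.05, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def step(self, v_ref, v_measured):
        error = v_ref - v_measured
        self.integral += error * self.dt
        command = self.kp * error + self.ki * self.integral
        # positive commands map to throttle, negative ones to brake
        return max(self.out_min, min(self.out_max, command))
```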
Evaluation Metric
Autonomous vehicles are heterogeneous and complex systems, composed of a wide range of software components responsible for sensing, perception, decision-making, planning, control, and health-management systems. Different methodologies evaluate such systems. Unit tests aim to analyze each component individually seeking for failures and quantifying its performance according to some metrics (e.g. accuracy, recall and precision for classification algorithms [60,61]). On the other hand, integration tests examine the behavior of two or more components working together (e.g. obstacle detection and obstacle avoidance). Finally, system test evaluates the functioning of the whole system and all its components working concomitantly, and its performance can be summarized by quantitative or qualitative metrics [62,63]. However, no standard methodology assesses and compares the performance of the complete system for autonomous vehicles [64].
The CARLA Autonomous Driving Challenge (CADC) proposes a benchmark for the evaluation of autonomous driving systems, which rely on different sets of sensors and software architecture approaches. This competition runs the autonomous system, also known as the agent, in a simulated urban environment, where each scenario varies in the appearance and architecture of the city, traffic areas (e.g. freeways, urban scenes, residential districts, roundabouts, unsignalized intersections), route length, number of traffic participants, and weather conditions. Moreover, each route contains traffic situations based on the NHTSA pre-crash typology, such as control loss with no previous action, avoidance of unexpected obstacles, negotiation at roundabouts and unsignalized intersections, braking of the leading vehicle, and crossing an intersection while an opposing vehicle disobeys the traffic light.
To evaluate the agents' performance in each simulated scenario, the competition proposed a quantitative metric that gathers the percentage of each route completed and the infractions committed. It evaluates the performance of the entire system based on the question of how well the autonomous vehicle navigates from a starting point to a destination, obeying traffic rules and ensuring the safety of passengers and other traffic participants, while facing common and also unexpected traffic situations (e.g. occluded obstacles and loss of the vehicle's control). Equation (15) shows the metric, which is an average score of the percentage of completed routes minus the sum of all infractions. Table 4 displays the discount points for each traffic infraction evaluated.
score(a) = (1 / (R · N)) Σ_{j=1}^{R} Σ_{i=1}^{N} [ 100 · C(a, r_i) - I(a, r_i) ],    (15)

where a is the agent, R is the number of repetitions of each route, N is the number of routes, C(a, r_i) is the amount of route i completed, and I(a, r_i) are the points discounted from the score due to infractions in each route execution, with C(a, r_i) ∈ [0, 1] and I(a, r_i) ∈ N.
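Reading the metric as written above, the average score can be computed as in the sketch below; the conversion of the completed fraction into a percentage is an assumption consistent with the score ranges reported later.

```python
def challenge_score(completed, infractions):
    """Average route score over all repetitions and routes (cf. Equation (15)).

    completed:   completed[j][i] in [0, 1], fraction of route i finished in repetition j.
    infractions: infractions[j][i], discount points I(a, r_i) for the same run.
    """
    total, runs = 0.0, 0
    for comp_rep, infr_rep in zip(completed, infractions):
        for c, i in zip(comp_rep, infr_rep):
            total += 100.0 * c - i       # percentage completed minus infraction discounts
            runs += 1
    return total / runs
```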
Challenge and Experimental Results
To evaluate the CaRINA Agent, we present its performance in the competition and also in a simulation experiment using a publicly available set of routes (available at [65], in the file named routes devtest.xml). The competition involved 10 different routes from unreleased city maps, and each of them was executed three times under different weather conditions. On the other hand, the publicly available set was composed of 26 routes in three different public cities (Town02, Town04, and Town05).
Our agent was run on an Amazon compute node (AWS) with an NVIDIA Tesla K80 GPU (Graphics Processing Unit) and the kinetic version of the Robot Operating System (ROS) as the communication middleware. In the simulator experiment, it was run on a computer with Ubuntu 18.04, an Intel Core i7-4930K with 32GB of memory, an NVIDIA GeForce GTX TITAN X video card with 12GB of video memory, and the melodic version of ROS. Table 5 shows the result of CaRINA Agent in each track of the competition. The total score directly reflects the complexity of each track in the competition. The tracks of the Perception Heavy category (track1 and track2) showed the lowest average scores of completed routes, with 40.82 (track1) and 34.74 (track2), and also a high discount score due to traffic infractions: 16.8 (track1) and 13.43 (track2). Track3 and track4, in the Map-based category, achieved the highest scores for completing routes, meaning the agents were able to cover most of the path on each route and were therefore challenged by more traffic situations. Track4 was given the highest score among all tracks, since all perception information was available for the agent, which had to be concerned only with decision-making, planning, and control.
The CaRINA Agent scored 0.42 (track1), 0.39 (track2), 0.17 (track3), and 0.09 (track4) when calculating the ratio between the average infraction score and the average percentage of completed route for each track. These performances are in line with the complexity of each track, with the tracks in the Map-based category having considerably lower values than those of the Perception Heavy category. Table 6 shows the results of CaRINA Agent for the 26 routes of the simulation experiment. The total score in each track and the average score of completed routes are similar to the results of the competition (Table 5), and decrease as the complexity of each track increases. The ratios between the average infraction score and the average percentage of completed route for each track are 0.28 (track1), 0.27 (track2) and 0.1 (track3), indicating that the agents with fewer sensors and less reliable perceptual information committed more infractions, due to an inaccurate representation of the vehicle's surroundings or even a lack of information.
Traffic Infractions
The lower performance of tracks that used perception algorithms to create the surroundings representations (track1, track2, and track3) can be explained by their challenge in representing different urban scenarios. For instance, CARLA simulator's cities have different designs for traffic light poles and junction's layouts (Fig. 12). In each case, the autonomous system should identify the correct traffic light to be obeyed and the position at which it should stop. The stop position also depends on the position of the system's sensor, since the vehicle must continue to observe the traffic lights for updating its status.
The influence of an imprecise representation of the surroundings on the agent's performance is shown in Tables 7 and 8, which provide the number of traffic infractions CaRINA Agent committed in the competition and in the simulation experiment, respectively. The agents of track1, track2, and track3 committed more "Running a red light" infractions due to the necessity of detecting traffic lights and stop lines. In track4 this information is provided by the pseudo-sensor called ObjectFinder, which explains the small number of such infractions by the agent. In other autonomous vehicle architectures, High Definition Maps (HD maps) provide this kind of information, for example Lanelet2 [31], OpenDRIVE [67], and the Tsinghua map model [68]. Therefore, maps with augmented semantic and traffic information can improve the performance of autonomous systems if used in conjunction with perceptual information to increase the accuracy and reliability of the representation and understanding of the vehicle's surroundings [69].
Another challenging scenario for autonomous systems that rely heavily on perception algorithms is recognizing the navigable area of the environment. In track1 and track2, the agents were provided with only a sparse representation of the route, with waypoints and high-level commands, i.e. turn left, turn right, go straight, change lane left, and change lane right. We employed an interpolation approach over the sparse waypoints to create a denser representation of the trajectory. However, the diversity of road geometry layouts is still an important issue to be considered. Therefore, the agents of track1 and track2 committed several infractions of the "Invading lane in opposite direction" and "Invading a sidewalk" types (see Tables 7 and 8), which highlights the need for more reliable representations of the vehicle's path and navigable surrounding area, such as road segmentation using images and point clouds [70,71,72,73], and road boundary detection [74,75].

Besides path tracking and traffic rule compliance, the autonomous system must avoid collisions with obstacles to ensure the safety of its passengers and other traffic participants. It must detect the obstacle, understand the traffic scenario, assess the risk associated with the situation, make decisions, and perform the action to avoid the collision, which may involve changing its trajectory or executing an emergency braking. Some of the challenges related to obstacle avoidance are the different traffic scenarios, such as highways, intersections and roundabouts, which may require negotiation among traffic participants; the field of view of the sensors, which may create blind spots around the vehicle; and the behavior of other traffic participants, which may lead to unexpected and dangerous situations. We adopted a point cloud clustering approach for obstacle detection, and a corridor-based approach to assess collision risk. The latter creates a safety field around the vehicle's path for estimating the likelihood of collisions with obstacles near the vehicle. However, collision was the most frequent traffic infraction of CaRINA Agent in both the competition and the simulated experiment, with ratios of collisions per percentage of completed route of 1.25 (track1), 1.47 (track2), 0.5 (track3) and 0.35 (track4) in the competition, and 0.75 (track1), 0.84 (track2), 0.2 (track3) and 0.07 (track4) in the simulated experiment.
Collisions with static obstacles are related to sidewalk invasion, while collisions with pedestrians are related to occluded and unexpected obstacles that appear close to the vehicle, leaving it without enough time to avoid them. This reinforces the need for routines that carry out emergency maneuvers executed in a shorter time interval, even if this leads to more sudden braking [76,77], as well as for pedestrian behavior anticipation and trajectory prediction [78,79,80,81,82].
Collisions with other vehicles occur mainly at intersections and when the vehicle ahead decelerates or stops and the CaRINA Agent does not react in time to avoid a collision. In the first case, the number of collisions can be reduced by integrating components for decision-making into the system's architecture. Such components take into account negotiation among vehicles, which can be provided by vehicle-to-vehicle communication [83,84,85], or through the estimation of the intention and trajectory of other traffic participants to find the best time to cross an intersection [86,87,88,89]. Similarly, the latter situation can also be avoided by vehicle-to-vehicle communication [90], i.e., the vehicle ahead advises other vehicles of its action, or by predicting its intention from its actions over a period of time, which gives the ego-vehicle enough time to react [91,92,93,94].
CONCLUSIONS
This article has addressed the development of a software architecture for an autonomous vehicle designed towards meeting the CARLA Autonomous Driving Challenge (CADC). The vehicle used GPS, cameras and LiDAR for a reliable perception system, together with a planning and decision-making system based on Markov Decision Process for completing several routes in many traffic scenarios autonomously.
The LRM-B team successfully completed the challenge and ranked first in tracks 1, 3 and 4, and second in track 2. CaRINA Agent was able to drive autonomously for several kilometers, obeying basic and complex traffic laws, negotiating stop and traffic light intersections, and merging into simulated traffic using methods and algorithms well known in robotics and in the autonomous driving research field. Such methods and algorithms include obstacle detection using height maps, clustering, stereo vision, deep learning, path planners, motion controllers, extended Kalman filters for localization, decision-making, and a hazardous obstacle monitor.
The system developed by the LRM-B team for CADC proved robust and extensible. Initially, the approach was focused on track3; however, the architecture was soon extended towards meeting the requirements of the remaining tracks, since additional capabilities could be added or easily changed due to the modular architecture. The success of LRM-B Team was largely based on previous experience with our experimental platforms (CaRINA 1, CaRINA 2 and Smart Truck) [12].
Among the several novel aspects developed and tested during the challenge are the use of short-term motion prediction for risk assessment and of Distance-to-Collision for decision-making. Another contribution was the decision-making approach for longitudinal high-level control based on a probabilistic framework, the Markov Decision Process, which handled different traffic scenarios, such as traffic jams, highways, and signalized intersections. The use of 2D object detection and projection into the 3D point cloud (stereo and LiDAR) for finding the 3D poses of obstacles, traffic lights and signs must also be highlighted, since it improved decision-making.
Although successful in the CADC, our architecture and methods should be extended for a real autonomous vehicle, including other software components to improve risk assessment, such as 3D obstacle detection with shape and orientation estimation, multi-object tracking, intention prediction and long-term trajectory prediction of traffic participants. Improved software components related to decision-making, path planning and road boundary segmentation can be added to reduce lane and sidewalk invasions. Besides that, a decision-making component can be used for unsignalized intersections.
As future work, we aim at using visual navigation approaches for tracks with no prior knowledge of the map, (e.g., tracks 1 and 2 of the competition). Some of such approaches are imitation learning and end-to-end deep learning, which apply machine learning techniques to directly generate the control commands of a vehicle, i.e. brake, throttle and steering [95,96,97,98,99].
Finally, the architecture developed provides an interesting platform for future research in all fields of autonomous driving, and a starting point for more complex and advanced architectures. CARLA simulator has proven an interesting tool that, together with its development framework, can accelerate both prototyping and testing of autonomous vehicles in a relatively short time, and provides a benchmark for autonomous systems evaluation. | 2020-10-27T01:01:16.955Z | 2020-10-23T00:00:00.000 | {
"year": 2020,
"sha1": "99b9f2185f6ed9f3f2db5b2f1298fe7d8d07e347",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "99b9f2185f6ed9f3f2db5b2f1298fe7d8d07e347",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
235291928 | pes2o/s2orc | v3-fos-license | Pollution-Free Cultivation Techniques of Leafy Vegetables under the Background of Information Technology
China is one of the world's major agricultural powers, and vegetables account for a large share of its planting industry. With the improvement of people's living standards, being able to "eat safe Chinese cabbage" has become a focus of general attention. This paper mainly studies pollution-free cultivation techniques for leafy vegetables against the background of information technology. Aiming at the defects of existing expert systems for protected vegetable cultivation, such as having a single function and covering only a few vegetables, and at the lack of a publicly available web-based expert system for disaster mitigation and vegetable cultivation, the design and implementation of such a system are introduced. The development of the system is oriented towards agricultural technicians and farmers, providing them with a service platform for the safe production of vegetables in protected areas, the identification and prevention of major diseases and insect pests, scientific management, and so on; it therefore has strong practical value and development prospects at the grassroots level.
Introduction
Pollution-free vegetables refer to vegetable products whose production area or market has been certified by the relevant departments and labeled as pollution-free agricultural products, whose production environment, production process and product quality meet the relevant national or agricultural-industry standards for pollution-free products, and which have passed inspection by quality inspection institutions. The production of pollution-free agricultural products is the development direction of modern agriculture and an important symbol of a high-level, well-off society. It is needed for the strategic adjustment of the agricultural industrial structure and for improving and increasing farmers' income, and it is required for agriculture to participate in international competition and improve the market competitiveness of agricultural products after China's accession to the WTO [1]. Agricultural development has reached a new stage, with a structural surplus of agricultural products; with the acceleration of building a well-off society in an all-round way, the improvement of people's living standards and the change of diet structure, the need for healthy, pollution-free agricultural production is becoming more and more important. An expert system is an intelligent system that stores the knowledge of a large number of experts in related fields, so that human expert knowledge can be used to solve specific problems. Expert systems have been applied in various fields. The CALEX/PEACHES expert decision support system developed by Bajpai is used in peach orchard management, and the CALEX/RICE expert system is used for the management of rice production processes, updating real-time data on pesticides and weather conditions [2]. Neelam used the MICCS tool to develop a crop expert system mainly for the diagnosis of diseases and insect pests; applying expert systems to vegetable greenhouse cultivation and milk processing, and combining them with network technology to adapt to the Internet era, makes expert systems more practical and gives them better continuity [3].
The expert system for protected vegetable cultivation is an important branch of agricultural expert systems, and it mainly solves various problems in the production of protected vegetables. In order to solve these problems, a web-based expert system for vegetable disaster reduction and pollution-free cultivation was developed.
Overview of Pollution-Free Vegetables
The main pollution sources in the process of pollution-free vegetable production can be classified into air pollution sources, water pollution sources and soil pollution sources according to the main objects affected by their hazards. According to the classification of human social activities generated, it can be divided into industrial pollution sources, agricultural pollution sources, domestic pollution sources and traffic pollution sources, etc. [4].
(1) Air Pollution Air pollution comes from the space around the above-ground parts of vegetable production. It is mainly caused by the waste gas from industry, the waste gas from the combustion of energy, the waste gas from transportation and the pollution caused by pesticides and chemical fertilizers.
(2) Pollution of Irrigation Water Pollution of rivers, lakes and, in some areas, the groundwater is caused by the large amounts of untreated wastewater and waste residues discharged by industries and cities, as well as by the large amounts of fertilizers and pesticides used in agriculture; when this sewage is used to irrigate vegetable fields, the vegetables are polluted indirectly. The composition of these pollutants is very complex, including hydrogen sulfide, mercaptans, suspended fibers, oil, plankton, proteins, sugars, organic nitrogen, chlorides, carbonates, ammonia, sulfuric acid, phenol, inorganic salts, alkalis, fluorides, cyanides and various heavy metals [5].
(3) Soil Pollution There are two main channels of soil pollution: one is the soil pollution caused by urban and industrial "three wastes"; the other is the secondary pollution caused by the large-scale use of pesticides and fertilizers.
1) Heavy metal pollution. Among the different heavy metals, the tin content is the highest, followed by zinc and copper; the mercury content is moderate; and the lead, arsenic and chromium contents are the lowest. In addition, the absorption of heavy metals differs under different soil conditions [6]. By their nature, heavy metals are hard to degrade biologically or chemically. They are easily accumulated through the food chain in animals, plants and humans. In addition, they are highly toxic and pose a serious threat to the ecological environment, food safety and human health.
2) Pesticide pollution: Due to the heavy use of highly toxic and highly residual pesticides in vegetable production, the excessive use of low-toxicity pesticides, and the disregard for the safety interval of pesticide use, pesticide pollution is becoming more and more serious [7]. Pesticide residues in vegetables are closely related to the kind of vegetable, the kind of pesticide, the application method, the number of applications and the duration of application.
3) Traffic pollution: This mainly refers to the pollution caused by traffic in vegetable fields near highways, mainly including lead pollution and PAH pollution. The lead pollution comes from automobile exhaust, while the pollution by polycyclic aromatic hydrocarbons (PAHs) comes from asphalt pavement and the carbon black produced by automobile tires.
Through careful investigation and measurement of air, soil and water source, the base is selected in strict accordance with the requirements of pollution-free vegetable production standards.
Expert System Architecture
Expert system can share the knowledge of experts in many fields to users, and its level can reach or close to that of experts. The expert system has been widely used in many fields and achieved remarkable results.
(1) Overview of Expert System The so-called expert system can be understood from its two parts: "expert" and "system". The former reflects the authority of the expert system in its field: it systematically collects the experience and knowledge accumulated by experts in the field over many years to guide production practice. The latter reflects that, compared with a general computer program, the expert system has more levels and a more complete architecture, because it can not only collect and input knowledge, but also match the input information and output the corresponding conclusions according to its reasoning and decision-making functions [8]. Common expert systems are generally composed of a knowledge base, a reasoning machine, an interpreter, a human-computer interaction interface, a knowledge acquisition mechanism, a database and other modules.
Knowledge base: the cornerstone of expert system, storing the processed systematic and formatted knowledge of this field, which can be divided into fact base and rule base. To some extent, the quantity and quality of knowledge base determines the professional performance and problem-solving efficiency of the system. Therefore, the key to the construction of expert system lies in the perfection of knowledge base.
Knowledge acquisition institution: it is a medium between human knowledge and knowledge base to collect the ideas of experts in this field and the opinions of engineers from different departments into the database. In the process of expert system construction, how to transform knowledge is the key and difficult point of the work, and adaptability in the process of reasoning must be considered in the process of knowledge transformation.
Reasoning machine: Under the control of a certain reasoning strategy, the logical reasoning rules existing in the database are used to deduce and solve problems by matching the current problem with the existing data and information; this is the core of simulating how experts solve problems [9] (a minimal sketch of forward chaining is given after this overview). There are three main reasoning approaches, namely forward, backward and mixed reasoning. On the one hand, the reasoning machine and the knowledge base are designed separately and do not affect each other; on the other hand, the representation of knowledge in the knowledge base affects the reasoning performance of the reasoning machine.
Comprehensive database: A large amount of data information will be generated during the operation of the system. These intermediate information will be stored in the comprehensive database, including system input information and inference intermediate results. When the inference engine selects knowledge, it needs to interact with the comprehensive database, and the comprehensive data also provides the inference path for the interpretation mechanism. Therefore, the integrated data plays a connecting role in the system.
Human-computer interaction interface: a bridge between users and domain experts or knowledge engineers, mainly responsible for information interaction and information transmission. The expert system acquires domain expert information through the human-machine interface and transforms it to form expert system knowledge [10]. Or the user inputs matching information in the human-machine interface, inferences through the expert system, and outputs decision conclusions to the human-machine interaction interface.
Interpretation mechanism: it is responsible for annotating and explaining the final output conclusion, from which users can see the complete path of the matching reasoning. The interpretation mechanism is not only a supplement to reasoning and matching, but can also reveal the shortcomings of the reasoning process, which can be taken as a basis for improving the system's matching process.

(2) Judgment Method Operator of the Reasoning Machine Let the vector composed of the absolute values of the weights of the fields of an object be (|W1|, |W2|, ..., |Wi|); the operator of the object's judgment method then combines the certainty factors computed for all fields under the current domain. In the simplified form, the target value of the object is determined by the reliability of each field together with the judgment-method operator of that field.
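As a concrete illustration of the forward reasoning strategy described in the reasoning machine above, the sketch below fires rules whose premises are already in the fact base until nothing new can be derived. The rule contents are invented examples, not entries from the system's actual knowledge base.

```python
def forward_chain(facts, rules):
    """Minimal forward-chaining inference over (premises, conclusion) rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # fire a rule when all of its premises are known and it adds a new fact
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical pest-diagnosis rules for a leafy-vegetable knowledge base.
rules = [({"yellowing leaves", "white insects on leaf underside"}, "suspect whitefly"),
         ({"suspect whitefly"}, "recommend yellow sticky traps")]
print(forward_chain({"yellowing leaves", "white insects on leaf underside"}, rules))
```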
Production Environmental Control
Good production environment is the premise and foundation of pollution-free vegetable production. According to relevant standards, strictly select and control the environmental quality of vegetable base, and grasp the first stage of pollution-free vegetables.
The provincial agricultural environment monitoring station and the municipal environmental station were entrusted to carry out sampling and a comprehensive investigation of the base, with reference to the secondary level of the national soil environmental quality standard, the national farmland irrigation water quality standard, the national atmospheric environmental quality standard, the maximum allowable concentrations of air pollutants for crop protection, and the requirements of pollution-free vegetable production.
Pollution-Free Cultivation Techniques
Compound biological fertilizer is a compound fertilizer made of three kinds of beneficial soil microorganisms, namely nitrogen bacteria, phosphorus bacteria and potassium bacteria, together with organic matter, inorganic nutrients and trace elements.
The test was carried out in a contracted field in this city, and 7 treatments were set up for the test. The basal fertilizer of each treatment was applied 10 days before transplanting.
The test was carried out in a contracted field in this city, and 7 treatments were set up for the test. The basal fertilizer of each treatment was applied 13 days before transplanting. As shown in Figure 1, the yield difference among the treatments is small. The main reason is that the growth period of Chinese cabbage is short; in addition, with the high temperature at that time, the nutrients released by the soil itself and the nutrients provided by the fertilizers could meet the growth needs of the Chinese cabbage, so the yield difference among the treatments was small.
Experimental Results of Pollution-Free Cultivation of Vegetables
(2) Nitrate Content As shown in Table 1, the treatments in which Longtai compound fertilizer was applied entirely as basal fertilizer and in which soybean-meal organic fertilizer was applied entirely as basal fertilizer had low nitrate contents, of 2174.4 and 2026.5 mg/kg respectively. This shows that when Longtai compound fertilizer or soybean-meal organic fertilizer is applied entirely as basal fertilizer, or Longtai compound fertilizer is combined with compound fertilizer as basal fertilizer, or Longtai compound bio-fertilizer is supplemented with urea topdressing, the nitrate concentrations are significantly lower than in the control fertilizer treatment.

Figure 2 Experimental yield comparison of Chinese cabbage

As shown in Table 2 and Figure 2, the yield of treatment 3 was the highest, followed by treatment 4. In treatment 1, the biological compound fertilizer alone was used as the base fertilizer. Because the growth period of Chinese cabbage is long and a large amount of fertilizer is required, the nutrients provided by the compound biological fertilizer and the nutrients fixed or decomposed by the three kinds of bacteria were not enough: not only did the old leaves wilt first in the later growth stage, but the yield was also the lowest. It can be seen from treatment 6 that if the phosphorus and potash fertilizer in the base fertilizer can basically meet the growth needs of Chinese cabbage, additional nitrogen and potash fertilization at the heading stage has no yield-increasing effect; on the contrary, the yield is lower than that of the control group.
Conclusions
This paper has introduced a web-based expert system for disaster mitigation and pollution-free cultivation of the main vegetables, together with the design and implementation of the system. Vegetable cultivation, from seed selection to harvest, is divided into several phases; for each phase, evaluation indices are put forward, and through analysis the established indices of the stage and the remedial measures to be taken at that stage are given, so that vegetable cultivation can be managed more scientifically. However, the system needs to be further expanded and improved in the following aspects: the knowledge base of the system is not complete enough, the database does not yet contain enough data, and the work of knowledge sorting and data accumulation needs to be further strengthened. In the next step, if the automatic acquisition of knowledge can be realized, the system will be more complete. | 2021-06-03T01:23:42.576Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "fc9388922262f7418e5df21198e17e72971e4744",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1881/3/032004",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "fc9388922262f7418e5df21198e17e72971e4744",
"s2fieldsofstudy": [
"Environmental Science",
"Computer Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Physics"
]
} |
249945871 | pes2o/s2orc | v3-fos-license | NUMERICAL COMPUTATION OF LIGAND AND SIGNAL ASSOCIATED TO INVADOPODIA FORMATION
Abstract Invadopodia are protrusions that are commonly spotted at the plasma membrane of the invasive cancer cells. In forming invadopodia, several molecular interactions are involved such as the ligand, extracellular matrix (ECM), matrix metalloproteinases (MMPs), actin
INTRODUCTION
As mentioned by [1], [2], invadopodia are protrusions that have the ability to open a pathway for the motility of cancer cells. These structures are directly involved in metastasis and, consequently, in cancer invasion. The molecular interactions involved in invadopodia formation include the polymerization of actin, the degradation of the ECM by the MMPs, the creation of ligand, the signaling process through membrane receptors, and the delivery of MMPs to the invasion front. According to [3], the force from actin polymerization and the chemical activity of matrix degradation are the causes of the protrusive structure of invadopodia. The MMPs play the role of matrix-degrading enzymes (MDEs) that degrade the ECM, while the polymerization of actin produces the force that pushes the membrane and enables the growth of the invadopodium.
Invadopodia have been studied by many researchers due to their important role in the metastasis process [4]-[7]. In an effort to investigate the formation and maturation of invadopodia from a mathematical point of view, [8] introduced a positive feedback loop to explain the process of invadopodia formation. On that account, the mathematical modeling approach was explained by emphasizing the equations for ECM, MMPs, ligand, and actin. Different rate constants for the MMPs were considered to study the level of invasiveness of a cancer cell, and the study revealed that the higher the rate constant for the MMPs, the more invasive the cancer cell became. However, a problem appeared as the region of actin became disconnected: because of this, actin was observed outside the cancer cell, which is not in line with the biological fact that actin must lie inside the cancer cell.
To solve the actin disconnection problem, [9], [10] implemented a new domain with a free boundary interface to separate the activities in the inner and outer regions of the cell. A new variable was also suggested, namely the signal transduction, which lies inside the cell. The numerical results showed that, as time increased, the position of the free boundary and the density of the signal distribution increased, and the concentration of the signal was higher on the interface. From this attempt, the invadopodium should be formed due to the expansion of the plasma membrane.
Another exploration on the invadopodia that emphasized signal transduction has been performed by [11]. Two-dimensional signal transduction is carried out and from this approach, the invadopodia through the formation of outward protrusions have been disclosed. Besides, [12] also studied the formation of the protrusions by implementing ligand and signal transduction in their mathematical model. In their study, the cell protrusion is accounted as the free boundary by emphasizing the quasi-static mathematical model with the Dirichlet and Neumann condition applied to the inner and outer of the cell, respectively. Further, [13] has studied the twodimensional time-independent signal and ligand with the Dirichlet boundary condition for both regions.
Generally, the changes of the interface position from the polymerization activity led to the appearance of invadopodia. Hence, it is precise to account for the interface as a free boundary. Thus, the level set is an appropriate method to locate the interface location. Furthermore, the regions of the inner and outer of the cell can be determined using the method of level set.
As mentioned in [14], [15], the level set method is reasonable especially for the boundaries and interfaces tracking in the progression of time. An investigation on the level set method to locate the moving fronts in multi-physics problems is presented by [16]. The techniques of the level set to several problems are stated in [17]. Besides, the method of level set to highlight the changes of interface for the pure substance such as melting of ice or called Stefan problem have been implemented by [18].
Studies in [19]- [21] employed the method of level set to model the growth and interaction of multiple dendrites in solidification. Apart from that, [22], [23] investigated the corrosion effect on the geometry of metals with the implementation of the level set method. Meanwhile, the implementation of the method of level set in a sense of biology has been discussed in [24] to determine the development of cell growth. Thus, from the previous studies, the level set method is appropriate to deal with the interface changes.
Hence, by considering the above-mentioned studies, the formation of invadopodia is investigated by considering the mathematical model of timedependent ligand and signal transduction. The location of the interface is captured with the zero-level set function approach. Besides, the equations for ligand and signal transduction are solved numerically using finite difference methods. Apart from that, the convergence results are given to explain the effectiveness of the numerical computation.
Mathematical Modeling
In this section, four key variables are focused on to interpret the molecular interactions: the ligand, the signal, the interface velocity, and the velocity extension. The molecular interactions between the variables are described in the square domain pictured in Figure 1, which is partitioned into the extracellular region, the interface, and the intracellular region. The ligand binds with the membrane-associated receptor, particularly the epidermal growth factor receptor (EGFR), and stimulates the signal transduction. The signal is one of the processes pivotal for the formation of invadopodia; it is not mentioned in [8] but was introduced in [9], [10], [12]. The stimulation of the signal transduction in the intracellular region leads to the up-regulation of the MMPs.
Figure 2 The molecular interaction

Further, the signal transduction stimulates the activity of actin polymerization. The polymerization process of actin creates the force that pushes the interface towards the invasion front. Hence, the gradient of the inner signal is used for the interface velocity. Meanwhile, a partial differential equation approach is used for the velocity extension, in order to propagate the velocity information from the interface to the whole domain. Since the level set method is implemented, both the velocity on the interface and the velocity over the whole domain are important. From the above explanations, the equations for the invadopodia formation are stated below.
Numerical Methods
In this section, the numerical methods to solve the equations above are presented. First, the explanation of the level set method mostly on tracking the position of interface and consequently determining the position for ligand and signal transduction has been discussed. Next, the finite difference and ghost fluid scheme are described to discuss the procedure to deal with the availability points.
Level Set
In the numerical computation, the initial level set function is defined as a circle equation of a given radius (see Equation (6)); the circle represents the plasma membrane of an individual invasive cancer cell. The formation of the invadopodia is the consequence of the motion of the interface; therefore, the zero level set is specified for the interface in order to detect its motion (refer to Equation (7)). In the meantime, the level set method is used to determine the intracellular and extracellular regions: the intracellular region, where the signal transduction is found, is defined by a negative value of the level set function, while a positive value of the level set function locates the extracellular region, where the ligand accumulates.
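The circular initialization and the sign-based splitting of the domain can be sketched as below on a uniform grid; the grid size, domain extent and radius are illustrative values only.

```python
import numpy as np

# Signed-distance level set for a circular membrane of radius R centred in a square
# domain: phi < 0 marks the intracellular region (signal), phi > 0 the extracellular
# region (ligand), and phi = 0 the interface.
N, R = 101, 0.3
x = np.linspace(-0.5, 0.5, N)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - R

inside = phi < 0      # grid points where the signal equation is solved
outside = phi > 0     # grid points where the ligand equation is solved
```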
Finite Difference and Ghost Fluid
After the regions for intracellular, extracellular and interface have been determined, the second part that is challenging is the method to solve for the points involved. There are two types of points that have to be emphasized which are regular and neighboring points. The point that is distant from the interface is called regular. Apart from that, the point that is very near to the interface is called the neighboring point.
The neighboring points are also determined if their adjacent point is separated by an interface.
Figure 3 Neighboring points
The second-order centered finite difference approach is suited to the regular points because, in the finite difference approximation, the five-point stencil lies in the same region. Nevertheless, it is not preferable to discretize a neighboring point in the same way since, of the five points involved, one of them is separated by the interface. The neighboring points are depicted in Figure 3. In this figure, the regions for the ligand and the signal transduction are separated by the interface. Focusing on the signal position, the point on the ligand side is labeled as the ghost value. Here, the ghost fluid method with linear extrapolation is selected. The scheme of the ghost fluid method for the four directions has been described in [25]. Hence, Equation (8) is the scheme of the ghost fluid method, specifically for the spatial derivative at a signal transduction position. The distance of the point to the interface appears in Equation (8) and is computed using the formula in Equation (9).
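A minimal sketch of this one-sided stencil is given below: the fractional distance to the interface is estimated from the level set values of the two nodes it separates, and the neighbor across the interface is replaced by a ghost value extrapolated linearly through the node value and the interface value. The variable names, and the exact forms used in Equations (8) and (9), are assumptions made for illustration.

```python
def interface_fraction(phi_i, phi_ip1):
    """Fraction of the grid spacing between node i and the interface,
    estimated from the level set values of the two nodes it separates."""
    return abs(phi_i) / (abs(phi_i) + abs(phi_ip1))

def ghost_second_derivative(u_im1, u_i, u_interface, theta, h):
    """Second derivative at node i when the right neighbour lies across the interface.

    The ghost value is the linear extrapolation through (x_i, u_i) and the interface
    point located at x_i + theta*h, evaluated at the neighbour position x_i + h.
    """
    u_ghost = u_i + (u_interface - u_i) / theta
    return (u_im1 - 2.0 * u_i + u_ghost) / h**2
```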
RESULTS AND DISCUSSION
In this section, the results and discussion are presented in two subsections. First, the formation of invadopodia through the existence of protrusions on the interface is presented. In the next part, the convergence results for the variables involved, namely the level set, the ligand, and the signal transduction, are described. For simplicity, the ligand and signal diffusivity coefficients in Section 2.1 are set equal to one. Figure 4 depicts the formation of invadopodia observed at time t = 1 (510 iterations). Figure 4(a) displays the accumulation of ligand in the extracellular region, which is most clearly seen in the area of the interface. Because of the binding between the ligand and the membrane receptor, the signal transduction is stimulated and diffuses into the intracellular region, as illustrated in Figure 4(b). In these two figures, higher concentrations of ligand and signal transduction are observed on the interface, where the MMPs have been concentrated. Consequently, an outward protrusion appears at the top of the interface, as illustrated in Figure 4(c). This occurrence is due to the interactions inside and outside the cell, which consequently move the interface. Here, the invadopodium has been formed, opening the passage for the cancer cell to invade other parts of the body.
The Convergence Results
To compute the convergence results, the general shape of the protrusions shown in Figure 4(c) is taken as the reference solution. Hence, the shape of the protrusions is used as the indication of when to end the simulation. The convergence results are provided in Table 1 and Figure 5. The maximum norms of the level set, the ligand and the signal transduction have been observed and compared at two different values of the space step, h.
Two different space steps are selected to observe the effect of the grid size with the numerical convergence through the simulation on the variables involved. The results obtained are relatively in good agreement. Based on the simulation, the lower grid size, resulting in a lower error. It is also noticed that significant differences in signal transduction error can be observed when using a smaller grid size.
Figure 5 Convergence curve
The convergence profiles of the level set, the ligand and the signal transduction at two different values of the space step are depicted in Table 2 and Figure 6. Here, the numerical errors are examined as the number of iterations increases. Figure 6(a) illustrates the variation of the error of the level set at the two space steps; it shows that the error at h = 0.2 is higher than that at h = 0.1. These findings coincide with the idea that the smaller the grid size, the smaller the error. Meanwhile, Figure 6(b) and Figure 6(c) show the errors for the ligand and the signal transduction. In these two figures, only a slight difference between the errors for the ligand and the signal transduction is noticed. Nevertheless, the errors decrease as the iterations increase.
CONCLUSION
This paper investigated the invadopodia formation from the existence of protrusions on the interface. The protrusions formed are the consequence of the physical force created from the polymerization of actin. In this paper, the mathematical modeling is taken as the heat equation to represent the diffusion of ligand and signal transduction to the extracellular and intracellular regions, respectively. The MMPs concentration is assumed as the function and this is said as the starting point for the formation of invadopodia. Furthermore, the velocity of the interface from the polymerization of actin activity finally moved the interface.
The zero-level set function is used to detect the position of the interface. Furthermore, the intracellular and extracellular regions can be differentiated using the different signs of level set function. On the other hand, the second-order centered finite difference and ghost fluid with linear extrapolation methods are applied for the regular and neighboring points, respectively. Thus, with the aid of the abovementioned methods, the formation of actin-rich protrusions from the movement of the free boundary interface has been observed.
In the meantime, the distributions of ligand and signal transduction have been obtained to perceive the activity in the extracellular and intracellular regions. Furthermore, the convergence results for the level set, ligand, and signal transduction are observed in the maximum norm approach.
Results showed that the invadopodia are formed due to the activities on the intracellular and extracellular regions. Also, the convergence results gave good outcomes since the error is observed to be smaller as the grid size is decreased. In the future, the equations for MMPs, actin, and membrane receptor should be included in the mathematical modeling due to their important role in the invadopodia formation. | 2022-06-23T15:10:31.753Z | 2022-05-30T00:00:00.000 | {
"year": 2022,
"sha1": "c2fc47cca377d4ded699edb5b495d6bcc6854322",
"oa_license": null,
"oa_url": "https://journals.utm.my/jurnalteknologi/article/download/17901/7947",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "68d550c23c92509f6483a37ae2be2edb64f4f351",
"s2fieldsofstudy": [
"Mathematics",
"Medicine",
"Computer Science"
],
"extfieldsofstudy": []
} |
181918 | pes2o/s2orc | v3-fos-license | Cross-sectional association between serum concentrations of n-3 long-chain PUFA and depressive symptoms: results in Japanese community dwellers
The effect of n-3 long-chain PUFA (n-3 LCPUFA) on depression in healthy subjects is unclear, and most of the previous studies have focused on populations eating Western diets with lower fish intake. The present study investigated the association between blood levels of n-3 LCPUFA and depressive symptoms in Japanese community dwellers with higher n-3 LCPUFA blood levels. A cross-sectional study was conducted from 2006 to 2008, including 1050 men and 1073 women aged 40 years or older from the National Institute for Longevity Sciences – the Longitudinal Study of Aging. The Center for Epidemiologic Studies Depression Scale (CES-D) was used to assess depressive symptoms. Multiple logistic regression analysis was performed to estimate the OR and 95 % CI for a CES-D score ≥16. Serum concentrations of n-3 PUFA, but not n-6 PUFA, were inversely associated with depressive symptoms. Compared with the lowest quintile, the adjusted OR for serum EPA at the fourth and fifth quintiles were 0·55 (95 % CI 0·35, 0·85) and 0·64 (95 % CI 0·42, 0·98), respectively, and at the fifth quintile for DHA it was 0·58 (95 % CI 0·37, 0·92), for the presence of depressive symptoms (P for trend=0·013 and 0·011, respectively). Serum levels of EPA and DHA were inversely associated with depressive symptoms in Japanese community dwellers with higher blood levels of n-3 LCPUFA, suggesting that n-3 LCPUFA intakes corresponding to higher levels in a Japanese population may have implications for a lower prevalence of depression.
Depression is a serious public health problem worldwide (1) . According to the seventeen nation World Mental Health Survey, approximately 5 % of people report having experienced at least one episode of depression during their life (2) . By 2030, depression is predicted to become one of the top three causes of disability-adjusted life years lost (3) .
Management and prevention of depression are becoming increasingly important, especially for the middle-aged and the elderly. Depression in the middle-aged population results in economic losses and social burden; this social burden arises because the roles of middle-aged people in a community, including working and child rearing, keep growing. In addition, depressive symptoms are inter-related with multiple chronic conditions and age-related cognitive impairment (4) , as well as prodromal signs of cognitive decline in patients with early dementia (5,6) . Thus, depression has attracted attention as a potential risk factor for dementia. Therefore, it should be recognised as a problem that needs to be addressed immediately in developed countries with ageing societies.
It is well known that diet and nutrition influence depression. EPA and DHA, the two major n-3 long-chain PUFA (n-3 LCPUFA) derived from fish, are essential for the maintenance of cellular membrane functions via determination of membrane fluidity, and they affect neurotransmitter release and modify receptor binding (7) . Previous experimental studies have shown various mechanisms of these fatty acids, including anti-inflammatory (8) , antioxidative (9) , neuroprotective (10) and neurogenesis effects (11) . These factors may also be important in protecting against depression and its symptoms (12) . Thus, we hypothesised that n-3 LCPUFA may be able to influence the pathophysiology of depression.
According to a recent meta-analysis that took into account the heterogeneity among populations of randomised clinical trials, the use of n-3 LCPUFA was effective in patients with major depressive disorder (MDD) and in depressed patients without a diagnosis of MDD (13) . A meta-analysis of case-control studies, using blood levels as an indicator of n-3 LCPUFA levels, also reported significantly lower levels of EPA, DHA and total n-3 LCPUFA to be associated with depression (14) . These findings of associations in a general population are important in terms of preventing depression.
Previous ecological studies have shown that countries with high fish consumption, such as Japan, were associated with lower age-standardised disability-adjusted life-year rates for depressive disorders (15) . However, studies on markers of dietary intake of n-3 LCPUFA in the same area/region provided contrasting results. To the best of our knowledge, only a few observational studies have examined the association between depression and n-3 LCPUFA, evaluated by measuring blood samples or intake estimation, in a general population with relatively high blood levels of n-3 LCPUFA (16,17) . The lack of evidence regarding depressive symptoms in countries with high fish consumption may be explained in part by levels of n-3 LCPUFA being high enough to interact with the expression of depression.
The aim of our study was to investigate the association between blood levels of n-3 LCPUFA and depressive symptoms in a cross-sectional analysis of a community-dwelling population with higher n-3 LCPUFA blood levels. We specifically focused on middle-aged and elderly people, who have a higher risk for depression. We also examined other fatty acids comparatively, because it is necessary to ascertain whether findings regarding any association are specific to n-3 LCPUFA.
Methods
Data for the present study were obtained from the National Institute for Longevity Sciences - the Longitudinal Study of Aging (NILS-LSA), a population-based survey of ageing in Japan. The participants in the NILS-LSA include about 2300 selected middle-aged and elderly subjects (18). The normal ageing process has been researched over time using detailed questionnaires and medical check-ups, anthropometric measurements, physical fitness tests and nutritional assessments. Participants in the NILS-LSA included randomly selected age- and sex-stratified individuals from the pool of non-institutionalised residents of the National Center for Geriatrics and Gerontology's neighbourhood areas of Obu City and Higashiura Town in Aichi Prefecture. Details of the NILS-LSA study have been reported elsewhere (18).
Subjects
Subjects in this cross-sectional analysis were derived from the fifth-wave examination of the NILS-LSA. The fifth-wave examination, which comprehensively measured various fatty acids, comprised 2419 participants aged 40 years and older, and was started in July 2006 and completed in July 2008. After the exclusion of 296 subjects because of fasting for <12 h (n 53), history of and current dementia (n 6) or lack of available data for analysis (n 237), a total of 2123 subjects (1050 men, 1073 women) were included in the analysis (Fig. 1). Self-reported history of previously diagnosed dementia was collected at the fifth-wave interview.
The study was approved by the Committee for Ethics of Human Research of the National Center for Geriatrics and Gerontology (no. 249). Written informed consent was obtained from all the subjects.
Serum fatty acid levels
Venous blood was collected early in the morning after at least 12 h of fasting. Blood samples were centrifuged at 3500 g for 15 min. Serum was separated and immediately frozen at −80°C. Total lipids in serum were extracted and purified by the Folch procedure. Serum fatty acid concentration was measured using a GC at a clinical laboratory (SRL). In brief, fatty acid residues in the lipids were transmethylated and analysed using a GC (GC-17A; Shimadzu) with a capillary column Omegawax 250 (Supelco). Absolute amounts of each fatty acid were quantified by standards as weight concentrations (μg/ml).
Depressive symptoms
Depressive symptoms were assessed using the Japanese version (19) of the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire (20). Participants completed the CES-D questionnaire, which assesses the depressed state during the previous week, approximately 2 weeks before each exploratory visit for the fifth-wave examination. The CES-D comprises twenty questions in four subscales: somatic and retarded activity, depressed affect, positive affect and interpersonal relations. Scores range from 0 to 60, with lower scores indicating fewer depressive symptoms. This scale has been reported to be a valid and reliable measure of depressive symptoms in the elderly (21). A cut-off score of ≥16 is used to identify subjects with relevant depressive symptoms (20). We defined a CES-D score ≥16 as representative of depressive symptoms.
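As a minimal illustration of this scoring rule, the sketch below (Python; purely illustrative, not the study's analysis code) sums the twenty item scores (each 0-3), reverse-scores the four positive-affect items as is standard for the CES-D, and applies the ≥16 cut-off; the item indices used for reverse scoring are an assumption for illustration.

def cesd_total(item_scores, positive_affect_items=(3, 7, 11, 15)):
    """Compute a CES-D total from twenty item scores (each 0-3).

    positive_affect_items are 0-based indices of the reverse-scored items
    (assumed here to be items 4, 8, 12 and 16 of the scale).
    """
    if len(item_scores) != 20:
        raise ValueError("CES-D has twenty items")
    total = 0
    for i, score in enumerate(item_scores):
        total += (3 - score) if i in positive_affect_items else score
    return total  # ranges from 0 to 60

def has_depressive_symptoms(item_scores, cutoff=16):
    """Apply the CES-D >= 16 definition of depressive symptoms used here."""
    return cesd_total(item_scores) >= cutoff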
Other measurements
Data regarding medical history of hypertension, hyperlipidaemia, ischaemic heart disease, stroke, diabetes and dementia (yes/no), education (≤9, 10-12, or ≥13 years of school), employment status (no occupation or household labour/ non-regular employment/regular employment), marital status (yes/no), sex, age and smoking status (yes/no) were asked in the survey questionnaires. This information was obtained via a self-completed questionnaire provided approximately 2 weeks before the exploratory visit; questionnaires were collected on the same day as blood samples. BMI was calculated as weight in kilograms divided by the square of height in metres (kg/m 2 ). Both weight and height were actual measured values obtained on the same day that blood samples were collected. Alcohol intake (ml/d) during the previous year was assessed using a FFQ through interviews with well-trained dietitians. The number of steps walked per day was used to assess physical activity. Participants wore a uniaxial accelerometry sensor (Lifecorder; Suzuken) for a 7-d period (except while sleeping or bathing) for 2 weeks after the exploratory visit. We calculated the mean number of steps from 5-d records (the maximum and minimum values were excluded).
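For concreteness, the following short sketch (Python; variable names are illustrative, not from the study) computes BMI from measured weight and height, and the mean daily step count from seven daily records after discarding the maximum and minimum values, mirroring the 5-d mean described above.

def bmi(weight_kg, height_m):
    """BMI = weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

def mean_steps(daily_steps):
    """Mean of 7 daily step counts with the max and min records excluded."""
    if len(daily_steps) != 7:
        raise ValueError("expected seven daily records")
    kept = sorted(daily_steps)[1:-1]   # drop one minimum and one maximum
    return sum(kept) / len(kept)       # mean of the remaining five days

print(round(bmi(65.0, 1.60), 1))       # e.g. 25.4 kg/m2
print(mean_steps([7200, 8100, 6900, 10300, 7800, 8800, 5600]))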
Statistical analysis
All the statistical analyses were conducted using Statistical Analysis System version 9.3 (SAS Institute). The confounding variables were sex, age, BMI, education level, marital status, smoking status, alcohol consumption, physical activity, employment status and history of hypertension, hyperlipidaemia, ischaemic heart disease, stroke and diabetes. Differences in characteristics and serum fatty acid concentrations between subjects with and without depressive symptoms were assessed using the χ 2 test (categorical variables) and Student's t test (continuous variables). For analysis of the association between serum fatty acid concentrations and depressive symptoms, we carried out multiple logistic regression analysis and estimated the OR and 95 % CI of depressive symptoms for quintiles of serum fatty acid concentrations, using the lowest quintile category as the reference. Trend associations were assessed by entering dummy variables assigned to the quintile of serum fatty acid concentrations. In model I, we adjusted for age and sex. Model II was further adjusted for the above-described confounding variables. A sub-analysis by sex also was performed for EPA and DHA. Two-sided P values <0·05 were regarded as statistically significant.
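A sketch of this type of analysis (Python with pandas and statsmodels rather than SAS; all column and variable names are hypothetical assumptions) forms quintiles of a serum fatty acid, fits a logistic model with the lowest quintile as reference, converts coefficients to OR with 95 % CI, and tests for trend by entering the quintile as an ordinal term.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to hold one row per subject with hypothetical columns:
# 'depressed' (1 if CES-D >= 16), 'epa' (serum EPA, ug/ml), 'age', 'sex', 'bmi', ...
def quintile_logit(df, exposure="epa", outcome="depressed",
                   covariates=("age", "C(sex)", "bmi")):
    df = df.copy()
    # Quintiles of the serum concentration; the lowest quintile (0) is the reference.
    df["q"] = pd.qcut(df[exposure], q=5, labels=False)

    rhs = " + ".join(["C(q, Treatment(reference=0))", *covariates])
    fit = smf.logit(f"{outcome} ~ {rhs}", data=df).fit(disp=False)

    # OR and 95 % CI for each term (quintile rows are the ones of interest).
    or_table = pd.concat([fit.params, fit.conf_int()], axis=1)
    or_table.columns = ["OR", "ci_low", "ci_high"]
    or_table = np.exp(or_table)

    # Trend test: treat the quintile as an ordinal (numeric) variable.
    trend = smf.logit(f"{outcome} ~ q + " + " + ".join(covariates), data=df).fit(disp=False)
    return or_table, trend.pvalues["q"]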
Results
The study subjects included 1050 men (49·5 %) and 1073 women (50·5 %). In total, the mean age was 60·3 (SD 12·3) years, and 266 subjects (12·5 %) were identified as having depressive symptoms (CES-D scores ≥16). Characteristics according to subjects with and without depressive symptoms are shown in Table 1. Subjects with depressive symptoms were significantly more likely to be unmarried, have fewer years of education and have a higher rate of past stroke compared with subjects without depressive symptoms. Serum concentrations of principal fatty acids based on the presence or absence of depressive symptoms are shown in Table 2. There were no significant differences in any serum fatty acid concentrations between groups. However, several fatty acids showed marginally significant differences between subjects with and without depressive symptoms, including EPA (P = 0·068), n-3 PUFA (P = 0·072) and n-3 LCPUFA (P = 0·078). The mean n-3 LCPUFA concentrations of subjects with and without depressive symptoms were 264·1 (SD 101·2) μg/ml and 276·0 (SD 103·4) μg/ml, respectively. Tables 3 and 4 present the OR and 95 % CI of depressive symptoms for quintiles of serum fatty acid concentrations, using the lowest quintile category as the reference. In Table 3, SFA, MUFA, PUFA and n-6 PUFA were not significant, whereas n-3 PUFA showed an inverse association in all models and the n-3:n-6 ratio did so partially. Individual fatty acids of the PUFA series are shown in Table 4. In the crude model, EPA and DHA showed an inverse association with depressive symptoms (P for trend = 0·021 and 0·039, respectively). The crude OR for EPA at the fourth quintile was lower than that for the lowest quintile (OR 0·58; 95 % CI 0·38, 0·88). In Model I, adjusting for sex and age, the OR for EPA at the fourth and fifth quintiles and for DHA at the fifth quintile were significantly lower compared with reference values. Furthermore, these significant associations were maintained even after adjusting for more covariates, including age, sex, BMI, education level, marital status, smoking status, alcohol consumption, physical activity, employment status and history of hypertension, hyperlipidaemia, ischaemic heart disease, stroke and diabetes (Model II).
Discussion
In the present study, inverse associations between serum EPA and DHA concentrations and depressive symptoms were observed in Japanese middle-aged and elderly community-dwelling subjects. This is the first study demonstrating the inverse association in community-dwelling Japanese people with higher blood levels of n-3 LCPUFA. The lack of an association for the other fatty acids indicates that this association is specific to n-3 LCPUFA. The prevalence of depressive symptoms among subjects was 12·5 % in this study. In another survey of nearly 5000 Japanese elderly aged 65 years and older, Yokoyama et al. (22) showed that the prevalence of depression was 13·8 %. That prevalence is approximately the same as in the present study, although depression in their study was evaluated according to the eleven-item short form of the CES-D.
Subjects in the present study were Japanese and had higher serum levels of n-3 LCPUFA compared with populations in Western countries (23)(24)(25). Median values for serum EPA, DHA and n-3 LCPUFA were 69·1, 165·6 and 261·3 μg/ml, respectively. Japanese consume a large amount of fish and seafood (26), the main sources of n-3 LCPUFA (24), and n-3 LCPUFA intake correlates well with blood levels (27). In a previous study, Japanese middle-aged and elderly subjects also showed similar levels of serum n-3 LCPUFA (28). In American middle-aged and elderly subjects with hypercholesterolaemia (n 105), the median values for serum EPA and DHA were 18 and 46 μg/ml, respectively (29). In Brazilian middle-aged subjects (<50 years), the median value for serum n-3 LCPUFA was 150 μg/ml (30). Although this is not a strict comparison, because we determined non-fractionated serum concentrations in our study, the mean compositions for erythrocyte phospholipids of EPA and DHA in another Japanese elderly population were 2·3 and 6·7 %, respectively (31), and 0·3 and 3·7 % in elderly Italian females (32). The n-3 LCPUFA blood level in Japanese subjects is more than two times higher than that of subjects from Western countries.
A recent meta-analysis was conducted in countries with populations with lower blood levels of n-3 LCPUFA, and the results showed lower blood levels of n-3 LCPUFA in patients with depression (14). An inverse association between serum n-3 LCPUFA levels and depressive symptoms in community-dwelling people with higher blood levels of n-3 LCPUFA has not been observed so far. We considered that the lack of evidence regarding depressive symptoms in populations with higher blood levels of n-3 LCPUFA may depend on levels of n-3 LCPUFA being high enough to interact with the expression of depression. However, we found significant trends for these inverse associations between serum n-3 LCPUFA and depressive symptoms in the present study.
To the best of our knowledge, only two studies investigated the association between serum levels of n-3 LCPUFA and depressive symptoms in Korean or Japanese subjects with high blood levels of n-3 LCPUFA (16,17). In the Japanese study, no association was observed between blood n-3 LCPUFA levels and depressive symptoms, whereas a significant association was noted in Korean subjects. The Korean study used a case-control design with eighty depressive patients and eighty controls, including a higher percentage of women. On the other hand, the Japanese study included 113 healthy men. The sex ratio of the samples may partially influence the results. According to three previous studies of dietary intervention in Western countries, the association appears to be observed more in women than in men (33)(34)(35). The NILS-LSA is a cohort study that has approximately 280 subjects in each group classified by age decade and sex, and thus roughly half the subjects are women. We showed a significant association for EPA in women through a sub-analysis stratified by sex, and assumed higher endogenous n-3 LCPUFA in women because of oestrogenic effects as a reasonable possibility for this sex difference (36). However, this finding should be interpreted with caution in light of evidence from a previous report indicating a stronger association in men than in women between n-3 LCPUFA intakes and the occurrence of depressive episodes (37).
The Korean study was conducted in patients with depression (17) , but subjects in the Japanese study were healthy (16) . Most previous studies showing a significant association between n-3 LCPUFA and depressive symptoms were conducted in depressed patients (13,14) . However, subjects who participated in the NILS-LSA were non-hospitalised, community-dwelling people, and we did not have a diagnosis of clinical depression. There might be other reasons for our findings.
Our findings may be due to the characteristics and the design of the NILS-LSA. Participants in the NILS-LSA included randomly selected age- and sex-stratified individuals; therefore, they were equally distributed to each group by generation and sex. In contrast, most studies investigated more limited samples. The 113 healthy men in the Japanese study represented a limited sample from the same workplace and a similar younger generation. It might be that the equal size of each age- and sex-stratified group in the present study led to the significant results. There was 75 % power to detect significance for serum EPA at the highest quintile, but only 48 % for DHA (POWER procedure, SAS version 9.3), and most other fatty acids also showed low power. For fatty acids other than EPA and DHA, a true effect may therefore have gone undetected.
The most relevant reason for the new findings in our study is that the NILS-LSA has the ability to adjust for confounding factors. Medical history including stroke and diabetes is recognised as a critical complication of depression (38)(39)(40) , although medical history was not adjusted for in the Japanese study mentioned above (16) . As many individual characteristics may be involved in the development of depressive symptoms and ageing, study results should be interpreted after adjusting for confounding factors.
There were few differences between EPA and DHA in the present study, as the quintiles of both fatty acid concentrations showed an association with a lower OR for depressive symptoms, with significant trends. Other reports have indicated the possibility of a better effect of EPA than of DHA on depression (41,42). A recent meta-analysis provided evidence that EPA may be more efficacious than DHA in treating depression (43). Another meta-analysis also suggested that only regimens containing over 60 % EPA showed a highly significant effect in the treatment of depression (42). DHA is also a main component of brain neurons (44) and plays an important role in maintaining regular brain function (45). An intervention study and a meta-analytic review indicated the possibility of a better effect of DHA than EPA on depression (14,46).

This study has several limitations. First, it is not possible to prove a causal relationship between serum fatty acid concentrations and depressive symptoms because this study is a cross-sectional analysis. The association could potentially be explained by depressive symptoms leading to a lower intake of n-3 LCPUFA through decreased appetite and food consumption and a reduction in activity. Further longitudinal studies are needed to assess this relationship.
Second, we did not make a diagnosis of clinical depression in this study. Although the CES-D is a valuable assessment scale to study the relationship between depressive symptoms and several variables, it is not a clinical diagnostic tool (20) .
In conclusion, the present study suggests that the serum levels of EPA and DHA may be associated with depressive symptoms in Japanese with higher blood levels of n-3 LCPUFA. Therefore, even in countries with a higher fish intake, n-3 LCPUFA intake in the usual diet may lower depressive symptoms. Furthermore, the efficacy of n-3 LCPUFA against depressive symptoms may be present in Japanese community dwellers. Longitudinal and intervention studies are needed to elucidate any protective effects of n-3 LCPUFA against depression. | 2018-04-03T06:04:43.074Z | 2015-12-22T00:00:00.000 | {
"year": 2015,
"sha1": "79d79336e6724780f764773094b34801d5091bdb",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/CCB19A13888B8D3617A30F97894F19C5/S0007114515004754a.pdf/div-class-title-cross-sectional-association-between-serum-concentrations-of-span-class-italic-n-span-3-long-chain-pufa-and-depressive-symptoms-results-in-japanese-community-dwellers-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "09f64a3d072866a8bf7c843d6055c06164c2365e",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1692380 | pes2o/s2orc | v3-fos-license | Experimentally Guided Computational Model Discovers Important Elements for Social Behavior in Myxobacteria
Identifying essential factors in cellular interactions and organized movement of cells is important in predicting behavioral phenotypes exhibited by many bacterial cells. We chose to study Myxococcus xanthus, a soil bacterium whose individual cell behavior changes while in groups, leading to the spontaneous formation of aggregation centers during the early stage of fruiting body development. In this paper, we develop a cell-based computational model that relies solely on experimentally determined parameters to investigate the minimal elements required to produce the observed social behaviors in M. xanthus. The model verifies previously known essential parameters and identifies one novel parameter, the active turning, which we define as the ability and tendency of a cell to turn to a certain angle without the presence of any obvious external factors. The simulation is able to produce both the gliding pattern and the spontaneous aggregation center formation observed in experiments. The model is tested against several known M. xanthus mutants, and our modification of parameter values relevant for the individual mutants produces good phenotypic agreement. This outcome indicates the strong predictive potential of our model for the social behaviors of uncharacterized mutants and their expected phenotypes during development.
Introduction
Organized movement of cells is an important event in a variety of biological processes. In multicellular organisms such as vertebrates, organogenesis and morphogenesis require organized migration and passing of signals between cells [1]. As an example for unicellular eukaryotes, the slime mold Dictyostelium discoideum achieves highly organized cell movement in its different pattern formation by relaying diffusible morphogens [2,3]. As representatives for prokaryotes, the myxobacteria display organized gliding patterns during vegetative swarming and form fruiting bodies with various shapes during development, demonstrating their versatility of organized cell movement [4]. While the life cycle and social behaviors of myxobacteria resemble in many respects those of cellular slime molds, the mechanisms to achieve these behaviors differ. Directed motility in Dictyostelium discoideum is based on chemotaxis where cells sense and respond to chemoattractant gradients, resulting in a long-range cell interactions [2]. In contrast, myxobacteria rely on direct local contact dependent signaling and social interactions between neighboring cells to coordinate cell movement [4]. In order to clearly delineate the cellular interactions and identify essential components required for organized movement, Myxococcus xanthus is frequently chosen as a bacterial model system.
Myxococcus xanthus is a gram-negative bacterium initially isolated from cultivated soil. Individual M. xanthus cells are elongated, rod-shaped, about 3-5 μm in length and 0.5 μm in width. They do not have flagella and are therefore unable to swim. Instead, the cells glide on solid surfaces using two distinct motility systems: Adventurous (A)-motility and Social (S)-motility [5]. Single cell movement via A-motility is the preferred type of locomotion on dry surfaces, while coordinated movement via S-motility is mainly utilized on moist surfaces, enabling the bacterium to adapt to a variety of physiological and ecological environments [6]. Type IV pili (TFP), the molecular motors for S-motility, are found at the leading pole of the cells. They function by extending the pili at one cell pole, attaching to surfaces or to another cell and then retracting, thereby pulling the cell forward [7][8][9]. The cell surface extracellular polysaccharide (EPS) was found to be the anchoring substrate for TFP and to trigger retraction [10]. The A-motility engine, on the other hand, was initially thought to be localized at the lagging pole of the cell, powered by the secretion of a gel-like slime through nozzle-like structures, and to generate a propulsive force that pushes the cell forward [11,12]. Although the chemical composition of the slime is not yet determined, it is suggested to include repeat unit polysaccharides [12]. Alternatively, a focal adhesion model has been proposed to explain A-motility [13,14]. In this model, transient adhesion complexes push against the surface and gain traction with the aid of extracellular polysaccharide slime, which enables the cells to move forward in a rotating manner [15]. Although the existing models do not agree regarding the nature of the A-motility engine, both support the excretion of EPS slime on surfaces. In addition, motile M. xanthus cells frequently reverse their gliding directions at 6 to 8 minute intervals [16] by changing the use of the two motility systems between opposite cell poles. The synchronization of the two motors is obtained by spatial oscillations of the corresponding motility proteins [17].
Individual bacterial cell behavior changes in groups and during the complex life cycle of M. xanthus, resulting in the most distinct feature of M. xanthus -its social phenotype. During vegetative growth, M. xanthus cells use their two motility systems to glide across surfaces of soil particles, or on agar surfaces in the laboratory. During colony formation the cells locally align into domains [18]. Under these conditions, cells glide away from the center of a colony towards an area where they retrieve new nutrients from prey that are lysed by their secreted autocides [19]. When nutrients are depleted, M. xanthus cells change their gliding direction from outward to inward and eventually form multicellular dome-like structures called fruiting bodies. During this process, cells stop growing and merge into streams that then join to form initial aggregation centers. It is proposed that the initial aggregate nucleus or kernel may result from a random traffic jam which is later resolved [20]. Cells in the early aggregation centers are motile and large spiral patterns are formed in monolayers on the substratum [21]. These orbiting patterns may persist into later stages of development at the bottom of fruiting bodies [22]. Small adjacent aggregation centers fuse to form larger mounds. When more cells are absorbed into the mounds, they rise up and increase in size and eventually form fruiting bodies. Cells within the fruiting body develop into metabolically dormant myxospores and these myxospores will germinate and become vegetative again when nutrients become available [4].
For a long time, continuing attempts have been made to simulate the development of M. xanthus focusing on different stages [23][24][25][26][27][28][29][30]. However, some of these models are incomplete in capturing important biological properties (e.g. cell reversal and quorum sensing are excluded), while others are overwhelmed by inaccuracy due to the implementation of artificial parameters that were not experimentally determined. In this study, we develop a cell-based model that takes only experimentally determined parameters into account for identification of the minimal elements required to produce the observed gliding patterns and aggregation center formation during the early stage of M. xanthus development (up to 12 hours). Our model verifies the known essential parameters for early aggregation center formation in M. xanthus development, which is the key event in fruiting body formation. Furthermore, we identify one novel parameter, the active turning, which is defined as the action of a cell to turn its cell body. This results in changing the direction of movement at a certain angle without the presence of any obvious external factors. Our simulation demonstrates that this active turning parameter is in fact essential in producing the observed gliding pattern and in facilitating efficient and spontaneous aggregation center formation. We further test our model against the social phenotypes of several known M. xanthus mutants and good agreements with experimental observations are obtained. This suggests that the model can be used as an effective tool in predicting the phenotypes of mutants with defects in any of the parameters important for social behaviors of M. xanthus.
Results
To understand cell behavior, we utilize individual cell motility analysis (Materials and Methods) and experimentally observe the following elements that were previously described to be essential for M. xanthus gliding and aggregation center formation during early development.
Basic cellular properties
M. xanthus are elongated, rod-like cells gliding on surfaces at an average rate of about one cell length per minute. Cells often align parallel with each other when they move in close proximity or collide. This alignment is achieved by type IV pili attachment to the EPS of a neighboring cell and pulling the cells together. However, the cells moving side-by-side do not always adhere, they sometimes depart.
Cellular reversal
M. xanthus cells change direction not by making U-turns, but by reversing the cell's polarity [16]. The initial leading cell pole becomes the lagging pole, and vice versa. When isolated bacteria glide over the surface for a few body lengths, they pause shortly, reverse direction and move back on their original path, at intervals of about 6-8 minutes. The frizzy (Frz) chemosensory system regulates these reversal events [16].
Quorum sensing
Quorum sensing is a cell-cell communication process in which bacteria use the production and detection of extracellular chemicals called autoinducers to monitor the cell population density. Quorum sensing allows bacteria to switch between different gene expression programs: one favored at low cell density and another favored at high cell density [31]. In M. xanthus, cells can sense cell density and respond accordingly. One inducer is proposed to be the 17-kDa cell surface protein called C-signal, which accumulates through contacts between cells at high cell density and during rippling events [32][33][34]. C-signal carries information regarding cell density and cell position with respect to other cells [35]. During development, increased C-signal molecules interact with the Frz chemosensory system via a cascade of covalent modifications [36]. A significant increase in the methylation of FrzCD [37] reduces the cell reversal frequency. EPS, which is also found to accumulate to a high concentration during development, may also be a quorum sensing inducer [26]. Experimental data on the correlation of reversal frequencies with different cell densities were previously reported by Shi et al. [38] and are similar to findings by Jelsbak et al. [34].
EPS slime production and following
When a M. xanthus cell glides over the surface, it leaves a slime trail that is evident as a bright line in phase contrast microscopy. This slime trail is suggested to contain a gel-like slime mainly made up of polysaccharides. The EPS portion of the extracellular matrix important for S-motility are also detected in the slime trails. EPS is important for guiding cell movement and for building up the scaffold that holds fruiting bodies together [39]. However, the source and actual components of EPS slime are not determined yet. In this paper, we use the general term ''EPS slime'' to refer to the material deposited by the cell on the surface. It has been shown that myxobacterial cells tend to follow a pre-existing EPS slime trail [40]. When a cell begins to cross an existing trail, it turns to an acute (less than 90 degrees) angle at the intersection to follow the trail. Cells are also observed to move in both directions on the same trail [4].
Employing the four parameters described above, we develop an off-lattice computational model to simulate myxobacteria gliding behavior and the pattern produced during the initial stage of development. This model is a cell-based model in which cell movement is governed entirely by a set of rules that reflect the properties of motility engines and local cell interactions. In this model, a cell is represented by a string of nodes, which are connected to one another by a segment of equal length. The use of multiple nodes allows the model to closely capture the cells' ability to turn and bend, and hence, geometric constraint and artificiality are minimized. The first node, which represents the cell's leading pole, determines the direction of movement and leads forward the rest of the body nodes. Each cell in our model is assigned a reversal clock that functions as a periodic timer to keep track of the cell's reversal period and adjust it according to the quorum sensing mechanism. A detailed description of the model and its motion algorithms are discussed in the Materials and Methods section.
Four different sets of simulations were carried out for various purposes. The first set of simulations ( Fig. 1) was performed to test the necessity of the four parameters mentioned above in simulating myxobacteria gliding behavior prior to aggregation during development. The values for these parameters are summarized in Table 1, items 1-4. We start with only the basic cell parameters ( Table 1, item 1) and sequentially include additional parameters to the model until all four are used. We simulate 100 cells, initially placed randomly over the surface with periodic boundary conditions, for two hours. EPS slime traces are plotted and shown as light blue dots.
The basic cell parameters coupled with the cell motility algorithm described in the Materials and Methods section allow cells to move on the surface individually. In the absence of collision, the cell moves forward in the direction of their long axis leaving straight traces (Fig. 1A). In Fig. 1B, cellular reversal is incorporated into the model and cells are now able to reverse periodically with a fixed reversal frequency. As a result, they are no longer able to travel as far as in Fig. 1A. Their straight paths tend to be shorter and in absence of collision, cells simply glide back and forth following the same path. In Fig. 1C, reversal frequency is modulated according to the local cell density given in Table 1 item 3 -quorum sensing. However, at the cell density used in all simulations in Fig. 1, the quorum sensing does not appear to have a major impact on the cellular behavior. The chances in which cells will group together and alter their reversal frequency are low, and therefore the simulation result in Fig. 1C looks very similar to Fig. 1B. In the simulation shown in Fig. 1D, each cell is capable of depositing EPS slime while gliding over the surface and at the same time, sensing and following the EPS slime trail laid previously on the surface. The tendency of a cell to follow an EPS slime trail causes it to turn and change direction of movement more frequently. However, a number of long straight paths are still obvious, which is not commonly observed experimentally (Fig. 1F). This suggests that the four parameters are insufficient to accurately represent M. xanthus gliding behavior.
In order to improve the simulation and enable our model to fully represent M. xanthus gliding behaviors during development, we have identified one novel parameter that was overlooked before, the active turning. When single cells initially glide over agar surface, their movement paths do not always follow a straight line. In contrast, single cells are frequently found to change direction of movement without the presence of apparent external factors covered in the known parameters: collision, alignment with other cells, or following an existing EPS slime trail. We define this new cellular behavior as active turning. The active turning angle is measured as the angle between the cell long axis before and after one cell length movement ( Fig. 2A). By examining the active turning events in hundreds of individual cells in recorded experiments, we find that the frequency of active turning is normally distributed with an average of approximately 3 minutes per turn (Fig. 2B). In addition, the turning angle is also normally distributed centered around 30 degrees and there is an equal chance for cells to either turn to the left or turn to the right (Fig. 2C).
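To make these measured distributions concrete, a small sampling sketch (Python; the standard deviations are assumptions for illustration, since only the means and the left/right symmetry are stated here) could draw turning intervals and signed turning angles as follows.

import numpy as np

rng = np.random.default_rng(42)

def sample_active_turn(mean_interval=3.0, sd_interval=1.0,
                       mean_angle=30.0, sd_angle=10.0):
    """Draw one active-turning event: (interval in minutes, signed angle in degrees).

    The interval and angle magnitude are normally distributed around the reported
    means (~3 min per turn, ~30 degrees); the sign is chosen with equal probability
    for left and right turns. Standard deviations here are assumed values.
    """
    interval = max(0.0, rng.normal(mean_interval, sd_interval))
    angle = rng.normal(mean_angle, sd_angle)
    sign = rng.choice([-1.0, 1.0])   # equal chance of turning left or right
    return interval, sign * angle

print(sample_active_turn())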
Since active turning of a single cell is a common event observed during the initial stage of development, we hypothesize that this behavior is important for the early gliding pattern formation. We add the active turning property as the fifth parameter in our computational model using experimentally determined values ( Fig. 2; Table 1 item 5). This addition greatly improves our previous simulation results. In particular, more circular EPS slime trails are now produced, as shown in Fig. 1E. These circular paths later become the starting locations for initial aggregation as more cells join and follow these paths. This result demonstrates that all five experimentally determined parameters are important for gliding pattern formation during M. xanthus early development.
In the next simulation (Fig. 3A), we start from a higher cell density and test whether the five cellular properties are sufficient to produce initial aggregation centers similar to those observed in M. xanthus development assay (Fig. 3B). We simulate 5000 M. xanthus cells, with all five parameters that are set according to values listed in Table 1. The cells are initially placed randomly over the simulation domain with periodic boundary conditions, where they start to glide independently of each other. During gliding, they deposit EPS slime and follow existing slime trails. At this early stage, cells reverse their polarity periodically and their frequent turns are due to either collision between cells, EPS slime following, or the active turning itself. As several cells follow the same trail, they form small groups and move together in streams. Individual cells often join the streams, which results in increasing local cell density inside the streams. Due to the quorum sensing mechanism, cells that locate at a higher neighboring density modulate their reversal frequency. Streaming cells do not reverse as frequently as before and thus travel farther. As local cell density keeps increasing, cells in streams begin to turn to one direction once they encounter an area with high EPS slime concentration. This one directional turning event results in the formation of a circular path. With many cells following this path, it gradually forms an initial aggregation center in which cells orbit in spiral patterns. More cells are absorbed to this center causing it to become more condensed. At this stage, few cells leave the center occasionally and some may reverse to rejoin the center. Since we only simulate a fixed, limited number of cells approximately 90% of cells join the aggregation center at the end of 12 hours, leaving few cells gliding independently outside. This new model exactly reproduces different stages and the multicellular behavior during the aggregation center formation of M. xanthus development. Therefore, we demonstrate that the five parameters are not only necessary for creating the gliding pattern at low cell density but are essential in early aggregation center formation during M. xanthus development at higher cell density.
In the next simulation ( Fig. 4) we aim to verify whether the experimentally observed 30 degree average turning angle is necessary for efficient aggregation. We repeat the simulation in Fig. 3A with several average turning angle distributions between 5 and 60 degrees for up to 20 hours and measure the time needed to complete the initial aggregation. Changing this parameter to values other than 30 degrees produces non-optimal results. The results are summarized in Table 3. An average active turning angle smaller than 30 degrees does not initiate any discernible aggregation process (5 degree angle) or produces incomplete structures (15 degree angle) within the 20 hour simulation period. Active turning angles with normal distributions that are centered at values larger than 30 degrees do not reflect the experimentally observed behavior even though they still allow the formation of aggregation center. In the case of 45 degrees, round aggregates are formed within approximately 9 hours. However, some of them are unstable and may break apart. In the case of 60 degrees, many small round aggregates clump together in regions with high EPS concentration. These small aggregates are unlikely to merge. Consequently, long aggregates with many centers are formed. Figure 4 shows simulation results of wild-type cells with different means of active turning angle distributions. These results indicate that an active turning angle, with a normal distribution centered around 30 degrees, together with the four other previously determined factors are essential for modeling spontaneous and efficient aggregation center formation during M. xanthus development. Using the five experimentally determined parameters, our computational model accurately simulates the early aggregation center formation during M. xanthus development. To further test this model, we simulate 5000 cells of several known genetic mutants during the first 12 hours of development, and test whether the simulation produces phenotypes similar to those observed experimentally (Fig. 5).
SW504 is a M. xanthus mutant strain that does not produce cell surface EPS [41]. This mutant is unable to form aggregation centers under developmental conditions (Fig. 5B). Removal of the EPS parameter (Table 1, item 4) from our model reproduces this mutant phenotype, i.e. all cells move independently of each other and do not form big groups (Fig. 5A). Without EPS, the cells are less likely to align with each other and tend to disperse instead of moving side by side. Mechanistically, this is due to the fact that the active turning becomes a dominant factor in determining direction of cell movement as EPS-guided direction no longer plays a role.
Table 1. Experimentally determined parameters used in the model for simulating the gliding behavior and early aggregate formation during M. xanthus development (excerpt). (1c) Physical interactions: two cells align with each other during pole-to-side collision; during pole-to-pole collision, a cell either slightly changes its orientation to pass by the colliding cell and move forward, or reverses. (2) Cellular reversal: cells reverse from time to time; the initial reversal interval is set according to a random normal distribution with mean = 6.24 minutes and standard deviation = 0.5 minutes [42]. (4) EPS slime production and following: (a) EPS slime production: as a cell moves, it deposits EPS slime on the surface. (b) EPS slime sensing: cells detect EPS slime within one type IV pilus distance, which is approximately equal to one cell length [49]. (c) EPS slime following: a cell follows an existing EPS slime trail ahead of it, moving in the direction with higher EPS slime concentration.

SW600 and SW601 are hypo-reversal and hyper-reversal M. xanthus mutants, respectively [42]. Different from wild-type cells, which reverse every 6-8 minutes and reduce reversal frequency depending on neighboring cell density, SW601 cells constantly reverse approximately every 2 minutes, while SW600 cells only reverse every 120 minutes [16,42]. The early aggregation phenotypes of these two mutants are measured at 12 hours after induction of development and both are found to form curly aggregate structures consistent with their reported ''frizzy'' fruiting body phenotypes [16] (Figs. 5D, 5F). By changing the value of the cell reversal frequency parameter (
Discussion
Understanding how cells interact and which essential elements facilitate their coordination is important in explaining the social behavior exhibited by many bacterial cells. To achieve this goal, we develop an off-lattice, cell-based computational model to simulate gliding behavior and aggregation center formation during M. xanthus development. In this cell-based model, cell movement is determined solely by a motility algorithm and local rules of interaction that are based on experimentally determined parameters. Thus, our model offers transparency and flexibility as advantages over other off-lattice-based statistical mechanics models.
In this paper, we show that our model successfully reproduces the array of social behaviors of M. xanthus at both low and high density, from single cells to the spontaneous formation of multicellular aggregation center involving thousands of cells without the necessity of creation of an artificial traffic jam, as used in some previous models [24,25]. This automatic formation of aggregation center is supported by the observations of Curtis et al. [43], Holmes et al. [29] and Hendrata et al. [30] in their simulation models. These models included the turning and bending of cells as random fluctuations due to stochasticity in the models' algorithm. The cumulative effect of cell turning and bending enabled the spontaneous aggregation center formation in their simulations. However, our experimental data show that turning angle and frequency are not completely random, but instead follow normal distributions centered at distinct values. The individual cell movement, gliding pattern as well as the timing of aggregation center formation in our simulation closely matches the real cell behaviors observed in recorded experiments.
With this model we test the sufficiency of several previously determined biological elements that are hypothesized to be important for multicellular social behavior during both gliding and early aggregation center formation in M. xanthus development. We demonstrate that EPS slime depositing and following, cellular reversals, as well as cell density dependent reversal frequency are essential for these biological processes. In addition, we identify for the first time a new element for social behavior that is necessary for accurately producing the gliding pattern and efficient aggregation center formation - the active turning. Active turning is the tendency of cells to randomly change their direction during movement without the presence of factors such as an EPS slime trail or cell collision. This parameter is important in the early stage of development because the cell's tendency to bend its body up to a certain angle determines the initial formation of circular slime trails and the EPS slime pattern on the surface. The turning and curly trajectories of motile cells facilitate the spiral movement during aggregation and help create the starting locations of dense aggregation centers. Furthermore, we note that the angle for this active turning is very important. Only by turning to a certain preferred angle, 30 degrees in this case, can the cells aggregate most efficiently. While the underlying mechanisms of this active turning behavior are currently unknown, it could be produced by a number of scenarios including random imbalances in motor function or yet to be determined external factors such as nanoscale surface properties. In addition, undetermined myxobacterial cellular features may be involved. Thus, careful examination of mutant strains defective in aggregation while being intact for motility, reversal, EPS production and quorum sensing has the potential to reveal the molecular basis for active turning.
In addition, our model includes the cell density dependent reversal frequency in the simulation. Although this phenomenon is commonly observed, previous models either ignored it or approximated reversal as all-or-none [25,29,30], overlooking the importance of cells quorum sensing their environment and adjusting their behaviors accordingly. Our model suggests that cell density dependent reversal in M. xanthus is indispensable, since without it the efficiency of aggregation center formation decreases. It is known that the Frz chemosensory system is responsible for the reversal events [16]. Methylation or demethylation of FrzCD, the methyl-accepting chemotaxis protein (MCP) of the Frz system, may provide bacteria with a short-term ''memory'' of concentrations of attractants [15], in this case the EPS slime, to determine the frequency of reversal. Similar to the run-tumble behavior of swimming in E. coli, through controlled reversals M. xanthus may employ a biased random walk [15,17,44,45]. This eventually leads the cells toward places with higher cell concentration and more EPS slime, finally resulting in multicellular fruiting body formation. Some previous models [32][33][34] also indicate that slime trails decay over time. As there is no experimental evidence supporting this argument, we omit their decay in our model. Finally, our model not only provides an approach to decipher the essential minimal factors for individual processes during early aggregation formation, but also proves to be useful in determining how changes in individual behavior due to genetic mutation affect the aggregation formation phenotype (Fig. 5). The model is tested against three known M. xanthus mutant strains, SW504 (no EPS), SW600 (reduced reversal frequency) and SW601 (elevated reversal frequency), and all three distinct phenotypes are reproduced by changing relevant parameters in the model. Therefore, this computational model can be used as an effective tool to predict mutant social behaviors and phenotypes during development.
Our model provides a tool to understand how interaction between cells facilitates organized movements and which minimal elements are required for these social behaviors. Understanding the cellular social interactions in M. xanthus may shed light on the self-organizing processes in fruiting body formation in other Myxobacteria and may also provide insight for the organized movement of cells in the spread of biofilm in an infected tissue and morphogenesis in higher organisms.
M. xanthus development assay
All M. xanthus strains used in this study are listed in Table 2. M. xanthus cells were grown in CYE medium (1% casitone, 0.5% yeast extract, 8 mM MgSO₄ in 10 mM Mops buffer, pH 7.6) at 32°C on a rotary shaker at 300 rpm and were maintained on 1.5% CYE agar plates. M. xanthus cells were grown in CYE to the exponential growth phase and concentrated to OD₆₀₀ = 5 (2.5 × 10⁹ cells ml⁻¹) in TPM buffer (10 mM Tris-HCl pH 7.6, 1 mM KH₂PO₄, 8 mM MgSO₄). A 5 μl aliquot of concentrated cells was spotted onto CF agar [46] and incubated at 32°C. Snapshots were taken every 3 hours during development using a Nikon Eclipse TE200 inverted microscope fitted with a SPOT camera (Diagnostic Instruments). Images at 12 hours were used to represent the early aggregation center formation stage.
Individual cell motility analysis
Individual cell motility parameters were determined following previously described protocols [47]. Briefly, 1 × 10⁵ exponentially growing cells (about 1 ml) were diluted 10-fold with MOPS buffer and spotted on 1.5% agar MOPS plates. Cell movements were viewed with a Nikon Eclipse TE200 inverted microscope at 400× total magnification and recorded with a SPOT camera. Active turning is defined as the ability and tendency of a cell to turn to a certain angle larger than 5 degrees without the presence of any apparent external factors (existing/visible EPS trail or cell collision). The active turning angle was measured by calculating the acute angle between the cell long axis before and after movement of one cell length from an arbitrarily chosen starting point. The active turning intervals were calculated by measuring the amount of time (minutes) per active turning event for a single cell. 200 different cells were examined for each parameter value.
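As a small illustration of this angle measurement, the following sketch (Python; purely illustrative, not the authors' analysis code) computes the acute angle between the cell's long-axis direction before and after a one-cell-length displacement, given the two orientation vectors.

import math

def acute_angle_deg(u, v):
    """Acute angle (degrees) between two 2-D direction vectors u and v."""
    dot = u[0] * v[0] + u[1] * v[1]
    nu = math.hypot(*u)
    nv = math.hypot(*v)
    cos_theta = max(-1.0, min(1.0, dot / (nu * nv)))
    theta = math.degrees(math.acos(cos_theta))
    return min(theta, 180.0 - theta)   # report the acute angle

# Example: long axis before and after one cell length of movement.
before = (1.0, 0.0)
after = (math.cos(math.radians(30)), math.sin(math.radians(30)))
print(round(acute_angle_deg(before, after), 1))  # 30.0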
Detailed description of the computational model
In this subsection, we describe the implementation of each of the experimentally determined parameters (listed in Table 1) into our model.
1. Cell representation and basic properties. The computational model for M. xanthus we develop here is a two-dimensional off-lattice model in which a cell is represented as a string of N nodes connected by (N-1) segments of equal length L. As the cell length varies between 3-5 μm, the number of nodes N is varied from three to five. Cell width is set to be 0.1 of its length. Three nodes (or two segments) are required minimally for cell bending and turning. Fig. 6 shows a cell with three nodes (N = 3), where the black dots indicate the two poles of the cell. Cell nodes are numbered with the leading pole being node 1. In this representation, cell orientation is defined to be the vector pointing from the lagging pole to the leading pole, and it is given by C_k = (n_k^1 - n_k^N) / |n_k^1 - n_k^N| (1), where n_k^1 and n_k^N are the positions of the leading and lagging poles of cell k, respectively.
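A minimal data-structure sketch of this representation (Python; names and values are illustrative assumptions, not the authors' code) stores each cell as an array of node positions, with node 0 as the leading pole, and derives the orientation as the normalized vector from the lagging to the leading pole.

import numpy as np

class Cell:
    """A cell as a string of N nodes; nodes[0] is the leading pole."""
    def __init__(self, nodes):
        self.nodes = np.asarray(nodes, dtype=float)  # shape (N, 2)

    def orientation(self):
        """Unit vector from the lagging pole to the leading pole (equation (1))."""
        d = self.nodes[0] - self.nodes[-1]
        return d / np.linalg.norm(d)

# Example: a three-node cell lying along the x-axis, leading pole on the right.
cell = Cell([(2.0, 0.0), (1.0, 0.0), (0.0, 0.0)])
print(cell.orientation())  # [1. 0.]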
In our model we assume that cell movement is determined by the leading pole (node 1). Cell velocity is fixed at 4.5 μm/min. One simulation time step corresponds to 1/3 minutes. At each iteration, cell k updates its leading pole position according to the discrete equation x_k(t + Δt) = x_k(t) + v Δt V_k(t) (2), where Δt is the time step, x_k(t) is the position of the leading pole of cell k at time t, v is the cell velocity and V_k(t) is the motility direction at time t. The rest of the nodes get pulled forward following the leading pole; for example, the position of node 2 at time (t + Δt) will be the position of node 1 at time t, etc. In general, the direction of movement V_k is determined by three possible factors: collision with other cells, EPS-driven direction (depositing and following EPS slime), and the cell's active turning. The EPS-driven direction and the active turning mechanism will be discussed in items 4 and 5 below. Collision, if it occurs, is primarily resolved according to the interaction rules stated in Table 1, item 1. In our simulation, we say the leading pole of cell k collides with a node of cell j if the distance between them is less than one cell width. There are two cases: (a) Pole-to-side collision occurs when the leading pole (node 1) of cell k collides with a node of cell j other than the leading pole (node number ≠ 1). Cell k then aligns with cell j. We model the alignment in such a way that cells eventually orient with their neighbors to an acute angle, that is, setting C_k = C_j if C_k · C_j ≥ 0 and C_k = -C_j otherwise (3), where C_k and C_j are the orientations of cells k and j, respectively, as defined by (1). (b) Pole-to-pole collision occurs when the leading pole (node 1) of cell k collides with the leading pole (node 1) of cell j. Cell k randomly chooses, with uniform probability, to either slightly change its orientation to pass by the colliding cell and move forward, or reverse its orientation. That is, for the current iteration we have either C_k = R(δ)C_k, where R(δ) is the rotation matrix with a random angle δ ∈ [-5°, 5°], or C_k = -C_k (4).
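A sketch of this update rule (Python; hypothetical helper names, not the authors' code) advances the leading pole by v·Δt along the current motility direction and lets each following node take the previous position of the node ahead of it, as described for equation (2).

import numpy as np

V = 4.5        # gliding speed, um per minute
DT = 1.0 / 3.0 # time step, minutes

def step(nodes, direction):
    """One iteration of equation (2) for a cell given as an (N, 2) array.

    nodes[0] is the leading pole; direction is the unit motility vector V_k(t).
    Follower nodes inherit the previous position of the node ahead of them.
    """
    nodes = np.asarray(nodes, dtype=float)
    new_nodes = nodes.copy()
    new_nodes[1:] = nodes[:-1]                        # nodes get pulled forward
    new_nodes[0] = nodes[0] + V * DT * np.asarray(direction)
    return new_nodes

nodes = np.array([[2.0, 0.0], [1.0, 0.0], [0.0, 0.0]])
print(step(nodes, (1.0, 0.0)))  # leading pole advances 1.5 um along +x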
2. Cellular reversal. Cellular polarity reversal is regulated by a reversal clock that is assigned to each cell. Initially, each cell is assigned a random reversal period T_r that is normally distributed with mean 6.24 minutes [42]. At the beginning of the simulation, each clock is set to a random phase between 0 and T_r. At each iteration, this phase is increased by 1 unit and when it reaches T_r, the cell reverses and its clock is reset to 0. During cellular polarity reversal, the cell orientation is reversed (i.e. setting C_k = -C_k), and so are the two cell poles and their functions.
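A compact sketch of such a reversal clock (Python; illustrative only, with the phase advanced by the time step rather than by one abstract unit) draws a period from a normal distribution, starts at a random phase, and flips the cell orientation when the phase reaches the period.

import numpy as np

rng = np.random.default_rng(0)

class ReversalClock:
    """Periodic timer that triggers a polarity reversal (C_k -> -C_k)."""
    def __init__(self, mean=6.24, sd=0.5, dt=1.0 / 3.0):
        self.period = rng.normal(mean, sd)           # reversal period T_r, minutes
        self.phase = rng.uniform(0.0, self.period)   # random initial phase
        self.dt = dt

    def tick(self, orientation):
        """Advance one time step; return the (possibly reversed) orientation."""
        self.phase += self.dt
        if self.phase >= self.period:
            self.phase = 0.0
            return -np.asarray(orientation)          # reverse polarity
        return np.asarray(orientation)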
3. Quorum sensing. Our model allows the reversal frequency to be modulated during development according to local cell density. Local cell density is measured by counting the number of neighboring cells that partly or entirely lie within the cell's measuring domain, a square region enclosing the cell whose sides equal one cell length. The cell's center of mass is taken to be the center of the measuring domain. Fig. 7 shows a cell, colored in dark grey, having 5 neighboring cells as its local cell density. When a cell reverses, its local density is calculated to see if its current reversal period T_r needs to be adjusted according to the values in Table 1, parameter 3 (quorum sensing).
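The density count can be sketched as follows (Python; the neighbor test uses cell centers as a simplification of "partly or entirely within the domain", and the density-to-period mapping is a made-up placeholder, since the actual values from Table 1 are not reproduced here).

import numpy as np

def local_density(cell_center, other_centers, cell_length):
    """Count neighbors whose centers fall inside a square measuring domain
    of side one cell length, centered on the cell's center of mass."""
    half = cell_length / 2.0
    count = 0
    for c in np.asarray(other_centers, dtype=float):
        if abs(c[0] - cell_center[0]) <= half and abs(c[1] - cell_center[1]) <= half:
            count += 1
    return count

def adjusted_period(density, base_period=6.24):
    """Placeholder quorum-sensing rule: lengthen the reversal period as local
    density grows (the real mapping comes from Table 1, item 3)."""
    return base_period * (1.0 + 0.5 * min(density, 10) / 10.0)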
4. EPS production and EPS-driven direction. In all myxobacterial cells, except the EPS-mutants, the EPS-driven direction becomes more prominent than direction due to active turning as cells tend to follow slime trails previously laid on the surface. In our algorithm, we take into consideration both the effect of exopolysaccharide (EPS) slime secretion from the rear pole and the cells' tendency to follow the slime trail deposited previously by other cells.
Our model is essentially a discrete model in which the cell position is stored by keeping track of the position of each cell node in a two-dimensional domain. Similarly, we keep track of the EPS slime trail by recording the positions of the EPS points deposited on the surface from the cell's lagging pole. EPS slime secretion pushes the cell directly forward, causing the cell to orient itself along its long axis given by (1). In Fig. 8, a cell leaves behind a light blue trail representing the slime with EPS points in it.
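Keeping track of the trail can be as simple as appending the lagging-pole position to a growing list of slime points; the container and the one-point-per-iteration deposition rate below are assumptions for illustration.

```python
import numpy as np

class SlimeField:
    """Discrete EPS trail: the positions of slime points deposited on the surface."""
    def __init__(self):
        self.points = []                      # list of 2D coordinates

    def deposit(self, lagging_pole):
        """Drop one EPS point at the cell's rear (lagging) pole."""
        self.points.append(np.asarray(lagging_pole, dtype=float).copy())

    def as_array(self):
        """Return all deposited points as an (n, 2) array for searching."""
        return np.array(self.points) if self.points else np.empty((0, 2))
```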
In addition to depositing EPS, a cell also has the capability to sense and follow a slime trail previously deposited by other cells. Slime reduces surface friction, and thus moving on a slime trail requires less energy than moving on a dry surface. This is reflected in the tendency of a cell to turn voluntarily, at an acute angle, whenever it encounters a slime trail. To model this behavior, we first define the EPS slime searching region to be the semi-circular region directly ahead of the leading pole with radius approximately one cell length. This EPS searching region indicates how far a cell can sense a nearby EPS trail and delimits the region toward which the cell can move. A cell searches for EPS points inside this searching region and turns at an acute angle to follow the trail on which these EPS points are located. If more than one slime point within the EPS searching area satisfies the acute angle requirement, then the cell moves towards the sub-region with more EPS slime points. In experiments, this corresponds to the region with higher EPS concentration. Fig. 8 illustrates a cell with a yellow semi-circular region indicating its EPS searching area. Here, the cell turns in the direction towards the slime point A since the sub-region in which A lies has 5 slime points, while the sub-region in which slime point B is located has only 2. Thus, the direction due to slime orientation is the unit vector (s_A − x_k)/|s_A − x_k| pointing from the leading pole of cell k toward the slime point A, subject to (s_A − x_k) · C_k > 0 (5), where s_A is the position of the slime point A. The inequality in (5) ensures that the cell turns at an acute angle when following a slime trail. 5. Active turning mechanism. The active turning mechanism relies on two parameters, namely its frequency and turning angle. Each cell is assigned two variables: one keeps track of the time of the last active turning and the other is a random active turning interval T_t whose value follows a normal distribution around 3 minutes per turn. When a moving cell neither collides with another cell nor encounters a slime trail, active turning takes place provided that the time interval since the last active turning equals or exceeds T_t. Otherwise, the cell moves following its tail-to-head orientation defined by equation (1).
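A sketch of the slime-following rule: candidate EPS points are those inside the semi-circular search region (radius one cell length, acute angle to the current orientation), and the cell steers toward the candidate surrounded by the most slime points, a stand-in for the denser sub-region; the sub-region radius and the scoring scheme are assumptions.

```python
import numpy as np

def slime_direction(leading_pole, C_k, slime_points, cell_length=5.0, sub_radius=1.0):
    """Unit direction toward the densest slime sub-region ahead of the cell,
    or None if no EPS point lies in the search region at an acute angle to C_k."""
    if len(slime_points) == 0:
        return None
    rel = slime_points - leading_pole
    dist = np.linalg.norm(rel, axis=1)
    ahead = (dist > 0) & (dist <= cell_length) & (rel @ C_k > 0)   # acute angle, within reach
    candidates = slime_points[ahead]
    if len(candidates) == 0:
        return None
    # score each candidate by how many EPS points fall within an assumed
    # sub-region radius around it (a proxy for local EPS concentration)
    scores = [np.sum(np.linalg.norm(slime_points - c, axis=1) <= sub_radius) for c in candidates]
    target = candidates[int(np.argmax(scores))]
    d = target - leading_pole
    return d / np.linalg.norm(d)
```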
Thus, the direction of movement V_k of cell k in equation (2) becomes a weighted combination of the cell's tail-to-head orientation, the slime-driven direction and the active-turning direction A_k = ⟨cos α, sin α⟩, where α is the active turning angle and the weights a_k^i, b_k^i and c_k^i at each iteration i are determined based on the following motility algorithm: (a) Checking the reversal clock and quorum sensing. If the phase of the cell's reversal clock has reached its reversal period T_r, the cell reverses and we set the weights a_k^i = b_k^i = c_k^i = 0, as the cell does not move at this particular iteration. Local cell density is then computed to check whether the current reversal period T_r needs to be modulated. If the cell does not reverse, go to step (b). | 2014-10-01T00:00:00.000Z | 2011-07-19T00:00:00.000 | {
"year": 2011,
"sha1": "79b31ffd15bce62c545030a291325db05ec7dc5a",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0022169&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "79b31ffd15bce62c545030a291325db05ec7dc5a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
59396941 | pes2o/s2orc | v3-fos-license | Optimization and Characterization of Thermostable Endo and Exocellulases by Humicola sp. SKESMBKU03
Article Information Humicola sp. SKESMBKU03, a cellulase producer, was isolated from horse dung manure collected from Hyderabad, Telangana. In the present study, endo- and exoglucanase production by Humicola sp. SKESMBKU03 was studied, and optimization of the cultural conditions to enhance enzyme production is reported. Among the twenty fungal cellulase producers isolated from different thermogenic habitats, the fungal strain 'HD3', identified as Humicola sp. SKESMBKU03, exhibited the highest cellulase activity in the plate screening assay. To enhance the production level of the enzyme, different cultural conditions were optimized, and the optimum pH and temperature for endo- and exoglucanase production were found to be 5.0 and 45°C, respectively. Maximum growth as well as enzyme production was recorded on the 3rd day of incubation in shake flasks (100 RPM) containing Mandels and Weber medium. Urea and malt extract among the organic nitrogen sources, and ammonium chloride among the inorganic nitrogen sources, were found to be the best nitrogen sources (0.2%) for endo- and exoglucanase production. The endo- and exoglucanase activities were higher in media containing glucose as the carbon source (1%), followed by xylose and lactose. The organism showed maximum dry weight at pH 9.0-10.0 and 45°C, with cellulose as the carbon source and yeast extract and malt extract as the nitrogen sources. The endo- and exocellulases produced by Humicola sp. SKESMBKU03 are highly stable at pH 8.0 and 75°C. The results indicate that the endo- and exocellulases produced by Humicola sp. SKESMBKU03 are more stable at high temperature and alkaline pH. Article History: Received: 31-09-2014 Revised: 01-12-2014 Accepted: 12-12-2014
INTRODUCTION
Thermophilic fungi are species that grow at a maximum temperature of 50°C or above, and a minimum of 20°C or above (Maheshwari et al., 2000). Based on their habitat, thermophilic fungi have received significant attention in recent years as a source of new thermostable enzymes for use in the production of ethanol (Olsson and Hahn, 1996), organic acids (Luo et al., 1997), and other chemicals (Cao et al., 1997). Cellulose is one of the main components of plant cell wall material and is the most abundant and renewable non-fossil carbon source on Earth. Degradation of cellulose to its constituent monosaccharides has attracted considerable attention for the production of food and biofuels (Sukumaran et al., 2010). The degradation of cellulose to glucose is achieved by the cooperative action of endocellulases (EC 3.2.1.4), exocellulases (cellobiohydrolases, CBH, EC 3.2.1.91; glucanohydrolases, EC 3.2.1.74), and beta-glucosidases (EC 3.2.1.21). Endocellulases hydrolyze internal glycosidic linkages in a random fashion, which results in a rapid decrease in polymer length and a gradual increase in the reducing sugar concentration. Exocellulases hydrolyze cellulose chains by removing mainly cellobiose either from the reducing or the non-reducing ends, which leads to a rapid release of reducing sugars but little change in polymer length. Endocellulases and exocellulases act synergistically on cellulose to produce cellooligosaccharides and cellobiose, which are then cleaved by beta-glucosidase to glucose (Vlasenko et al., 2010;Duo-Chuan et al., 2011). An important impediment in the exploitation of cellulose is the fact that the production of cellulase is expensive, contributing as much as 50% of the overall cost of hydrolysis. This is due to the low specific activity of cellulase, necessitating a large quantity of the enzyme for extensive hydrolysis. Considerable progress has been made in the optimization of culture conditions (Yu et al., 1998;Techapun et al., 2002;Olsson et al., 2003) and mode of cultivation (Xu et al., 2002). The optimization of fermentation conditions is an important problem in the development of economically feasible bioprocesses (Xue-Cai Hao et al., 2006). This work focuses on factors relevant to the enhancement of enzymatic hydrolysis of cellulosic compounds using the fungus Humicola sp. SKESMBKU03 as a potential producer of cellulases.
Collection of Samples
Total 100 Samples from different thermogenic habitats (Zoo dump, Nests of birds, Industrial waste, Vegetable market compost, Mushroom compost, Horse dung manure, Decomposing litter, Soils from furnace area,
Cattle dung and Municipal waste) of Telangana (India) were collected. The samples were taken by means of sterilized spatulas and collected in sterile sealed polythene bags. The samples were then brought to the laboratory for microbiological study.
Isolation of Fungi
All the samples were processed within 1-2 h of collection. For isolation, the dilution plate method was used (Apinis, 1963). Ten grams of sample was placed in 100 ml of sterile water, shaken for proper mixing and serially diluted (10-fold); 1 ml of the 10⁻⁴ and 10⁻⁵ dilutions was plated on yeast extract starch agar medium (YpSs: yeast extract 5 g, starch 15 g, K2HPO4 1 g, MgSO4 0.5 g per 1000 ml of distilled water) and yeast extract glucose agar medium (yeast extract 5 g, glucose 15 g, K2HPO4 1 g, MgSO4 0.5 g per 1000 ml of distilled water) (Cooney and Emerson, 1964). Streptomycin and rose bengal were added to the molten medium after autoclaving, and the plates were incubated at 45°C for 3-4 days to identify the fungi. All the isolates from thermogenic habitats of Telangana were given a code indicating their source, identified based on their morphology and mycelial characters according to Johri et al. (1999), Salar et al. (2007) and Jean et al. (2000), and maintained on YpSs slants at low temperature (4°C) for further study.
Screening for Cellulase Enzyme
All twenty isolates were screened for cellulolytic activity on selective carboxymethyl cellulose agar medium (containing, in grams/litre: NaNO3 2.0, KH2PO4 1.0, MgSO4·7H2O 0.5, KCl 0.5, carboxymethyl cellulose sodium salt 2.0, peptone 0.2, and agar 17.0) (Sacin et al., 2011). Plates were spot-inoculated with spores of the pure cultures and incubated at 45°C. After 3 days of incubation, the plates were flooded with Gram's iodine (2.0 g KI and 1.0 g iodine in 300 ml distilled water) for 5 minutes, and the diameter of the zone of decolorization around each colony was measured (Ramesh et al., 2008). The fungal colony showing the largest zone of decolorization was selected for further studies on cellulase production.
Identification of Thermophilic Fungi
The isolate (HD3) showing a large clear zone of cellulolysis on carboxymethyl cellulose agar was further identified to the species level by molecular analysis of the internal transcribed spacer (ITS) region of the nuclear ribosomal DNA (rDNA). The rDNA was extracted from mycelium using the thermolysis method according to the protocols described by Zhang et al. (2010). The ITS regions were amplified by the polymerase chain reaction (PCR) with the universal primer pair ITS1 (5′-TCCGTAGGTGAACCTGCGG-3′) and ITS4 (5′-TCCTCCGCTTATTGATATGC-3′) (Jingfeng et al., 2013). The PCR products were purified and sequenced in both directions by Macrogen (Macrogen Inc., Geumchen-gu, South Korea). The fungal sequences were compared against the GenBank database using the basic local alignment search tool (BLAST) for nucleotide analysis. The identification was confirmed by >98% similarity to fungal species deposited in GenBank. The organism was identified as Humicola sp. and the sequence was deposited in EMBL under accession number HG934775.1 (Humicola sp. SKESMBKU03).
Enzyme Assay
Carboxymethyl cellulase (CMCase) or endoglucanase activity was determined according to the method of Ghose (1987): the reaction mixture, containing 0.5 ml of 1% carboxymethyl cellulose solution, 1 ml of 0.05 M phosphate buffer (pH 5.5) and 0.5 ml of enzyme filtrate, was incubated at 60°C for 30 minutes and the released reducing sugars were estimated against a blank. Exoglucanase activity was determined using cellulose powder as a substrate. The reaction mixture (2 ml) contained 1 ml of a 1% suspension of the substrate and 1 ml of enzyme filtrate and was incubated for 1 hour at 60°C; enzyme activity was expressed in international units (IU) (Bhat and Maheshwari, 1987). Absorbance of the above solution was measured at 575 nm. One international unit of cellulase activity is the amount of enzyme that forms 1 µmol of glucose (reducing sugar as glucose) per minute during the hydrolysis reaction. Reducing sugars were determined by the dinitrosalicylic acid (DNS) method (Miller, 1959).
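For orientation, the IU definition above can be turned into a small calculation; the glucose standard-curve slope and intercept below are placeholder values (a real curve must be fitted from glucose standards), while the reaction volume, filtrate volume and incubation time follow the endoglucanase assay described in the text.

```python
def cellulase_activity_iu_per_ml(a575, slope=0.80, intercept=0.02,
                                 reaction_vol_ml=2.0, enzyme_vol_ml=0.5,
                                 time_min=30.0, glucose_mw=180.16):
    """Convert a DNS absorbance reading at 575 nm into IU/mL of enzyme filtrate,
    where 1 IU = 1 umol glucose released per minute."""
    glucose_mg_per_ml = max(0.0, (a575 - intercept) / slope)   # assumed standard curve (mg/mL)
    glucose_umol = glucose_mg_per_ml * reaction_vol_ml / glucose_mw * 1000.0
    return glucose_umol / (time_min * enzyme_vol_ml)

# e.g. a 30-min endoglucanase reaction with 0.5 mL filtrate and A575 = 0.45
print(round(cellulase_activity_iu_per_ml(0.45), 3))
```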
Effect of pH on Endo- and Exoglucanase Production
The effect of pH on endo- and exocellulase production was assessed by cultivating Humicola sp. SKESMBKU03 in 150 ml flasks containing 25 ml of optimized medium at pH values ranging from 3.0 to 10.0. The pH of the medium was adjusted using 1 N HCl or 1 N NaOH, and the flasks were incubated at 45°C in a shaker incubator (100 RPM) for 3, 6, 9 and 12 days; at regular intervals the enzyme was extracted and used as the endo- and exocellulase source (Roberto et al., 2005).
Effect of Temperature on Endo- and Exoglucanase Production
In order to determine the optimum temperature for endo- and exocellulase production by Humicola sp. SKESMBKU03, fermentation was carried out at different temperatures (35, 45, 50 and 55°C) for 3, 6, 9 and 12 days. Enzyme activity was estimated at each temperature and time period according to the standard assay procedure (Gomes et al., 2000).
Optimization of Carbon and Nitrogen Sources for Endo- and Exoglucanase Production
To identify a suitable carbon source for endo- and exoglucanase production by Humicola sp. SKESMBKU03, various carbon sources were tested (maltose, fructose, glucose, lactose, xylose, starch, carboxymethyl cellulose, cellulose and sucrose). These carbon sources were tested individually at a concentration of 1% in the production medium and incubated at 45°C in a shaker incubator for 3, 6, 9 and 12 days. To investigate the effect of organic and inorganic nitrogen sources on endo- and exoglucanase production, experiments were carried out with the addition of different nitrogen sources, namely the organic sources peptone, yeast extract, malt extract, beef extract and urea, and the inorganic sources (NH4)2SO4, NaNO3, KNO3, NH4Cl and KH2PO4, each added at a concentration of 0.2% to the Mandels and Weber production medium. All the flasks were incubated at 45 ± 2°C in an orbital shaker incubator at 100 RPM. At regular intervals the enzyme assay was carried out (Coronei et al., 1991).
Effect of Static and Agitated Condition on Cellulase Production
To study the effect of static and agitated conditions on cellulase production by the fungus, two sets of fermentation medium were prepared. In all the sets, the applied conditions (pH, temperature) were kept constant. One set of inoculated medium was kept in an incubator without shaking, while the other sets were kept at different speeds (100, 150 and 200 RPM) in an orbital shaker incubator. Cellulase activity was determined by assaying the culture filtrate on days 3, 6, 9 and 12 of incubation (Shahriarinour et al., 2011).
Stability Studies on Endo- and Exoglucanases
Thermal stability was investigated by measuring the enzyme activity after keeping the aqueous enzyme solution for 1 hour at temperatures between 35°C and 80°C in the absence of substrate and at a constant pH of 5.5. Remaining enzyme activity was determined by enzyme assay. The pH stability (pH 3.0 to 10.0) of the crude enzyme was evaluated by mixing the enzyme solution and buffer to a final proportion of 0.5:1 (v/v). These solutions were incubated at 45°C for 1 hour and the remaining activity was determined by enzyme assay (Quiroz-Castaneda et al., 2009).
Determination of Fungal Biomass
At regular intervals of time (3, 6, 9 and 12 days) the contents of the flasks were aseptically passed through pre-weighed Whatman No. 1 filter paper to separate the mycelial mat from the culture filtrate. The filter papers, along with the mycelial mat, were dried at 70°C in an oven overnight and the weight was recorded. The difference between the weight of the filter paper bearing the mycelial mat and the weight of the pre-weighed filter paper represented the fungal biomass, which was expressed in terms of dry weight of the mycelial mat in milligrams (Shilpi et al., 2011).
Collection of Samples and Isolation of Fungi
In total, from the 100 samples, 20 isolates were selectively obtained from different thermogenic habitats of Telangana (India) based on the methods described in Materials and Methods. Yeast extract starch agar medium was found to support the growth of thermophilic fungi better than yeast extract glucose agar medium.
Screening and Identification of Thermophilic Fungi for Cellulase Production
All twenty isolates were screened for cellulolytic activity. All the tested strains were capable of producing cellulase, but to varying degrees (Table 1). The most potent species for cellulase production was Humicola sp. SKESMBKU03, isolated from horse dung manure, with a plate clearing zone of 3.5 cm in diameter (Figures 1 and 2). It was identified morphologically (colonies on YpSs agar at 45°C are white at first but soon turn jet-black as spore maturation proceeds; hyphae colourless, prostrate, branched, septate, 2-5 µm wide; conidiogenous cells small, 8.7 x 3.7 µm; conidia dark brown, smooth-walled, translucent, generally globose, 7-12.5 µm in diameter, or oval, 11.2-14.6 x 7.5-10 µm, produced on hyphal branches or developed intercalarily) and molecularly, and the sequence was deposited in EMBL under accession number HG934775.1 (Figure 3). The pure culture of the fungus was obtained by the hyphal tip method and maintained at 4°C for further use.
Effect of pH on Endo- and Exoglucanase Production
Humicola sp. SKESMBKU03 showed the highest production of endo- and exoglucanase at pH 5.0 and 6.0 on the third day of incubation; as incubation proceeded, the endo- and exoglucanase activity decreased (Table 2).
Effect of Temperature on Endo- and Exoglucanase Production
The effect of temperature on endo- and exoglucanase production was studied over the temperature range 35-55°C (Table 4). The results showed that the optimal temperature for endo- and exocellulase production was 45°C. Only a meager amount of enzyme was produced at 50°C on the 3rd day of incubation.
Optimization of Carbon and Nitrogen Sources for Endo- and Exoglucanase Production
To evaluate which carbohydrates cause induction or repression of cellulase, the organism was grown on different carbon sources. In general, enhanced production of the enzyme was observed with all the tested sugars. Among the carbon sources examined, glucose was found to be the best inducer in SmF on the third day of incubation, followed by xylose (Table 6). The effect of different organic as well as inorganic nitrogen sources on cellulase production by Humicola sp. SKESMBKU03 was also studied. Among the various nitrogen sources tested, urea and malt extract were found to be the most effective for endo- and exoglucanase production (Table 8).
Effect of Static and Agitated Conditions on Endo- and Exoglucanase Production
Cellulase production was studied under static and shake-flask conditions (100, 150 and 200 RPM) for 3, 6, 9 and 12 days at 45°C. Enzyme production by Humicola sp. SKESMBKU03 was found to be higher under agitated conditions than under static conditions. It was noticed that the optimum level of rotation needed for maximum production of both endo- and exoglucanases was 100 RPM (Table 10).
Stability Studies on Endo- and Exoglucanases
The effect of pH on endo- and exoglucanase stability is shown in Figure 4. Samples of the enzyme solutions in buffers of various pH values were incubated at 45°C for 1 h and the remaining activities were then assayed. The activities were stable between pH 5.0 and 7.0 for endoglucanase and between pH 5.0 and 6.0 for exoglucanase. Enzyme activity decreased with more extreme pH treatment. Activity assays at different temperatures (35-85°C) showed that the endo- and exoglucanases were active over a broad range of temperatures, with a maximum at 45-75°C for both enzymes (Figure 5).
Determination of Fungal Biomass
The biomass production of Humicola sp. SKESMBKU03 showed that biomass increased with the incubation period. The results revealed that Humicola sp. SKESMBKU03 showed increased biomass at pH 9.0-10.0 (Table 3) and a temperature of 45°C (Table 5). Cellulose was found to be the best carbon source for biomass production, followed by fructose and lactose (Table 7). Among the nitrogen sources, peptone, yeast extract and malt extract were found to produce maximum fungal biomass (Table 9). Compared to static conditions, maximum fungal biomass was produced in shake-flask culture at 100 RPM (Table 11). There was no correlation between endo-/exoglucanase production and biomass production.
DISCUSSION
The conversion of cellulosic biomass by microorganisms is a potentially sustainable approach to developing novel bioprocesses and products. Microbial cellulases have become focal biocatalysts due to their complex nature and widespread industrial applications (Henrissat et al., 1998;Ramesh Chander Kuhad et al., 2011). Cellulase research has been concentrated mostly on mesophilic fungi, but there is increasing interest in cellulase production by thermophilic fungi due to their higher growth rate and thermostable and alkali-stable enzymes. The development of rapid and reliable methods for the screening of cellulases from inhospitable thermogenic environments will allow a greater number of novel fungal cellulases to be isolated for industrial use.
The aim of the present work was to isolate and identify a high cellulase-producing thermophilic fungus from thermogenic habitats and to optimize the media components. A number of samples were processed for the isolation of thermophilic fungi, which were later screened for cellulase production. Pure fungal isolates from thermogenic habitats were screened on CMC agar plates flooded with Gram's iodine. A zone of clearance was observed for all the fungal isolates to varying degrees. Among the isolates, Humicola sp. SKESMBKU03 produced a zone of 3.5 cm in diameter. Based on the colony morphology and microscopic observation, it was identified as Humicola sp. SKESMBKU03.
The transport of various chemicals across the cell membrane, including the movement of enzymes, and their activity are strongly influenced by the pH of the medium. There are general reports showing that different nitrogen sources have different influences on extracellular enzyme production by different strains. The current findings also show that the pH of the production medium was an important factor affecting endo- and exoglucanase production (Table 2). A change in the pH value from slightly acidic (pH 5.5) to more acidic (pH 2-3) conditions is unfavorable for endo- and exoglucanase production. The effect of pH on endo- and exoglucanase production by this fungus supports the findings of Moretti et al. (2012), who reported that the optimum pH values for endoglucanase and xylanase from Aspergillus fumigatus M.7.1 were 4.5 and 4.5-5.5, respectively, and 5.0 for both enzymes from Myceliophthora thermophila M.7.7.
The results in Table 4 show that the enzyme activity decreased when the temperature increased above 50°C. These observations are in harmony with the findings of Irshad et al. (2011), who found that xylanase production by H. lanuginosus is high at 45°C and low at 50°C.
Most fungi can selectively use substrates from a mixture of different carbon sources, and the presence of preferred carbon sources affects enzyme production. Endo- and exoglucanase production was found to be influenced by the nature of the carbon source used in the production medium. Glucose was found to be a good enhancer of endo- and exoglucanase production by Humicola sp. SKESMBKU03. These results are close to the findings of Mchunu et al. (2013), who found that Thermomyces lanuginosus utilized xylose, trehalose, raffinose, D-mannose, furanose, fructose and glucose as its best carbon sources.
Cellulase production was dependent on the nature of the nitrogen source in the culture medium. Various inorganic and organic nitrogen sources were tested for their effect on cellulase production. The maximum enzyme activities were obtained with the organic nitrogen sources urea (endoglucanase) and malt extract (exoglucanase). The results are in agreement with Coutts and Smith (1976), who found that urea and NaNO3 seemed to be most suitable for Cx and C1 cellulase production by Sporotrichum thermophile.
It was noticed that an optimum level of rotation was needed for maximum enzyme production. The maximum production of both exo- and endoglucanase was achieved at 100 RPM. With a further increase in the RPM level there was a decrease in enzyme activity; this could be due to the fact that the increase in RPM resulted in coagulation of the organism into clumps and a decrease in enzyme production. Tarek et al. (2013) established that the rate of cellulase production in Sclerotium rolfsii was five times higher in shaking cultures than in static ones, while β-glucosidase production was seven times higher in shaking cultures than in static ones.
Humicola sp. SKESMBKU03 was isolated from semi-arid regions of Telangana and grows better at high temperature, indicating that the optimal growth temperature does not necessarily correlate with the optimal conditions for cellulase activity. The pH stability ranges of the endo- and exoglucanase were found to be 5.0-7.0 and 5.0-6.0, respectively. Though cellulase activity decreased at elevated temperatures, substantial activity was still maintained. The thermostability profile of the cellulase showed that the enzyme was thoroughly stable at 50-60°C for 1 h. The results are in consensus with the findings of Busk and Lange (2013), who reported that the stability of cellulase activity from Talaromyces leycettanus decreases with increasing temperature.
Fungal biomass was determined by taking the dry weight of the mycelium, and it was found that biomass production differed with the various media components. The maximum biomass was produced at pH 9-10 and a temperature of 45°C. Production media containing cellulose as the carbon source and peptone or yeast extract as the nitrogen source showed good biomass production. These findings are in line with the investigations of Granjo et al. (2007), who found that biomass is low at the beginning of the incubation period and increases as the incubation period is extended.
CONCLUSIONS
The results have enabled the formulation of an ideal media composition for maximum endo- and exoglucanase production by Humicola sp. SKESMBKU03. The high activity and stability of the cellulase enzymes between neutral and alkaline pH and at high temperature will be of use in various industrial and biotechnological applications. | 2019-01-01T04:13:38.968Z | 2015-02-09T00:00:00.000 | {
"year": 2015,
"sha1": "298cc4bd0124c9d99e3e04552ff34987015c7565",
"oa_license": null,
"oa_url": "https://www.ajol.info/index.php/star/article/download/112744/102488",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "298cc4bd0124c9d99e3e04552ff34987015c7565",
"s2fieldsofstudy": [
"Environmental Science",
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
54458462 | pes2o/s2orc | v3-fos-license | Hospitalized Pets as a Source of Carbapenem-Resistance
The massive and irrational use of antibiotics in livestock productions has fostered the occurrence and spread of resistance to “old class antimicrobials.” To cope with that phenomenon, some regulations have been already enforced in the member states of the European Union. However, a role of livestock animals in the relatively recent alerts on the rapid worldwide increase of resistance to last-choice antimicrobials as carbapenems is very unlikely. Conversely, these antimicrobials are increasingly administered in veterinary hospitals whose role in spreading bacteria or mobile genetic elements has not adequately been addressed so far. A cross-sectional study was carried out on 105 hospitalized and 100 non-hospitalized pets with the aim of measuring the prevalence of carbapenem-resistant Gram-negative bacteria (GNB) colonizing dogs and cats, either hospitalized or not hospitalized and estimating the relative odds. Stool samples were inoculated on MacConkey agar plates containing 1 mg/L imipenem which were then incubated aerobically at 37°C ± 1 for 48 h. Isolated bacteria were identified first by Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry and were confirmed by 16S rRNA sequencing. The genetic basis of resistance was investigated using PCR methods, gene or whole genome sequencing (WGS). The prevalence of pets harboring carbapenem-resistant bacteria was 11.4 and 1.0% in hospitalized and not-hospitalized animals, respectively, with an odds ratio of 12.8 (p < 0.01). One pet carried two diverse isolates. Overall, 14 gram-negative non-fermenting bacteria, specifically, one Acinetobacter radioresistens, five Acinetobacter baumannii, six Pseudomonas aeruginosa and two Stenotrophomonas maltophilia were isolated. The Acinetobacter species carried acquired carbapenemases genes encoded by blaNDM-1 and blaOXA-23. In contrast, Pseudomonas phenotypic resistance was associated with the presence of mutations in the oprD gene. Notably, inherent carbapenem-resistant isolates of S. maltophilia were also resistant to the first-line recommended chemotherapeutic trimethoprim/sulfamethoxazole. This study estimates the risk of colonization by carbapenem-resistant non-fermenting GNB in pets hospitalized in veterinary tertiary care centers and highlights their potential role in spreading resistance genes among the animal and human community. Public health authorities should consider extending surveillance systems and putting the release of critical antibiotics under more strict control in order to manage the infection/colonization of pets in veterinary settings.
INTRODUCTION
Among the large quantities of antimicrobials given to animals each year, only a fractional amount is reserved to pets (Guardabassi et al., 2004;European Medicines Agency, 2015). The latter estimate is expected to be undervalued since injectable molecules and human specialties are not adequately taken into account (European Medicines Agency, 2015). Even more interesting, it is the qualitative point of view; indeed, whereas in husbandry almost exclusively "old class antibiotics" are used; in veterinary tertiary care facilities (VTCF) for pets, also antimicrobials critically important for human medicine as carbapenems are administered in certain critical cases. In humans, Enterobacteriaceae and non-fermenting gram-negative bacteria (GNB) Pseudomonas aeruginosa, Acinetobacter spp. and Stenotrophomonas maltophilia account for the majority of the pandemic carbapenem resistance threat. The occurrence of carbapenem-resistant bacteria in veterinary medicine has already been brought to the attention of the scientific community due to the severe impact that this phenomenon may cause (Endimiani et al., 2011;Shaheen et al., 2013;Stolle et al., 2013). Resistant organisms are transmitted among pets, owners and veterinary staff and, in this way, spread to the community. (Boerlin et al., 2001;Guardabassi et al., 2004;Leite-Martins et al., 2014;Yao et al., 2016).
To date, the assessment of public health risk posed by VTCFs has not adequately been addressed and only scarce but relevant information is available (Boerlin et al., 2001;Pulss et al., 2018). Certainly, veterinary hospitals share some risk factors with human hospitals, such as the use of antibiotics, complex treatments, prolonged hospitalization of critically ill or immunocompromised patients, and the presence of intensive care units. These factors favor nosocomial infections, and the exchange of antimicrobial-resistant bacteria and genetic determinants of resistance (Poirel et al., 2008). In human hospitals, a particular concern has recently arisen, based on the evidence that the spread of resistant bacteria is not restricted to clonal outbreaks but also due to asymptomatic carriage (Cerqueira et al., 2017).
With the aim to refine the focus of antibiotic resistance in pets hospitalized in VTCFs, a cross-sectional study was thus carried out to investigate the prevalence of carbapenem-resistant GNB in pets admitted to VTCFs. To this aim, dogs and cats admitted to two different VTCFs in the Bologna area (Northern Italy) were included. In addition, privately owned pets living in the same area and not hospitalized were also investigated and compared in order to assess the relative odds of being colonized after hospitalization.
Experimental Design and Animals
This cross-sectional study was carried out within the Bologna area from November 2014 to January 2015. Feces from 105 hospitalized pets (61 dogs and 44 cats) and 100 non-hospitalized pets (73 dogs and 27 cats) were collected. Hospitalized pets from two different facilities, both of which were characterized by the presence of intensive care units and referral practices, were included consecutively after at least 48 h of hospitalization. The non-hospitalized pets were from 100 distinct owners, had never been hospitalized and had not been given antibiotics for at least 2 months; they will hereafter be defined as "general population." All the private owners of the animals considered in this study signed an informed consent form to authorize the rectal swab sampling.
Isolation and Identification of Enteric Bacteria
Rectal swabs were used for collecting the fecal material from the pets and were immediately placed in the transport medium Cary-Blair Medium (Oxoid). Within 24 h, the swabs were soaked in 1 mL of sterile saline, vortexed, and 0.1 mL was inoculated on plates of MacConkey Agar (Oxoid) containing 1 mg/L imipenem (Sigma-Aldrich). The plates were incubated aerobically at 37 • C ± 1 for 48 h.
All the different morphotypes of colonies grown in the same plate were subcultivated to obtain pure culture. Isolates were identified firstly by Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (bioMérieux) and successively confirmed by partial sequencing of the 16S rRNA gene (Brosius et al., 1978).
Antimicrobial Susceptibility
For all isolates, the minimum inhibitory concentrations (MICs) of imipenem and meropenem were determined using E-test strips (bioMérieux) following the manufacturer's instructions. P. aeruginosa, S. maltophilia, and Acinetobacter spp. were considered resistant to imipenem and meropenem at values ≥ 8 mg/L as suggested by the EUCAST (2016). The MIC of colistin was determined by broth microdilution using a breakpoint of > 2 mg/L (EUCAST, 22 March 2016). Disk diffusion was used for testing all other antibiotics (Supplementary Material) and interpreted according to the Clinical and Laboratory Standards Institute standard (CLSI 2014 M100-S24).
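A small sketch of the breakpoint interpretation applied here, using the carbapenem (resistant at MIC ≥ 8 mg/L) and colistin (resistant at MIC > 2 mg/L) cut-offs stated above; the function itself and its labels are illustrative.

```python
def interpret_mic(antibiotic, mic_mg_per_l):
    """Classify an isolate against the breakpoints used in this study."""
    breakpoints = {"imipenem": ("ge", 8.0),    # resistant at MIC >= 8 mg/L
                   "meropenem": ("ge", 8.0),
                   "colistin": ("gt", 2.0)}    # resistant at MIC > 2 mg/L
    mode, cutoff = breakpoints[antibiotic.lower()]
    resistant = mic_mg_per_l >= cutoff if mode == "ge" else mic_mg_per_l > cutoff
    return "resistant" if resistant else "susceptible"

# e.g. an isolate with imipenem MIC 16 mg/L and colistin MIC 1 mg/L
print(interpret_mic("imipenem", 16), interpret_mic("colistin", 1))
```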
Molecular Characterization of Antimicrobial Resistance
The isolates were screened for the presence of the following genes coding for carbapenem-hydrolyzing enzymes: bla VIM , bla NDM−1 , bla IMP , bla KPC , and bla OXA−48−like using a multiplex polymerase chain reaction (PCR) assay as previously described (Doyle et al., 2012). In addition, a real-time PCR was used for detecting bla OXA−23 (Supplementary Material) followed by direct Sanger sequencing of the amplicons as a confirmatory assay. The presence of AmpC was carried out using a previously described multiplex PCR (Pérez-Pérez and Hanson, 2002) while extended-spectrum β-lactamases (ESBL) genes were detected using a multiplex PCR with subsequent melt curve analysis suitable to identify bla SHV , bla TEM , and bla CTX−M gene targets (Supplementary Material).
Two PCR reactions were used to confirm the presence of chromosomally encoded metallo β-lactamases (MBL) in S. maltophilia (Supplementary Material); in particular, two PCR assays were carried out targeting an internal sequence of the L1 gene and a sequence located within the region bridging the transcription factor encoding gene ampR and the inducible β-lactamase L2 gene (Gould et al., 2006;Okazaki and Avison, 2008). Furthermore, trimethoprim/sulfamethoxazole (SXT) resistance was investigated using primers pairs for sul1, sul2, and dfrA genes as well as class 1, 2, and 3 integrons (Hu et al., 2011;Supplementary Material).
To investigate whether bla NDM−1 was located in transposon Tn125, a PCR was carried out at the 5′ terminus with primers fwd_ISAba125 and NDM-reverse and at the 3′ terminus of the transposon with the forward primer annealing on F_pac and rev_ISAba125 (Supplementary Material). Finally, the oprD gene of P. aeruginosa was amplified using outer primers covering almost the complete sequence of the gene, followed by sequencing of the amplicon with internal primers (Supplementary Material).
Whole Genome Sequencing
Since, to the best of our knowledge, Acinetobacter radioresistens carrying chromosomally encoded bla NDM−1 has not previously been reported, the whole genome of the isolate was sequenced to better characterize the resistance pattern. Also the five isolates of A. baumannii were fully sequenced to characterize the bla OXA−23 genetic environment, the complete resistome profile and to assess the Multi Locus Sequence Type as well as the clonal relationship. For Whole Genome Sequencing (WGS), genomic DNA was extracted from overnight broth culture in Brain Heart Infusion (Oxoid, Basingstoke, United Kingdom) using the MagAttract HMW DNA kit (Qiagen, Milan, Italy). The DNA samples were quantified with a fluorescent nucleic acid dye (Picogreen; Invitrogen, Paisley, United Kingdom), and libraries were prepared using the Nextera XT DNA Sample Preparation Kit (Illumina, Inc., San Diego, CA, United States). Sequencing was carried out on an Illumina MiSeq platform with 2 × 250 bp paired-end runs. The sequence quality of the reads was evaluated using FastQC (Andrews, 2016). The reads were assembled with SPAdes 3.7.1. The ResFinder web server 1 using a threshold of 100% identity for the genes encoding β-lactamases and 98% identity for all other genes and CARD 2 (Jia et al., 2017) using the Resistance Gene Identifier v 4.0.2 were used to search for antimicrobial resistance genes in the Acinetobacter spp. draft genomes.
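A sketch of the read-QC and assembly steps, assuming FastQC and SPAdes are installed and on the PATH; the wrapper, directory names and example file names are illustrative, and the downstream resistance-gene searches (ResFinder, CARD/RGI) were performed through the respective web services rather than from this script.

```python
import subprocess
from pathlib import Path

def qc_and_assemble(r1, r2, outdir="wgs_out"):
    """Run FastQC on the paired-end reads, then assemble them with SPAdes;
    returns the path to the assembled contigs."""
    qc_dir = Path(outdir) / "fastqc"
    asm_dir = Path(outdir) / "spades"
    qc_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(["fastqc", r1, r2, "-o", str(qc_dir)], check=True)
    subprocess.run(["spades.py", "-1", r1, "-2", r2, "-o", str(asm_dir)], check=True)
    return asm_dir / "contigs.fasta"          # default SPAdes contig output

# e.g. contigs = qc_and_assemble("isolate_R1.fastq.gz", "isolate_R2.fastq.gz")
```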
MLST
The Sequence Type of A. baumannii isolates was inferred from assembled contigs using the Multi Locus Sequence Typing application 1.8 (Assembled Genomes/Contigs) (Larsen et al., 2012).
In addition, to further analyze the genetic relatedness of the apparently clonal A. baumannii isolates, an alternative approach to MLST, wgMLST, was carried out by genome-wide gene-by-gene comparison of a dataset of 5633 genes retrieved from the whole genome assemblies using the BioNumerics 7.6 software package (Applied Maths), with both an assembly-free and an assembly-based algorithm in the BioNumerics calculation engine. The isolate 92A IMI was taken as the index isolate for comparison. The results were graphed as a dendrogram with percentage of homology (Supplementary Figure S1).
PFGE Typing
For clonality assessment, A. baumannii isolates were typed using a rapid PFGE protocol with ApaI (40 U/plug) (Promega) as a restriction enzyme according to Durmaz (Durmaz et al., 2009).
Nucleotide Accession Number
The raw sequence data of Acinetobacter radioresistens and of five Acinetobacter baumannii isolates were deposited in the GenBank database under BioProject accession PRJNA344732 and PRJNA437120, respectively.
RESULTS
Thirteen out of 205 pets carried carbapenem-resistant bacteria with a prevalence estimate of 6.3% [95% confidence interval (CI): 3.4-10.6%]. Of these, 12 (8 dogs and 4 cats) were hospitalized pets [prevalence estimate 11.4% (95% CI: 6.0-19.1%)] and 1 dog was from the general population [prevalence estimate 1.0% (95% CI: 0.0-5.4%)]. The odds ratio of the hospitalized pets was 12.8 (95% CI: 1.6-100.2; p < 0.01). In one hospitalized cat, two different carbapenem-resistant bacteria were isolated; therefore, 14 isolates were found in total. They included one Acinetobacter radioresistens, five A. baumannii, six P. aeruginosa, and two S. maltophilia (Supplementary Table S1). The MICs of imipenem for the Acinetobacter species ranged from 16 mg/L to more than 32 mg/L, for the P. aeruginosa they were 16 mg/L and for the S. maltophilia they were > 32 mg/L ( Table 1). All bacteria showed multiple resistance, however, in all cases, the bacteria were susceptible to colistin (MIC < 2 mg/L). Carbapenemase resistance genes were detected in all isolates except P. aeruginosa. A. radioresistens carried both bla NDM−1 and bla OXA−23. The bla NDM−1 gene was chromosomally located and organized in a composite transposon encompassing bla NDM−1 , bleMBL, trpF, tat, cutA1, groES, groEL, insE, and pac genes bracketed between a pair of ISAba125. The presence of a mobile element with IS elements was further confirmed by a specific PCR assay designed to investigate the genetic environment of bla NDM−1 gene (Figure 1).
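The prevalence and odds-ratio figures above follow directly from the raw counts (12/105 hospitalized vs. 1/100 non-hospitalized carriers); a sketch of the calculation is given below, with Fisher's exact test used as a stand-in since the significance test behind p < 0.01 is not specified here.

```python
from scipy.stats import fisher_exact

hosp_pos, hosp_n = 12, 105      # hospitalized pets carrying carbapenem-resistant GNB
comm_pos, comm_n = 1, 100       # general-population pets

prev_hosp = hosp_pos / hosp_n   # ~11.4%
prev_comm = comm_pos / comm_n   # 1.0%

# 2x2 table: rows = hospitalized / not hospitalized, columns = carrier / non-carrier
table = [[hosp_pos, hosp_n - hosp_pos],
         [comm_pos, comm_n - comm_pos]]
odds_ratio, p_value = fisher_exact(table)

print(f"prevalence: {prev_hosp:.1%} vs {prev_comm:.1%}")
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.4f}")
```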
The A. radioresistens whole genome was sequenced, and the reads showed a coverage of 293X and a guanine-cytosine (GC) content of 42%. The assembled genome encountered 81 contigs with N50 of 113445 and mean contig size of 38028.383 bp. The genome length was 3.080 Mbp.
All the A. baumannii strains carried both intrinsic bla OXA−66 and acquired bla OXA−23 Class D Oxacillinases. However, while bla OXA−23 was embedded in a transposon bracketed within two ISAba1 insertion sequence which provide a strong promoter leading to a sustained expression of oxacillinase hydrolyzing enzyme, the bla OXA−66 lacks an upstream IS sequence (Figure 2).
Overall, the antimicrobial resistance profile of the five A. baumannii isolates showed an extensively drug-resistant (XDR) phenotype, i.e., non-susceptibility to at least one agent in all but two or fewer antimicrobial categories (Supplementary Table S2; Magiorakos et al., 2012). The comprehensive resistance determinants are enlisted in Supplementary Table S3.
The two S. maltophilia isolates carried inducible chromosomal encoded L1 metallo-β-lactamase with carbapenem hydrolyzing activity and its regulatory element ampR (Gould et al., 2006;Okazaki and Avison, 2008). Notably, both S. maltophilia isolates were resistant to SXT due to the presence of both sul1 and sul2 genes associated with class 1 integron mobile element.
Finally, carbapenem-resistant P. aeruginosa were negative to carbapenemase genes but carried either bla SHV , and/or bla TEM , and/or bla CTX−M β-lactamase associated with an oprD gene with truncating mutations in 4 out of 6 of these isolates (Tables 1, 2).
All A. baumannii strains showed the same macrorestriction profile by PFGE (Supplementary Figure S2). All isolates were assigned to ST2 (Pasteur's scheme) or ST451 (Oxford's scheme). Using the complete panel of the coded alleles a UPGMA tree was calculated in Bionumerics 7.6 software (Supplementary Figure S1). The wgMLST showed that all isolates except one were closely related with homology > 99.8%. (<6 diverse allele). The only isolate 108A IMI seemed more dissimilar with an homology of 98.8%. The most closely related isolates were sampled on the same day (87A IMI and 92A IMI) or 2 months apart (115A IMI and 213A IMI) (Supplementary Figure S1). FIGURE 1 | Acinetobacter radioresistens bla NDM−1 genetic environment. The schematic presentation of the A. radioresistens bla NDM−1 transposon Tn125 encompassing bla NDM−1 , bleMBL, trpF, tat, cutA1, groES, groEL, insE, and pac genes bracketed between a pair of ISAba125 is drawn above. Red rectangles indicate the position of the amplicons.
DISCUSSION
The findings of this study point to Acinetobacter spp. as the main emerging threat to public health associated with hospitalization in VTCFs. Notably, we identified, for the first time to the best of our knowledge, a carbapenem-resistant A. radioresistens isolate carrying a bla NDM−1 gene within a Tn125 mobile element. The same composite Tn125 has already been reported in other Acinetobacter species (Pfeifer et al., 2011;Bonnin et al., 2012;Dortet et al., 2012;Fu et al., 2012;Chatterjee et al., 2016) but not yet in A. radioresistens. The presence of a Class B MBL in a mobile element is a concern since it could be readily transferred to other bacteria residing in the intestine, such as Enterobacteriaceae. Its potential to serve as a reservoir for worldwide dissemination of bla NDM should be given outstanding attention and should not be undervalued in view of the fact that it was isolated in a veterinary, and not a human, tertiary care setting. Conversely, A. radioresistens was found to carry bla OXA−23 without ISAba1 sequences and, as a consequence, the hydrolyzing oxacillinase was not expressed at a high level. Overall, A. radioresistens was multidrug resistant, being non-susceptible to quinolones, tetracycline and trimethoprim/sulfamethoxazole, though it remained susceptible to amikacin and gentamicin, which are not affected by the aminoglycoside O-phosphotransferases strA and strB, as well as to ampicillin/sulbactam and colistin.
Noteworthy, the resistome of the five A. baumannii isolates showed an XDR phenotype (Supplementary Table S2) including non-susceptibility to quinolones, gentamicin, amikacin and sulbactam with only colistin as putative effective treatment. It is worth mentioning that also 16S rRNA methylase gene armA recently reported in nosocomial outbreaks in Italy and responsible for resistance to aminoglycosides including gentamicin and amikacin, was found (Brigante et al., 2012). For these features, the A. baumannii isolates represent an actual hazard retaining high epidemic potential.
A. baumannii is reported as a frequent cause of nosocomial infections with increased mortality worldwide (Higgins et al., 2010). Very recently, single cases and small case series of clinical isolates of carbapenem resistant A. baumannii of different sequence types and carrying bla OXA−23 were reported (Pomba et al., 2014;Ewers et al., 2016Ewers et al., , 2017Lupo et al., 2017). Those authors focused the attention on companion animals as potential public health hazard. Also a remarkably high rate of asymptomatic pet carriers of carbapenem-resistant A. baumannii ST25 (2.7%) has been reported in France and pointed out pets as possible reservoir of community acquired infections (Hérivaux et al., 2016).
All the A. baumannii isolates from VTCFs in our study belong to ST2. In Italy, besides the endemic so-called "Italian strain" ST78, ST2 strains belonging to International Clonal lineage II have progressively become dominant as a cause of epidemic outbreaks in hospitals and are even replacing the ST78 strains. Both strains share a great ability to adhere to cells, to invade and survive within pneumocytes and to form biofilm on abiotic surfaces (Ambrosi et al., 2017); hence, the replacement of ST78 may be associated with the acquisition by ST2 strains of a highly expressed bla OXA−23 oxacillinase (D'Arezzo et al., 2011;Brigante et al., 2012;Mezzatesta et al., 2012). Indeed, in recent years, nosocomial outbreaks in Italy have almost all been ascribed to ST2 bla OXA−23 -producing strains (Carretto et al., 2011;Agodi et al., 2014;Principe et al., 2014;Perilli et al., 2015;Piana et al., 2015). Bla OXA−23 -producing strains were not reported in a survey which had investigated nosocomial A. baumannii isolated in 2007 in the Bologna area, while they were reported in another survey carried out later, in 2011 (Carretto et al., 2011;Principe et al., 2014).
Within-hospital selective pressure clearly plays a role in favoring the emergence of MDR, but there are also inter-hospital interconnection modalities (i.e., patient transfers) which explain the simultaneous outbreaks of related resistant A. baumannii strains (Karkada et al., 2011). How the VTCFs/ICUs fit into this model is far from understood, but the relatedness of strains isolated in veterinary and human medical settings in the same geographical area is worth examining, as is the role of the community.
When investigating clonal relatedness, all A. baumannii isolates showed the same PFGE pattern. wgMLST also confirmed the high relatedness of at least four of the five isolates, whereas one isolate showed less similarity; however, interpreting the genomic variability is challenging, since no threshold guidelines can be applied and a case-by-case interpretation is still used in this matter (Halachev et al., 2014;Higgins et al., 2017;Shaheen et al., 2018). Since this study did not deal with an epidemic outbreak, the A. baumannii isolates found during the survey may represent the genetic evolution over time of what was originally the same clone, which colonized the medical environment thanks to its ability to grow on abiotic surfaces and accumulated genetic variations making it diverge from the original clone. Eventually, knowledge of the rate of mutation accumulation in A. baumannii would allow the precise time of colonization to be inferred.
Unlike Acinetobacter isolates, P. aeruginosa-associated carbapenem resistance is linked to the loss of oprD (Lee and Ko, 2012;Ocampo-Sosa et al., 2012;Kim et al., 2016) and it is associated with multidrug resistance. In most of our cases, the loss of oprD was caused by different mutations within the gene causing a premature stop codon as a consequence of a large insertion, a frameshift or a nonsense mutation. The loss of oprD was also associated with either ESBL (bla SHV , bla TEM or bla CTX−M ) or AmpC resistance genes. The presence of different mutations and β-lactamase genes indirectly demonstrates the lack of clonal relatedness at least between the P. aeruginosa isolates. The absence of clonal spread in apparent pseudooutbreaks is very frequent for Gram-negative non-fermenting bacteria and it is correlated to their ability to colonize the medical device and surface (Valdezate et al., 2004) and produce multispecies biofilms (Tan et al., 2017).
Together with Acinetobacter spp. and P. aeruginosa, another carbapenem-resistant non-fermenting GNB, namely S. maltophilia, was found. S. maltophilia is not an inherently primary pathogen but rather an environmental bacterium intrinsically resistant to β-lactams, including carbapenems, due to an inducible β-lactamase (Gould et al., 2006;Okazaki and Avison, 2008). It should be noted that, in human nosocomial infections caused by S. maltophilia, the antibiotic of choice for treatment is SXT (Madi et al., 2016). Our isolates actually showed complete resistance to SXT. Both sul1, associated with a class 1 integron, and sul2 genes were found in the isolates. Plasmid-mediated sul1 and sul2 are responsible for the worldwide emergence of resistance to sulfonamides reported since 2007 (Toleman et al., 2007). In 2009, a comprehensive meta-analysis carried out on 3872 strains found 4% of strains to be SXT-resistant (Looney et al., 2009). As far as we know, the only report from southern Europe, namely from Greece, showed that the majority of S. maltophilia clinical isolates were sensitive to SXT (Samonis et al., 2012), while most of the concerns were from the Middle East (Turkey, Iran) and Far East (Korea and China) (Liaw et al., 2010). The origin of this resistance at the genetic level was ascribed to sul genes. In particular, the presence of sul2 is reported in isolates showing very high resistance to SXT (Toleman et al., 2007). However, sul genes alone do not confer trimethoprim resistance, although they are frequently associated with dfrA genes in the gene cassettes of integrons. In our isolates, dfrA genes were not found and the high resistance to SXT was not clarified. Furthermore, class 1 integrons may carry genes coding for tolerance to common disinfectants and facilitate the horizontal spread of resistance to other bacterial species. Overall, these features make the opportunistic pathogens isolated in this study an insidious threat.
On a more general note, the findings of the study show that hospitalization in VTCFs providing cures to pets represents a significant risk factor of colonization with carbapenem resistant GNB embodying the most significant current hazard to public health in VTCFs. These findings are even more worrying if compared with those found in human hospitals in very similar cross-sectional epidemiological survey during non-epidemic outbreaks. The carbapenemase producing GNB was 0.4% compared with the 6% of this study and the overall prevalence of carbapenem-resistant GNB was 4.8% compared with the 11.4% of this study (Pantel et al., 2015). The other side of the coin, thankfully, is that the more pathogenic Enterobacteriaceae carrying carbapenemase resistance genes reported to cause epidemic cases around the world and also recently found in Germany in many different veterinary hospitals (Pulss et al., 2018) were not found in these veterinary healthcare settings. It should be noted that KPC producers (mainly Enterobacteriaceae) may exhibit susceptibility to imipenem and resistance to ertapenem causing an underestimation of their prevalence using MacConkey supplemented with imipenem (1 mg/L) as carried out in this study. However, KPC producers were not shown either using ceftazidime (1 mg/L) for ESBL bacteria nor using multiplex PCR directly on the DNA purified from stools for molecular screening (data not shown).
Non-fermenter GNB may cause nosocomial infections, particularly in immunocompromised critically ill patients, raising the mortality rate (Siempos et al., 2010;Chiu et al., 2015). Furthermore, Acinetobacter spp. and S. maltophilia, but also P. aeruginosa, show great ability to adapt to any environment, including hospitals; the clonal relatedness between the isolates of A. baumannii carried by pets hospitalized in one VTCF over a short period of time demonstrated the colonization of the environment. Non-fermenter GNB are also readily carried on the skin of colonized animals thus enabling the easy transmission to veterinarians, personnel providing care and owners as well as other hospitalized pets. These bacteria represent a rising challenge for healthcare management because they are frequently multidrug resistant as a consequence of a multitude of mechanisms. In addition, non-fermenter GNB may be a source of genes which are carbapenemase resistant for Enterobacteriaceae (Toleman et al., 2012;Bonnin et al., 2014). It should be emphasized that non-fermenter GNB were not isolated in clinical samples, but they colonized the intestines of the pets; no outbreaks sustained by nonfermenter GNB were reported in either of the investigated VTCFs during the period of study and, hence, no awareness of the risk was evident and no specific measures of control were activated.
In VTCFs, concern about carbapenem resistance should be even more compelling than in human hospitals because of the extensive, systematic use of broad-spectrum antibiotics, the relative lack of antimicrobial susceptibility-based therapy, the objective difficulties in applying and maintaining hygienic practices, and the very limited use of systematic monitoring programs. These findings and considerations raise concerns about the risk for veterinary personnel, veterinary students or trainees, and pet owners, and highlight the possible role that veterinary facilities play in spreading carbapenemase genes in the environment, including the human community.
In the VTCFs investigated in this study, broad-spectrum β-lactam drugs are used as first-line therapy in hospitalized dogs.
The consequent high environmental pressure exerted by such antibiotics represents a risk factor for the co-selection of carbapenem-resistant bacteria (Maseda et al., 2017). In fact, almost all the carbapenem-resistant isolates are either sulbactam resistant or AmpC or ESBL producers (data not shown); that is, the "restricted use" of a specific class of antibiotics is not per se sufficient to avoid the emergence of the corresponding resistance. Since it is evident that carbapenem-resistant gram-negative non-fermenters represent an actual threat in VTCFs, it appears reasonable to extend the public health surveillance system to VTCFs as well, and to allow the traceable and targeted use of carbapenems or other critical antibiotics, such as colistin, to control infection or to eradicate the carriage of multidrug-resistant bacteria in pets. Appropriate antibiotic stewardship, together with active and continuous surveillance of resistance, should be implemented in VTCFs that seek accreditation for the use of carbapenems or last-choice antimicrobials.
AUTHOR CONTRIBUTIONS
FG and RZ provided substantial contributions to the conception and design of the work and the acquisition, analysis, and interpretation of data and drafting the work. FP and DM performed the acquisition, analysis, and interpretation of data and drafting the work. VS and AP performed the acquisition, analysis, and interpretation of data and critical revising of the work for important intellectual content. MT, NR, EZ, DT, GP, and JP performed the acquisition and analysis of data and critical revising of the work for important intellectual content. | 2018-12-06T14:04:59.290Z | 2018-12-06T00:00:00.000 | {
"year": 2018,
"sha1": "75a021da0303b062c104ede3f94558c42ffbcba2",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2018.02872/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "75a021da0303b062c104ede3f94558c42ffbcba2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
195833273 | pes2o/s2orc | v3-fos-license | What graph neural networks cannot learn: depth vs width
This paper studies the expressive power of graph neural networks falling within the message-passing framework (GNNmp). Two results are presented. First, GNNmp are shown to be Turing universal under sufficient conditions on their depth, width, node attributes, and layer expressiveness. Second, it is discovered that GNNmp can lose a significant portion of their power when their depth and width are restricted. The proposed impossibility statements stem from a new technique that enables the repurposing of seminal results from distributed computing and leads to lower bounds for an array of decision, optimization, and estimation problems involving graphs. Strikingly, several of these problems are deemed impossible unless the product of a GNNmp's depth and width exceeds a polynomial of the graph size; this dependence remains significant even for tasks that appear simple or when considering approximation.
INTRODUCTION
A fundamental question in machine learning is to determine what a model can and cannot learn. In deep learning, there has been significant research effort in establishing positive results for feedforward (Cybenko, 1989;Hornik et al., 1989;Lu et al., 2017) and recurrent neural networks (Neto et al., 1997), as well as more recently for Transformers and Neural GPUs (Pérez et al., 2019). We have also seen the first results studying the universality of graph neural networks, i.e., neural networks that take graphs as input. Maron et al. (2019b) derived a universal approximation theorem over invariant functions targeted towards deep networks whose layers are linear and equivariant to permutation of their input. Universality was also shown for equivariant functions and a particular shallow architecture (Keriven & Peyré, 2019).
Universality statements allow us to grasp the expressive power of models in the limit. In theory, given enough data and the right training procedure, a universal network will be able to solve any task that it is presented with. Nevertheless, the insight brought by such results can also be limited. Knowing that a sufficiently large network can be used to solve any problem does not reveal much about how neural networks should be designed in practice. It also cannot guarantee that said network will be able to solve a given task given a particular training procedure, such as stochastic gradient descent.
On the other hand, it might be easier to obtain insights about models by studying their limitations. After all, the knowledge of what cannot be computed (and thus learned) by a network of specific characteristics applies independently of the training procedure. Further, by helping us comprehend the difficulty of a task in relation to a model, impossibility results can yield practical advice on how to select model hyperparameters. Take, for instance, the problem of graph classification. Training a graph classifier entails identifying what constitutes a class, i.e., finding properties shared by graphs in one class but not the other, and then deciding whether new graphs abide by said learned properties. However, if the aforementioned decision problem is shown to be impossible for a graph neural network of a certain depth, then we can be certain that the same network will not learn how to classify a sufficiently diverse test set correctly, independently of which learning algorithm is employed. We should, therefore, focus on networks deeper than the lower bound when performing experiments.

Table 1: Summary of main results (problem: lower bound).
• cycle detection (odd): dw = Ω(n/log n)
• cycle detection (even): dw = Ω(√n/log n)
• subgraph verification*: d√w = Ω(√n/log n)
• min. spanning tree: d√w = Ω(√n/log n)
• min. cut: d√w = Ω(√n/log n)
• diam. computation: dw = Ω(n/log n)
• shortest path: d√w = Ω(√n/log n)
• max. indep. set: dw = Ω(n²/log² n) for w = O(1)
• min. vertex cover: dw = Ω(n²/log² n) for w = O(1)
• perfect coloring: dw = Ω(n²/log² n) for w = O(1)
• girth 2-approx.: dw = Ω(√n/log n)
• diam. 3/2-approx.: dw = Ω(√n/log n)

Subgraph verification* entails verifying one of the following predicates for a given subgraph: is connected, contains a cycle, forms a spanning tree, is bipartite, is a cut, is an s-t cut. All problems are defined in Appendix A.
MAIN RESULTS
This paper studies the expressive power of message-passing graph neural networks (GNNmp) (Gilmer et al., 2017). This model encompasses several state-of-the-art networks, including GCN (Kipf & Welling, 2016), gated graph neural networks (Li et al., 2015), molecular fingerprints (Duvenaud et al., 2015), interaction networks (Battaglia et al., 2016), and molecular convolutions (Kearnes et al., 2016), among many others. Networks using a global state (Battaglia et al., 2018) or looking at multiple hops per layer (Morris et al., 2019; Liao et al., 2019; Isufi et al., 2020) are not directly GNNmp, but they can often be re-expressed as such. The provided contributions are two-fold:

I. What GNNmp can compute. Section 3 derives sufficient conditions such that a GNNmp can compute any function on its input that is computable by a Turing machine. This result complements recent universality results (Maron et al., 2019b; Keriven & Peyré, 2019) that considered approximation (rather than computability) over specific classes of functions (invariant and equivariant) and particular architectures. The claim follows in a straightforward manner by establishing the equivalence of GNNmp with LOCAL (Angluin, 1980; Linial, 1992; Naor & Stockmeyer, 1993), a classical model in distributed computing that is itself Turing universal. In a nutshell, GNNmp are shown to be universal if four strong conditions are met: there are enough layers of sufficient expressiveness and width, and nodes can uniquely distinguish each other. Since Turing universality is a strictly stronger property than universal approximation, Chen et al. (2019)'s argument further implies that a Turing universal GNNmp can solve the graph isomorphism problem (a sufficiently deep and wide network can compute the isomorphism class of its input).
II. What GNNmp cannot compute (and thus learn). Section 4 analyses the implications of restricting the depth d and width w of GNNmp that do not use a readout function. Specifically, it is proven that GNNmp lose a significant portion of their power when the product dw, which I call capacity, is restricted. The analysis relies on a new technique that enables repurposing impossibility results from the context of distributed computing to the graph neural network setting. Specifically, lower bounds for the following problems are presented: (i) detecting whether a graph contains a cycle of specific length; (ii) verifying whether a given subgraph is connected, contains a cycle, is a spanning tree, is bipartite, is a simple path, or corresponds to a cut or Hamiltonian cycle; (iii) approximating the shortest path between two nodes, the minimum cut, and the minimum spanning tree; (iv) finding a maximum independent set, a minimum vertex cover, or a perfect coloring; (v) computing or approximating the diameter and girth. The bounds are summarized in Table 1 and the problem definitions can be found in Appendix A. Section 5 presents some empirical evidence of the theory.
Though formulated in a graph-theoretic sense, the above problems are intimately linked to machine learning on graphs. Detection, verification, and computation problems are relevant to classification: knowing what properties of a graph a GNNmp cannot see also informs us about which features of a graph it can extract. Further, there have been attempts to use GNNmp to devise heuristics for graph-based optimization problems (Khalil et al., 2017; Battaglia et al., 2018; Li et al., 2018; Joshi et al., 2019; Bianchi et al., 2019), such as the ones discussed above. The presented results can then be taken as a worst-case analysis for the efficiency of GNNmp in such endeavors.
DISCUSSION
The results of this paper carry several intriguing implications. To start with, it is shown that the capacity dw of a GNNmp plays a significant role in determining its power. Solving many problems is shown to be impossible unless dw = Ω(n^δ), where δ ∈ [1/2, 2], n is the number of nodes of the graph, and f(n) = Ω(g(n)) is interpreted as f(n) being, up to logarithmic factors, larger than g(n) as n grows. This reveals a direct trade-off between the depth and width of a graph neural network.
Counter-intuitively, the dependence on n can be significant even if the problem appears local in nature or one only looks for approximate solutions. For example, detecting whether G contains a short cycle of odd length cannot be done unless dw = Ω(n). Approximation helps, but only to a limited extent; computing the graph diameter requires dw = Ω(n) and this reduces to dw = Ω(√n) for any 3/2-factor approximation. Further, it is impossible to approximate within any constant factor the shortest path, the minimum cut, and the minimum spanning tree, all three of which have polynomial-time solutions, unless d√w = Ω(√n). Finally, for truly hard problems, the capacity may even need to be super-linear in n. Specifically, it is shown that, even if the layers of the GNNmp are allowed to take exponential time, solving certain NP-hard problems necessitates d = Ω(n²) depth for any constant-width network.
Relation to previous impossibility results. In contrast to universality (Maron et al., 2019b; Keriven & Peyré, 2019), the limitations of GNNmp have been much less studied. In particular, the bounds presented here are the first impossibility results that (i) explicitly connect GNNmp properties (depth and width) with graph properties and that (ii) go beyond isomorphism by addressing decision, optimization, and estimation graph problems. Three main directions of related work can be distinguished. First, Dehmamy et al. (2019) bounded the ability of graph convolutional networks (i.e., GNNmp without messaging functions) to compute specific polynomial functions of the adjacency matrix, referred to as graph moments by the authors. Second, Xu et al. (2018) and Morris et al. (2019) established the equivalence of anonymous GNNmp (those that do not rely on node identification) to the Weisfeiler-Lehman (WL) graph isomorphism test. The equivalence implies that anonymous networks are blind to the many graph properties that WL cannot see: e.g., any two regular graphs with the same number of nodes are identical from the perspective of the WL test (Arvind et al., 2015; Kiefer et al., 2015). Third, in parallel to this work, Sato et al. (2019) utilized a connection to LOCAL to derive impossibility results for the ability of a class of novel partially-labeled GNNmp to find good approximations for three NP-hard optimization problems. Almost all of the above negative results occur because nodes are unable to distinguish between neighbors at multiple hops (see Appendix D). With discriminative attributes, GNNmp become significantly more powerful (without necessarily sacrificing permutation invariance/equivariance). Still, as this work shows, even in this setting certain problems remain impossible when the depth and width of the GNNmp are restricted. For instance, though cycles can be detected (something impossible in anonymous networks (Xu et al., 2018; Morris et al., 2019)), even for short cycles one now needs dw = Ω(n). Further, in contrast to Sato et al. (2019), an approximation ratio below 2 for the minimum vertex cover is not impossible, but necessitates dw = Ω(n²).
Limitations. First, all lower bounds are of a worst-case nature: a problem is deemed impossible if there exists a graph for which it cannot be solved. The discovery of non-worst-case depth-vs-width bounds remains an open problem. Second, rather than taking into account specific parametric functions, each layer is assumed to be sufficiently powerful to compute any function of its input. This strong assumption does not significantly limit the applicability of the results, simply because all lower bounds that hold with universal layers also apply to those that are limited computationally. Lastly, it will be assumed that nodes can uniquely identify each other. Node identification is compatible with permutation invariance/equivariance as long as the network output is asked to be invariant to the particular way the ids have been assigned. In the literature, one-hot encoded node ids are occasionally useful (Kipf & Welling, 2016; Berg et al., 2017). When attempting to learn functions across multiple graphs, ids should ideally be substituted by sufficiently discriminative node attributes (attributes that uniquely identify each node within each receptive field it belongs to can serve as ids). Nevertheless, similar to the unbounded computation assumption, if a problem cannot be solved by a graph neural network in the studied setting, it also cannot be solved without identifiers and discriminative attributes. Thus, the presented bounds also apply to partially and fully anonymous networks.
Notation. I consider connected graphs G = (V, E) consisting of n = |V| nodes. The edge going from v_j to v_i is written as e_{i←j}, and it is asserted that if e_{i←j} ∈ E then also e_{j←i} ∈ E. The neighborhood N_i of a node v_i ∈ V consists of all nodes v_j for which e_{i←j} ∈ E. The degree of v_i is denoted by deg_i, ∆ is the maximum degree of all nodes, and the graph diameter δ_G is the length of the longest shortest path between any two nodes. In the self-loop graph G* = (V, E*), the neighborhood set of v_i is given by N_i* = N_i ∪ {v_i}.
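As a concrete illustration of this notation (not taken from the paper; the toy graph and helper names are assumptions for the example), the following Python sketch computes neighborhoods, degrees, the maximum degree ∆, and the diameter δ_G by breadth-first search.

```python
from collections import deque

# Toy undirected graph stored as an adjacency dict; edges are listed in both directions.
E = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}

def degree(i):
    return len(E[i])

def neighborhood_with_self_loop(i):
    # N_i* in the self-loop graph G*: the node is included in its own neighborhood.
    return E[i] | {i}

def eccentricity(src):
    # Length of the longest shortest path starting at src (BFS on an unweighted graph).
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in E[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

max_degree = max(degree(i) for i in E)          # Delta
diameter = max(eccentricity(i) for i in E)      # delta_G
print(max_degree, diameter, neighborhood_with_self_loop(0))
```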
THE GRAPH NEURAL NETWORK COMPUTATIONAL MODEL
Graph neural networks are parametric and differentiable learning machines. Their input is usually an attributed graph G a = (G, (a i : v i ∈ V), (a i←j : e i←j ∈ E)), where vectors a 1 , . . . , a n encode relevant node attributes and a i←j are edge attributes, e.g., encoding edge direction.
Model 1 formalizes the graph neural network operation by placing it in the message-passing model (Gilmer et al., 2017). The computation proceeds in layers, within which a message m_{i←j} is passed along each directed edge e_{i←j} ∈ E going from v_j to v_i, and each node updates its internal representation by aggregating its state with the messages sent by its incoming neighbors v_j ∈ N_i. The network output can be either of two things: a vector x_i for each node v_i or a single vector x_G obtained by combining the representations of all nodes using a readout function. Vectors x_i/x_G could be scalars (node/graph regression), binary variables (node/graph classification), or multi-dimensional (node/graph embedding). I use the symbols GNNmp^n and GNNmp^g to distinguish between models that return a vector per node and one per graph, respectively.
Computational model 1: Message-passing graph neural network (GNNmp).

The operation of a GNNmp is primarily determined by the messaging, update, and readout functions. I assume that MSG and UP are general functions that act on intermediate node representations and node ids (the notation is overloaded such that v_i refers to both the i-th node as well as its unique id). As is common in the literature (Lu et al., 2017; Battaglia et al., 2018), these functions are instantiated by feed-forward neural networks. Thus, by the universal approximation theorem and its variants (Cybenko, 1989; Hornik et al., 1989), they can approximate any general function that maps vectors onto vectors, given sufficient depth and/or width. Function READ is useful when one needs to retrieve a representation that is invariant to the number of nodes. The function takes as input a multiset, i.e., a set with possibly repeating elements, and returns a vector. Commonly, READ is chosen to be a dimension-squashing operator, such as a sum or a histogram, followed by a feed-forward neural network (Xu et al., 2018; Seo et al., 2019).
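To make the layer structure concrete, the following minimal Python sketch mimics one GNNmp layer in the spirit of Model 1. It is an illustrative re-implementation rather than the paper's code: msg, up, and readout stand in for the learned MSG, UP, and READ networks and are plain hand-written functions here.

```python
# Minimal message-passing sketch (illustrative only); states are plain Python tuples.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}   # incoming neighbors per node
x = {i: (float(i),) for i in graph}                     # initial node states

def msg(x_j, j, i):
    # Stand-in for MSG_l: the message sent from v_j to v_i (here, forward the sender's state).
    return x_j

def up(x_i, aggregated):
    # Stand-in for UP_l: combine the node's own state with the sum of incoming messages.
    return (x_i[0] + aggregated,)

def layer(x):
    new_x = {}
    for i, neighbors in graph.items():
        # Sum-aggregate incoming messages before any local computation, as a GNNmp layer does.
        aggregated = sum(msg(x[j], j, i)[0] for j in neighbors)
        new_x[i] = up(x[i], aggregated)
    return new_x

depth = 2
for _ in range(depth):
    x = layer(x)

def readout(states):
    # Stand-in for READ: permutation-invariant map from the multiset of node states to one value.
    return sum(s[0] for s in states.values())

print(x, readout(x))
```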
Depth and width. The depth d is equal to the number of layers of the network. Larger depth means that each node has the opportunity to learn more about the rest of the graph (i.e., it has a larger receptive field). The width w of a GNNmp is equal to the largest dimension of the state x_i^(l) over all layers l and nodes v_i ∈ V. Since nodes need to be able to store their own unique ids, in the following it is assumed that each variable manipulated by the network is represented in finite precision using p = Θ(log n) bits (though this is not strictly necessary for the analysis).
SUFFICIENT CONDITIONS FOR TURING UNIVERSALITY
This section studies what graph neural networks can compute. It is demonstrated that, even without a readout function, a network is computationally universal if it has enough layers of sufficient width, nodes can uniquely distinguish each other, and the functions computed within each layer are sufficiently expressive. The derivation entails establishing that GNNmp^n is equivalent to LOCAL, a classical model used in the study of distributed algorithms that is itself Turing universal.
THE LOCAL COMPUTATIONAL MODEL
A fundamental question in theoretical computer science is determining what can and cannot be computed efficiently by a distributed algorithm. The LOCAL model, initially studied by Angluin (1980), Linial (1992), and Naor & Stockmeyer (1993), provides a common framework for analyzing the effect of local decision making. Akin to GNNmp, in LOCAL a graph plays a double role: it is both the input of the system and captures the network topology of the distributed system that solves the problem. In this spirit, the nodes of the graph are here both the machines where computation takes place and the variables of the graph-theoretic problem we wish to solve; similarly, edges model communication links between machines as well as relations between nodes. Each node v_i ∈ V is given a problem-specific local input and has to produce a local output. The input contains the necessary information that specifies the problem instance. All nodes execute the same algorithm, they are fault-free, and they are provided with unique identifiers.
A pseudo-code description is given in Model 2. Variables s_i^(l) and s_{i←j}^(l) refer, respectively, to the state of v_i in round l and to the message sent by v_j to v_i in the same round. Both are represented as strings. The computation starts simultaneously and unfolds in synchronous rounds l = 1, ..., d. Three things occur within each round: each node receives a string of unbounded size from its incoming neighbors; each node updates its internal state by performing some local computation; and each node sends a string to every one of its outgoing neighbors. Functions ALG1_l and ALG2_l are algorithms computed locally by a Turing machine running on node v_i. Before any computation is done, each node v_i is aware of its own attribute a_i as well as of all edge attributes {a_{i←j} : v_j ∈ N_i*}.
Computational model 2 LOCAL (computed distributedly by each node v i ∈ V).
In LOCAL, there are no restrictions on how much information a node can send at every round. Asserting that each message s_{i←j}^(l) is at most b bits yields the CONGEST model (Peleg, 2000).
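The synchronous round structure of LOCAL, and the b-bit restriction that turns it into CONGEST, can be simulated along the lines below. This is an illustrative sketch with trivial placeholder local algorithms, not an implementation from the distributed-computing literature.

```python
# Illustrative simulator of synchronous LOCAL rounds, with an optional CONGEST-style
# bound b on message length (characters stand in for bits here).
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

def alg1(state, incoming):
    # Placeholder local computation: append everything heard this round to the state string.
    return state + "|" + ",".join(sorted(incoming))

def alg2(state, node_id):
    # Placeholder message construction: send the current state tagged with the sender's id.
    return f"{node_id}:{state}"

def run_local(rounds, b=None):
    state = {i: f"s{i}" for i in graph}          # problem-specific local inputs
    for _ in range(rounds):
        outgoing = {i: alg2(state[i], i) for i in graph}
        if b is not None:
            # CONGEST: every message must fit within b "bits"; here we only check, not truncate.
            assert all(len(m) <= b for m in outgoing.values()), "message too long for CONGEST"
        state = {i: alg1(state[i], [outgoing[j] for j in graph[i]]) for i in graph}
    return state

print(run_local(rounds=2))
```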
TURING UNIVERSALITY
The reader might have observed that LOCAL closely resembles GNNmp^n in its structure, with only a few minor exceptions: firstly, whereas in LOCAL an algorithm A may utilize messages in any way it chooses, a GNNmp^n network N always sums received messages before any local computation. The two models also differ in the arguments of the messaging function and the choice of information representation (string versus vector). Yet, as Theorem 3.1 shows, the differences between GNNmp^n and LOCAL are inconsequential when seen from the perspective of their expressive power: the two models can compute the same functions over G_a, where G_a is the set of all attributed graphs.
This equivalence enables us to reason about the power of GNNmp^n by building on the well-studied properties of LOCAL. In particular, it is well known in distributed computing that, as long as the number of rounds d of a distributed algorithm is larger than the graph diameter δ_G, every node in LOCAL can effectively make decisions based on the entire graph (Linial, 1992). Together with Theorem 3.1, the above implies that, if computation and memory are not an issue, one may construct a GNNmp^n that effectively computes any computable function w.r.t. the graph and attributes.

Corollary 3.1. GNNmp^n can compute any Turing computable function over connected attributed graphs if the following conditions are jointly met: each node is uniquely identified; MSG_l and UP_l are Turing-complete for every layer l; the depth is at least d ≥ δ_G layers; and the width is unbounded.
Why is this result relevant? From a cursory review, it might seem that universality is an abstract result with little implication to machine learning architects. After all, the utility of a learning machine is usually determined not with regards to its expressive power but with its ability to generalize to unseen examples. Nevertheless, it can be argued that universality is an essential property of a good learning model. This is for two main reasons: First, universality guarantees that the learner does not have blind-spots in its hypothesis space. No matter how good the optimization algorithm is, how rich the dataset, and how overparameterized the network is, there will always be functions which a non universal learner cannot learn. Second, a universality result provides a glimpse on how the size of the learner's hypothesis space is affected by different design choices. For instance, Corollary 3.1 puts forth four necessary conditions for universality: the GNN n mp should be sufficiently deep and wide, nodes should be able to uniquely and consistently identify each other, and finally, the functions utilized in each layer should be sufficiently complex. The following section delves further into the importance of two of these universality conditions. It will be shown that GNN n mp lose a significant portion of their power when the depth and width conditions are relaxed.
The universality of GNNmp^g. Though a universality result could also be easily derived for networks with a readout function, the latter is not included as it deviates from how graph neural networks are meant to function: given a sufficiently powerful readout function, a GNNmp^g of depth d = 1 and O(∆) width can be used to compute any Turing computable function. The nodes should simply gather one-hop information about their neighbors; the readout function can then reconstruct the problem input based on the collective knowledge and apply any computation needed.
IMPOSSIBILITY RESULTS AS A FUNCTION OF DEPTH AND WIDTH
This section analyzes the effect of depth and width in the expressive power of GNN n mp . Specifically, I will consider problems that cannot be solved by a network of a given depth and width.
To be able to reason in terms of width, it will be useful to also enforce that the message size in LOCAL at each round is at most b bits. This model goes by the name CONGEST in the distributed computing literature (Peleg, 2000). In addition, it will be assumed that nodes do not have access to a random generator. With this in place, the following theorem shows us how to translate impossibility results from CONGEST to GNNmp^n:

Theorem 4.1. If a problem P cannot be solved in less than d rounds in CONGEST using messages of at most b bits, then P cannot be solved by a GNNmp^n of width w ≤ (b − log₂ n)/p = O(b/log n) and depth d.
The p = Θ(log n) factor corresponds to the length of the binary representation of every variable; the precision needs to depend logarithmically on n for the node ids to be unique. With this result in place, the following sections re-state several known lower bounds in terms of a GNNmp^n's depth and width.
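As a back-of-the-envelope illustration of this translation (an assumed example, not a calculation from the paper), the width bound of Theorem 4.1 can be evaluated numerically:

```python
import math

def width_upper_bound(b_bits, n, precision_bits=None):
    # Theorem 4.1: w <= (b - log2 n) / p, with p = Theta(log n) bits per variable.
    p = precision_bits if precision_bits is not None else max(1, math.ceil(math.log2(n)))
    return (b_bits - math.log2(n)) / p

# Example: a CONGEST bound proved for b = 1024-bit messages on graphs with n = 256 nodes
# transfers to GNNmp^n of width at most (1024 - 8) / 8 = 127.
print(width_upper_bound(1024, 256))
```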
IMPOSSIBILITY RESULTS FOR DECISION PROBLEMS
I first consider problems where one needs to decide whether a given graph satisfies a certain property (Feuilloley & Fraigniaud, 2016). Concretely, given a decision problem P and a graph G, the GNN n mp should output x i ∈ {true, false} for all v i ∈ V. The network then accepts the premise if the logical conjunction of {x 1 , . . . , x n } is true and rejects it otherwise. Such problems are intimately connected to graph classification: classifying a graph entails identifying what constitutes a class from some training set and using said learned definition to decide the label of graphs sampled from the test set. Instead, I will suppose that the class definition is available to the classifier and I will focus on the corresponding decision problem. As a consequence, every lower bound presented below for a decision problem must also be respected by a GNN n mp classifier that attains zero error on the corresponding graph classification problem.
Subgraph detection. In this type of problem, the objective is to decide whether G contains a subgraph belonging to a given family. I focus specifically on detecting whether G contains a cycle C_k, i.e., a simple undirected graph of k nodes each having exactly two neighbors. As the following result shows, even with ids, cycle detection remains relatively hard:

Corollary 4.1 (Repurposed from Drucker et al. (2014) and Korhonen & Rybicki (2018)). There exists a graph G on which every GNNmp^n of width w requires depth at least d = Ω(√n/(w log n)) and d = Ω(n/(w log n)) to detect if G contains a cycle C_k for even k ≥ 4 and odd k ≥ 5, respectively.
Whereas an anonymous GNN mp cannot detect cycles (e.g., distinguish between two C 3 vs one C 6 (Maron et al., 2019a)), it seems that with ids the product of depth and width should exhibit an (at least) linear dependence on n. The intuition behind this bound can be found in Appendix C and empirical evidence in support of the theory are presented in Section 5.
Subgraph verification. Suppose that the network is given a subgraph H = (V_H, E_H) of G in its input. This could, for instance, be achieved by selecting the attributes of each node and edge to be a one-hot encoding of their membership in V_H and E_H, respectively (a short illustrative sketch of such an encoding is given after Corollary 4.2 below). The question considered is whether the neural network can verify a certain property of H. More concretely, does a graph neural network exist that can successfully verify H as belonging to a specific family of graphs w.r.t. G? In contrast to the standard decision paradigm, here every node should reach the same decision, either accepting or rejecting the hypothesis. The following result is a direct consequence of the seminal work by Sarma et al. (2012):

Corollary 4.2 (Repurposed from Sarma et al. (2012)). There exists a graph G on which every GNNmp^n of width w requires depth at least d = Ω(√(n/(w log² n)) + δ_G) to verify if some subgraph H of G is connected, contains a cycle, forms a spanning tree of G, is bipartite, is a cut of G, or is an s-t cut of G. Furthermore, the depth should be at least d = Ω((n/(w log n))^γ + δ_G) with γ = 1/2 − 1/(2(δ_G − 1)) to verify if H is a Hamiltonian cycle or a simple path.
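As referenced above, the membership encoding of a candidate subgraph H can be written down directly; the following sketch is illustrative only, with a made-up toy graph.

```python
# Encode a candidate subgraph H of G as one-hot membership attributes (illustrative).
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]            # edges of G (one direction listed per edge)

V_H = {0, 1, 2}                                  # nodes of the candidate subgraph H
E_H = {(0, 1), (1, 2)}                           # edges of H

# Node attribute = 1 if the node belongs to H, 0 otherwise; likewise for edges (either direction).
node_attr = {v: [1.0] if v in V_H else [0.0] for v in V}
edge_attr = {e: [1.0] if e in E_H or (e[1], e[0]) in E_H else [0.0] for e in E}

print(node_attr)
print(edge_attr)
```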
Therefore, even if one knows where to look in G, verifying whether a given subgraph meets a given property can be non-trivial, and this holds for several standard graph-theoretic properties. For instance, if we constrain ourselves to networks of constant width, detecting whether a subgraph is connected can, up to logarithmic factors, require Ω(√n) depth in the worst case.
IMPOSSIBILITY RESULTS FOR OPTIMIZATION PROBLEMS
I turn my attention to problems involving the exact or approximate optimization of some graph-theoretic objective function. From a machine learning perspective, the considered problems can be interpreted as node/edge classification problems: each node/edge is tasked with deciding whether it belongs to the optimal set or not. Take, for instance, the maximum independent set, where one needs to find the node set of largest cardinality such that no two of its nodes are adjacent. Given only information identifying nodes, a GNNmp^n will be asked to classify each node as being part of the maximum independent set or not.
Polynomial-time problems. Let me first consider three problems that possess known polynomial-time solutions. To make things easier for the GNNmp^n, I relax the objective and ask for an approximate solution rather than the optimal one. An algorithm (or neural network) is said to attain an α-approximation if it produces a feasible output whose utility is within a factor α of the optimal. Let OPT be the utility of the optimal solution and ALG that of the α-approximation algorithm. Depending on whether the problem entails minimization or maximization, the ratio ALG/OPT is at most α or at least 1/α, respectively.
According to the following corollary, it is non-trivial to find good approximate solutions:

Corollary 4.3 (Repurposed from Sarma et al. (2012) and Ghaffari & Kuhn (2013)). There exist graphs G and G′ of diameter δ_G = Θ(log n) and δ_G′ = O(1) on which every GNNmp^n of width w requires depth at least d = Ω(√(n/(w log² n))) and d = Ω((n/(w log n))^γ) with γ = 1/2 − 1/(2(δ_G − 1)), respectively, to approximate within any constant factor: the minimum cut problem, the shortest s-t path problem, or the minimum spanning tree problem.
Thus, even for simple problems (complexity-wise), in the worst case a constant-width GNNmp^n should be almost Ω(√n) deep even if the graph diameter is exponentially smaller than n.
NP-hard problems. So what about truly hard problems? Clearly, one cannot expect a GNNmp to solve an NP-hard problem in polynomial time. However, it might be interesting as a thought experiment to consider a network whose layers take exponential time in the input size, e.g., by selecting the MSG_l and UP_l functions to be feed-forward networks of exponential depth and width. Could one ever expect such a GNNmp^n to arrive at the optimal solution? The following corollary provides necessary conditions for three well-known NP-hard problems:

Corollary 4.4 (Repurposed from Censor-Hillel et al. (2017)). There exists a graph G on which every GNNmp^n of width w = O(1) requires depth at least d = Ω(n²/log² n) to solve: the minimum vertex cover problem; the maximum independent set problem; the perfect coloring problem.
Thus, even if each layer is allowed to take exponential time, the depth should be quadratically larger than the graph diameter δ_G = O(n) to have a chance of finding the optimal solution. Perhaps disappointingly, the above result suggests that it may not always be possible to exploit the distributed decision making performed by GNNmp architectures to find solutions faster than classical (centralized) computational paradigms.
IMPOSSIBILITY RESULTS FOR ESTIMATION PROBLEMS
Finally, I will consider problems that involve the computation or estimation of some real function that takes as input the graph and attributes. The following corollary concerns the computation of two well-known graph invariants: the diameter δ_G and the girth. The latter is defined as the length of the shortest cycle and is infinity if the graph has no cycles.

Corollary 4.5 (Repurposed from Frischknecht et al. (2012)). There exists a graph G on which every GNNmp^n of width w requires depth at least d = Ω(n/(w log n) + δ_G) to compute the graph diameter δ_G, and d = Ω(√n/(w log n) + δ_G) to approximate the graph diameter and girth within a factor of 3/2 and 2, respectively.
The term δ_G appears in the lower bounds because both estimation problems require global information. Further, approximating the diameter within a 3/2 factor seems to be simpler than computing it. Yet, in both cases, one cannot achieve this using a GNNmp^n whose capacity is constant. As a final remark, the graphs giving rise to the lower bounds of Corollary 4.5 have constant diameter and Θ(n²) edges. However, similar bounds can also be derived for graphs with O(n log n) edges (Abboud et al., 2016). For the case of exact computation, the lower bound is explained in Appendix C.
EMPIRICAL EVIDENCE
This section aims to empirically test the connection between the capacity dw of a GNNmp, the number of nodes n of its input, and its ability to solve a given task. In particular, I considered the problem of 4-cycle classification and tasked the neural network with classifying graphs based on whether they contained a cycle of length four. Following the lower bound construction described in Appendix A, I generated five distributions over graphs with n ∈ (8, 16, 24, 32, 44) nodes and an average diameter of (4, 6, 8, 9, 11), respectively (this was achieved by setting p ∈ (6, 8, 10, 12, 14), see Figure 3a). For each such distribution, I generated a training and a test set consisting of 1000 and 200 examples, respectively. Both sets were exactly balanced, i.e., any example graph from the training and test set had exactly a 50% chance of containing a 4-cycle.
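As an aside on how such labels can be computed (an illustrative helper, not the paper's graph generator, which follows the Appendix A construction), a simple graph contains a 4-cycle exactly when two distinct nodes share at least two common neighbors:

```python
from itertools import combinations

def has_four_cycle(adj):
    # A simple graph contains a C4 iff two distinct nodes share >= 2 common neighbors:
    # if x and y are both adjacent to u and w, then u-x-w-y-u is a 4-cycle.
    nodes = list(adj)
    for u, w in combinations(nodes, 2):
        if len(adj[u] & adj[w]) >= 2:
            return True
    return False

square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}      # a 4-cycle
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}              # a path, no cycle
print(has_four_cycle(square), has_four_cycle(path))        # True False
```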
The experiment aimed to evaluate how well GNNmp of different capacities could attain high accuracy on the test set. To this end, I performed a grid search over the hyperparameters w ∈ (2, 10, 20) and d ∈ (5, 10, 20, 15). To reduce the dependence on the initial conditions and training length, for each hyperparameter combination I trained 4 networks independently (using Adam and learning rate decay) for 4000 epochs. The GNNmp chosen was that proposed by Xu et al. (2018), with the addition of residual connections; this network outperformed all others that I experimented with.
It is important to stress that empirically verifying lower bounds for neural networks is challenging, because it involves searching over the space of all possible networks in order to find the ones that perform the best. For this reason, an experiment such as the one described above cannot be used to verify the tightness of the bounds: we can never be certain whether the results obtained are the best possible or whether the optimization resulted in a local minimum. In that view, the following results should be interpreted in a qualitative sense. The question that I will ask is: to what extent do the uncovered trends match those predicted by the theory? More specifically, does the ability of a network to detect 4-cycles depend on the relation between dw and n?
To answer this question, Figure 1 depicts the training and test accuracy as a function of the capacity dw for all 240 networks trained (5 distributions × 3 widths × 4 depths × 4 iterations). The accuracy of the best-performing networks with the smallest capacity is shown in Figures 1c and 1d. It is important to stress that, based on Weisfeiler-Lehman analyses, anonymous GNNmp cannot solve the considered task. However, as seen in the figures, the impossibility is annulled when using node ids. Indeed, even small neural networks could consistently classify all test examples perfectly (i.e., achieving 100% test accuracy) when n ≤ 16. Moreover, as the theoretical results predicted, there is a strong correlation between the test accuracy, dw, and n (recall that Corollary 4.1 predicts dw = Ω(√n)). Figure 1d shows that networks of the same capacity were consistently less accurate on the test set as n increased (even though the cycle length remained 4 in all experiments). It is also striking to observe that even the most powerful networks considered could not achieve a test accuracy above 95% for n > 16; for n = 40 their best accuracy was below 80%.
Effect of anonymity. Figure 2a plots example training and test curves for GNN mp trained with four different node attributes: no attributes (anonymous), a one-hot encoding of the node degrees (degree), a one-hot encoding of node ids (unique id), and a one-hot encoding of node ids that changed across graphs (random unique id). It can be clearly observed that there is a direct correlation between accuracy and the type of attributes used. With non-or partially-discriminative attributes, the network could not detect cycles even in the training set. The cycle detection problem was solved exactly with unique ids, but when the latter were inconsistently assigned, the network could not learn to generalize.
Exchangeability of depth and width. Figure 2b further examines the relationship between depth, width, and test accuracy. This time, networks were separated depending on their depth and width normalized by the square root of the "critical capacity". For each n, the critical capacity is the minimum dw of a network that was able to solve the task on a graph of n nodes; here, solving amounts to a test accuracy above 95%. In this way, a network of depth d and width w tested on n nodes corresponds to a point positioned at x = d/√critical, y = w/√critical, and no network positioned at xy < 1 can solve the task (non-highlighted region in the bottom left corner). As seen, there is a crisp phase transition between the regimes of under- and super-critical capacity: almost every network meeting the condition dw ≥ critical was able to solve the task, irrespective of whether the depth or the width was larger. Note that the exchangeability of depth and width cannot be guaranteed by the proposed theory, which asserts that the condition dw = Ω(√n) is necessary but not sufficient. The empirical results, however, do agree with the hypothesis that, for 4-cycle classification, depth and width are indeed exchangeable.
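The normalization behind Figure 2b amounts to a small computation; the sketch below uses made-up numbers purely for illustration.

```python
import math

def normalize(runs, critical_capacity):
    # Position each trained network at (d / sqrt(critical), w / sqrt(critical));
    # points with x * y >= 1 lie in the super-critical region where d * w >= critical.
    s = math.sqrt(critical_capacity)
    return [(d / s, w / s, (d * w) >= critical_capacity) for d, w in runs]

runs = [(5, 2), (10, 10), (20, 20)]     # (depth, width) pairs, made up for illustration
print(normalize(runs, critical_capacity=100))
```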
CONCLUSION
This work studied the expressive power of graph neural networks falling within the message-passing framework. Two results were derived. First, sufficient conditions were provided such that GNNmp can compute any function computable by a Turing machine with the same connected graph as input. Second, it was discovered that the product of a GNNmp's depth and width plays a prominent role in determining whether the network can solve various graph-theoretic problems. Specifically, it was shown that GNNmp^n cannot solve a range of decision, optimization, and estimation problems involving graphs unless dw = Ω(n^δ), with δ ∈ [0.5, 2] depending on the problem. Overall, the proposed results demonstrate that the power of graph neural networks depends critically on their capacity and illustrate the importance of using discriminative node attributes.
• (maximum) independent set: an independent set is a set of nodes in a graph no two of which are adjacent. The maximum independent set problem entails finding the independent set of maximum cardinality.
• (minimum) vertex cover: a vertex cover of G is a set of nodes such that each edge of G is incident to at least one node in the set. The minimum vertex cover problem entails finding the vertex cover of minimum cardinality.
• (perfect) coloring: a coloring of G is a labeling of the nodes with colors such that no two adjacent nodes are colored using the same color. The perfect coloring problem entails finding a coloring with the smallest number of colors.
• diameter: the diameter δ G of G equals the length of the longest shortest path.
• girth: the girth of G equals the length of the shortest cycle. It is infinity if no cycles are present.
B.1 PROOF OF THEOREM 3.1

The claim is proven by expressing the state of node v_i in the two models in the same form. It is not difficult to see that, for each layer of the GNNmp^n, the updated state of v_i can be written as AGG applied to the multiset of incoming node states, ids, and edge attributes (Xu et al., 2018, Lemma 5), where AGG is an aggregation function, i.e., a map from the set of multisets onto some vector space. In the last step, I used a result of Xu et al. (2018) stating that each aggregation function can be decomposed as an element-wise function over each element of the multiset, followed by summation of all elements, and then a final function.
Similarly, the state of each node in LOCAL can be written in the same aggregated form, with the last step following by restructuring the input and defining ALG as the Turing machine that simulates the actions of ALG2 and ALG1 in sequence. Since one may encode any vector into a string and vice versa, w.l.o.g. one may assume that the state of each node in LOCAL is encoded as a vector x_i. Then, to complete the proof, one still needs to demonstrate that the two aggregation functions are equivalent (in the interest of brevity, the layer/round indices have been dropped). If this holds, then each layer of GNNmp^n is equivalent to a round of LOCAL and the claim follows. I first note that, since its input is a multiset, ALG is also an aggregation function. To demonstrate equivalence, one thus needs to show that, despite not having identical inputs, each of the two aggregation functions can be used to replace the other. For the forward direction, it suffices to show that for every aggregation function AGG there exists an ALG with the same output. Indeed, one may always construct ALG = AGG ∘ g, where g takes as input {(x_j, v_i, v_j, a_{i←j}) : v_j ∈ N_i*}, identifies x_i (by searching for the pair v_i, v_i), and appends it to each element of the multiset. The backward direction can also be proven with an elementary construction: given ALG, one sets AGG = ALG ∘ h, where h deletes x_i from each element of the multiset.
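The decomposition of an aggregation function used above, AGG = φ(Σ g(·)), can be illustrated with a toy instance (the mean of a multiset); this is an assumed example, not the construction from the proof.

```python
# Toy instance of the decomposition AGG = phi( sum of g over the multiset ):
# here AGG is the mean of a multiset of scalars.

def g(x):
    # Element-wise map: carry the value together with a count of 1.
    return (x, 1)

def phi(total):
    # Final map applied to the summed representation.
    s, count = total
    return s / count

def agg(multiset):
    total = (0.0, 0)
    for x in multiset:
        gx = g(x)
        total = (total[0] + gx[0], total[1] + gx[1])
    return phi(total)

print(agg([1.0, 2.0, 3.0, 3.0]))   # 2.25, the mean of the multiset
```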
B.2 PROOF OF COROLLARY 3.1
In the LOCAL model the reasoning is elementary (Linial, 1992; Fraigniaud et al., 2013; Seidel, 2015): suppose that the graph is represented by a set of edges and further consider that each ALG_l amounts to a union operation. Then, in d = δ_G rounds, the state of each node will contain the entire graph. The function ALG1_d can then be used to make the final computation. This argument also trivially holds for node/edge attributes. The universality of GNNmp^n then follows by the equivalence of LOCAL and GNNmp^n.
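The union argument can be simulated directly: if every node repeatedly merges the edge sets received from its neighbors, after δ_G rounds each node holds the whole graph. The sketch below is an illustrative simulation, not code from the paper.

```python
# Flooding simulation: each node's state is a set of edges; in every round it merges
# its neighbors' states. After diameter-many rounds every node knows the whole graph.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}                # a path graph, diameter 3

def edges_seen_initially(i):
    return {(min(i, j), max(i, j)) for j in adj[i]}          # edges incident to v_i

state = {i: edges_seen_initially(i) for i in adj}
diameter = 3
for _ in range(diameter):
    state = {i: state[i].union(*(state[j] for j in adj[i])) for i in adj}

all_edges = {(0, 1), (1, 2), (2, 3)}
print(all(state[i] == all_edges for i in adj))               # True
```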
B.3 PROOF OF THEOREM 4.1

First note that, since the GNNmp^n and LOCAL models are equivalent, if no further memory/width restrictions are placed, an impossibility for one also implies an impossibility for the other. It can also be seen in the proof of Theorem 3.1 that there is a one-to-one mapping between the internal states of each node at each layer/round in the two models (i.e., variables x_i^(l) and s_i^(l)). As such, impossibility results that rely on restrictions w.r.t. state size (in terms of bits) also transfer between the models.
To proceed, I demonstrate that a depth lower bound in the CONGEST model (i.e., in the LOCAL model with bounded message size) also implies the existence of a depth lower bound in the LOCAL model with a bounded state size; with this result in place, the proof of the main claim follows directly. As in the statement of the theorem, one starts by assuming that P cannot be solved in less than d rounds when messages are bounded to be at most b bits. Then, for the sake of contradiction, it is supposed that there exists an algorithm A ∈ LOCAL that can solve P in less than d rounds with a state of at most c bits, but unbounded message size. I argue that the existence of this algorithm also implies the existence of a second algorithm A′ whose messages are bounded by c + log₂ n bits: since each message s_{j←i}^(l) is the output of a universal Turing machine ALG2_l that takes as input the tuple (s_i^(l), v_i), algorithm A′ directly sends this input and relies on the universality of ALG1_{l+1} to simulate the action of ALG2_l. The message size bound follows by adding the size c of the state to that of representing the node id (log₂ n bits suffice for unique node ids). This line of reasoning leads to a contradiction when c ≤ b − log₂ n, as it implies that there exists an algorithm (namely A′) that can solve P in less than d rounds while using messages of at most b bits. Hence, no algorithm whose state is at most b − log₂ n bits can solve P in less than d rounds in LOCAL and, by the correspondence between state size and width, a GNNmp^n of depth less than d must have width exceeding (b − log₂ n)/p.
C AN EXPLANATION OF THE LOWER BOUNDS FOR CYCLE DETECTION AND DIAMETER ESTIMATION
A common technique for obtaining lower bounds in the CONGEST model is reduction to the set-disjointness problem in two-player communication complexity: suppose that Alice and Bob are each given a secret string (s_a and s_b, respectively) of q bits. Each player uses the string to construct a set by selecting the elements of the base set {1, 2, ..., q} for which the corresponding bit is one. It is known that Alice and Bob cannot determine whether their sets are disjoint without exchanging at least Ω(q) bits (Kalyanasundaram & Schnitger, 1992; Chor & Goldreich, 1988).
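For concreteness, the following toy snippet (illustrative only) shows how the players' bit strings define sets and what the disjointness predicate is.

```python
# Set-disjointness toy example: each q-bit string selects a subset of {1, ..., q}.

def string_to_set(bits):
    return {i + 1 for i, b in enumerate(bits) if b == "1"}

s_a, s_b = "10110", "01001"                    # Alice's and Bob's secret strings (q = 5)
A, B = string_to_set(s_a), string_to_set(s_b)
disjoint = A.isdisjoint(B)
print(A, B, disjoint)                          # {1, 3, 4} {2, 5} True
```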
The reduction involves constructing a graph that is partially known by each player. Usually, Alice and Bob start knowing half of the graph (red and green induced subgraphs in Figure 3). The players then use their secret strings to control some aspect of their private topology (subgraphs annotated in dark gray). Let the resulting graph be G(s_a, s_b) and denote by cut the number of edges connecting the subgraphs controlled by Alice and Bob. To derive a lower bound for some problem P, one needs to prove that a solution for P in G(s_a, s_b) would also reveal whether the two sets are disjoint or not. Since each player can exchange at most O(b · cut) bits per round, at least Ω(q/(b · cut)) rounds are needed in total in CONGEST. By Theorem 4.1, one then obtains a d = Ω(q/(w log n · cut)) depth lower bound for GNNmp^n. The two examples in Figure 3 illustrate the graphs G(s_a, s_b) giving rise to the lower bounds for even k-cycle detection and diameter estimation; to reduce occlusion, only a subset of the edges is shown. (a) In the construction of Korhonen & Rybicki (2018), each player starts from a complete bipartite graph with p = √q nodes on each side (nodes annotated in dark grey), numbered from 1 to 2p. The nodes with the same id are connected, yielding a cut of size 2p. Each player then uses its secret string (there are as many bits as bipartite edges) to decide which of the bipartite edges will be deleted (those corresponding to zero bits). The remaining edges are substituted by paths of length k/2 − 1. This happens in a way that ensures that G(s_a, s_b) contains a cycle of length k (half known by Alice and half by Bob) if and only if the two sets intersect: the cycle will pass through nodes t and p + t of each player to signify that the t-th bits of s_a and s_b are both one. It can then be shown that n = Θ(p²), from which it follows that CONGEST requires at least d = Ω(q/(b · cut)) = Ω(n/(b · p)) = Ω(√n/b) rounds to decide if there is a cycle of length k, and that a GNNmp^n has to have depth d = Ω(√n/(w log n)) to do the same.
(b) In the construction of Abboud et al. (2016), each string consists of q = Ω(n) bits. The strings are used to encode the connectivity of the subgraphs annotated in dark gray: an edge exists between the red nodes i and q if and only if the i-th bit of s_a is one (and similarly for green). Due to the graph construction, the cut between Alice and Bob has O(log q) edges. Moreover, Abboud et al. proved that G(s_a, s_b) has diameter at least five if and only if the sets defined by s_a and s_b are disjoint. This implies that d = Ω(n/(w log² n)) depth is necessary to compute the graph diameter with a GNNmp^n.
D THE COST OF ANONYMITY
There is a striking difference between the power of anonymous networks and those in which nodes have the ability to uniquely identify each other, e.g., based on ids or discriminative attributes (see the survey by Suomela (2013)).
To illustrate this phenomenon, I consider a thought experiment where a node is tasked with reconstructing the graph topology in the LOCAL model. On the left, Figure 4 depicts the red node's knowledge after two rounds (equivalent to a GNNmp^n with d = 2) when each node has a unique identifier (color). At the end of the first round, each node is aware of its neighbors, and after two rounds the entire graph has been successfully reconstructed in the red node's memory.
In the right subfigure, nodes do not possess ids (as in the analysis of (Xu et al., 2018;Morris et al., 2019)) and thus cannot distinguish which of their neighbors are themselves adjacent. As such, the red node cannot tell whether the graph contains cycles: after two rounds there are at least two plausible topologies that could explain its observations. | 2019-07-06T22:26:17.000Z | 2019-07-06T00:00:00.000 | {
"year": 2019,
"sha1": "7f42da4abfadf9d5a464affe22d0bd4bf21c0edb",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4bf73865e5625f4dda6f770526cbe5914531bd65",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
4607362 | pes2o/s2orc | v3-fos-license | The optimal choice of medication administration route regarding intravenous, intramuscular, and subcutaneous injection
Background Intravenous (IV), intramuscular (IM), and subcutaneous (SC) are the three most frequently used injection routes in medication administration. Comparative studies of SC versus IV, IM versus IV, or IM versus SC have been sporadically conducted, and some new findings are completely different from the dosage recommendation as described in prescribing information. However, clinicians may still be ignorant of such new evidence-based findings when choosing treatment methods. Methods A literature search was performed using PubMed, MEDLINE, and Web of Sciences™ Core Collection to analyze the advantages and disadvantages of SC, IV, and IM administration in head-to-head comparative studies. Results "SC better than IV" involves trastuzumab, rituximab, antitumor necrosis factor medications, bortezomib, amifostine, recombinant human granulocyte-macrophage colony-stimulating factor, granulocyte colony-stimulating factor, recombinant interleukin-2, immunoglobulin, epoetin alfa, heparin, and opioids. "IV better than SC" involves ketamine, vitamin K1, and abatacept. With respect to insulin and ketamine, whether IV has advantages over SC is determined by specific clinical circumstances. "IM better than IV" involves epinephrine, hepatitis B immunoglobulin, pegaspargase, and some antibiotics. "IV better than IM" involves ketamine, morphine, and antivenom. "IM better than SC" involves epinephrine. "SC better than IM" involves interferon-beta-1a, methotrexate, human chorionic gonadotropin, hepatitis B immunoglobulin, hydrocortisone, and morphine. Safety, efficacy, patient preference, and pharmacoeconomics are four principles governing the choice of injection route. Safety and efficacy must be the preferred principles to be considered (eg, epinephrine should be given intramuscularly during an episode of systemic anaphylaxis). If the safety and efficacy of two injection routes are equivalent, clinicians should consider more about patient preference and pharmacoeconomics because patient preference will ensure optimal treatment adherence and ultimately improve patient experience or satisfaction, while pharmacoeconomic concern will help alleviate nurse shortages and reduce overall health care costs. Besides the principles, the following detailed factors might affect the decision: patient characteristics-related factors (body mass index, age, sex, medical status [eg, renal impairment, comorbidities], personal attitudes toward safety and convenience, past experience, perception of current disease status, health literacy, and socioeconomic status), medication administration-related factors (anatomical site of injection, dose, frequency, formulation characteristics, administration time, indication, flexibility in the route of administration), and health care staff/institution-related factors (knowledge, human resources). Conclusion This updated review of findings of comparative studies of different injection routes will enrich the knowledge of safe, efficacious, economic, and patient preference-oriented medication administration, as well as help identify research opportunities in clinical nursing practice.
Introduction
Injections are among the most common health care procedures throughout the world, with at least 16 billion administered in developing and transitional countries each year. 1 Intravenous (IV), subcutaneous (SC), and intramuscular (IM) are three most frequently used injection routes in medication administration. IV injection is the introduction of a medication into the veins using a needle, and it is used when rapid absorption is called for, when fluid cannot be taken by mouth, or when the medication to be administered is too irritating to be injected into the skin or muscles. SC injection is administered as a bolus into the subcutis. IM injection is the technique used to deliver a medication deep into the muscles, allowing the medication to be absorbed into the bloodstream quickly. Prescribing information for some medications notes that they can be injected via one or more routes (eg, epinephrine can be delivered by IV, IM, or SC route), while prescribing information for the majority of injectable medications only describes one injection route.
On December 11, 2013, National Health and Family Planning Commission of the People's Republic of China advocated a principle of rational use of medicines, that is, "Injection is not suggested for patients when oral formulations would be more appropriate and IV infusion is not recommended when IM would be possible". 2 A questionnaire-based survey by Plumb et al in 2013 demonstrated that only 74% of junior doctors selected the correct IM route of epinephrine in management of anaphylaxis. 3 In the first half-year of 2014, The Second Affiliated Hospital of Zhejiang University (SAHZU), People's Republic of China, experienced a fatal adverse event which was partially associated with inappropriate injection route (ie, physician ordered SC epinephrine instead of IM epinephrine in anaphylaxis treatment, which may have delayed epinephrine absorption). SAHZU revised the standard operation procedure for anaphylaxis treatment in which IM was listed as the unique injection route of epinephrine. The epinephrine case in SAHZU together with the survey by Plumb et al has raised concerns about lack of knowledge concerning the recognition and management of anaphylaxis among front-line medical staff.
There is little literature reviewing the advantages and disadvantages of IV, IM, and SC administration. Gabriel reviewed the access of IV versus SC for palliative care patients. 4 Stoner et al conducted a systematic review of randomized controlled trials investigating patient preference for SC versus IV administration and concluded that patients preferred SC over IV delivery. 5 Comparative studies of SC versus IV, IM versus IV, or IM versus SC were sporadically reported, and many new findings cropped up. For example, Reggia et al analyzed the switch to SC administration of abatacept from IV administration in patients with rheumatoid arthritis and observed that SC abatacept showed a risk of relapse in 27% of cases and return to the IV administration quickly reinstated disease control (ie, IV is better than SC). 6 The findings of Reggia et al challenge the viewpoint that the SC route is generally preferred over IV route because it enables at-home injection, improves quality of life, and reduces health care costs. 4,5,7 Meanwhile, some findings are completely different from the dosage recommendation in prescribing information. Clinicians may still be ignorant of such information when choosing administration route.
In this paper, we investigate this issue in depth and present an updated review, with the intention of improving readers' knowledge of the rational use of injectable medications.
Methods
Potentially relevant literature published from 1980 to February 2015 was identified by searching PubMed, MEDLINE, and the Web of Science™ Core Collection with an English-language filter. For PubMed, the search query identified each paper with a title containing "(intramuscular and intravenous)" or "(intramuscular and subcutaneous)" or "(subcutaneous and intravenous)", with a filter of "species: human". For Web of Science, the search string was "title: (intramuscular and intravenous) or title: (intramuscular and subcutaneous) or title: (subcutaneous and intravenous)", with a document-type filter excluding proceedings papers, meeting abstracts, notes, corrections, correction additions, and book reviews. For MEDLINE, the search string was "title: (intramuscular and intravenous) or title: (intramuscular and subcutaneous) or title: (subcutaneous and intravenous)", refined by "MeSH headings: (humans)".
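For readers who wish to reproduce or update the PubMed arm of this search, a minimal sketch using Biopython's Entrez E-utilities is shown below; the field tags, date limits, and e-mail address are illustrative assumptions rather than the exact query interface used by the authors.

```python
# Illustrative sketch only: reproduces the PubMed title query described above
# using Biopython's Entrez E-utilities. The e-mail address is a placeholder.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # required by NCBI; placeholder value

# Title must pair two of the three route terms, restricted to human studies,
# English language, 1980 through February 2015.
query = (
    '("intramuscular"[Title] AND "intravenous"[Title]) OR '
    '("intramuscular"[Title] AND "subcutaneous"[Title]) OR '
    '("subcutaneous"[Title] AND "intravenous"[Title])'
)

handle = Entrez.esearch(
    db="pubmed",
    term=query + ' AND "humans"[MeSH Terms] AND english[Language]',
    mindate="1980", maxdate="2015/02", datetype="pdat",
    retmax=1000,
)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "candidate records")
print(record["IdList"][:10])  # first ten PubMed IDs for screening
```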
Two reviewers (MC and HMX) independently searched the literature and screened the relevant studies. The flowchart showing the selection of literature is presented in Figure 1. Each reviewer was blinded to the other during data extraction. In the event of disagreement between the two reviewers, a third reviewer (QZ) was consulted. Sixty-five original papers on the comparison of injection routes were finally included based on the inclusion/exclusion criteria. Valuable information was summarized through data interpretation.
Reviewers (MC, HMX, and QZ) listed all medications exhibiting obvious advantages of one route over another, and then focused on these medications and performed
expanded literature retrieval with the search string: title/abstract contains "(intramuscular and intravenous)" or "(intramuscular and subcutaneous)" or "(subcutaneous and intravenous)". An additional 20 original articles were identified for data mining. Prescribing information for related medications was also read by one reviewer (QZ).
SC versus IV
Trastuzumab
Trastuzumab is a key agent for human epidermal growth factor receptor 2 (HER2)-positive breast cancer in both the early and metastatic settings. It is given by IV infusion, with between 17 and 52 infusions in standard regimens over 1 year, as described in the prescribing information for Herceptin® (trastuzumab; Genentech, Inc., South San Francisco, CA, USA). 8 However, IV administration of trastuzumab requires substantial time commitments from patients and health care staff, and may cause patient discomfort. In 2013, the European Union approved a new SC formulation of trastuzumab for use in patients with early or metastatic breast cancer. The SC formulation is administered in 2-5 minutes rather than the 30-90 minutes required for the standard IV form, and it contains a recombinant human hyaluronidase (rHuPH20) that opens up channels in the extracellular matrix of the skin and enables trastuzumab to be administered via the SC route. 9 In the international, randomized, two-cohort PrefHer study, patients with HER2-positive early breast cancer were randomized to receive four adjuvant cycles of 600 mg fixed-dose SC trastuzumab followed by four cycles of standard IV trastuzumab, or vice versa. Compared with IV trastuzumab, SC trastuzumab has shown non-inferior efficacy, a similar pharmacokinetic and safety profile, and higher patient preference (88.9% versus 9.6%, P<0.0001). 10,11 Results from the Phase III HannaH study also confirmed a comparable efficacy and safety profile of trastuzumab for HER2-positive early breast cancer between the IV and SC groups. 12
Rituximab
Rituximab is a part of standard therapy for many non-Hodgkin's lymphoma patients. Prescribing information for Rituxan® (rituximab; Genentech, Inc.) notes that it is given by IV infusion. 13 A fixed dose of 1,400 mg SC rituximab was confirmed to have non-inferior serum trough concentrations relative to IV rituximab dosed at 375 mg/m2 during maintenance, with a comparable safety profile. 14 Rule et al conducted a time and motion study to investigate the staff time and costs associated with administration of SC and IV rituximab. 15 Compared with IV rituximab, SC rituximab per session was associated with reductions in active health care professional time (48.5 minutes versus 223.3 minutes), total mean staff costs, and patient time in the treatment room (70.0 minutes versus 263.8 minutes). Switching from IV to SC rituximab could increase treatment room capacity and patient throughput, as well as improving the patient experience. 15 Interestingly, an effect of dose on the absorption of SC rituximab was observed in rats. Rituximab exhibited linear kinetics following IV administration; however, bioavailability following SC injection was inversely related to the dose level. The bioavailability values following 1 mg/kg, 10 mg/kg, and 40 mg/kg injected at the back were approximately 70%, 31%, and 18%, respectively. Saturable binding may be a major determinant of the nonlinear absorptive transport of rituximab. 16

Antitumor necrosis factor medications
Antitumor necrosis factor (anti-TNF) medications are indicated for the treatment of chronic inflammatory conditions such as rheumatoid arthritis, inflammatory bowel disease, and psoriasis. Anti-TNF medications include monoclonal antibodies (eg, infliximab, adalimumab, certolizumab pegol, and golimumab) and circulating receptor fusion proteins such as etanercept. A Phase I, open-label trial evaluated the effects of SC or IV golimumab on inflammation markers (eg, C-reactive protein, interleukin-6, serum amyloid A, TNF receptor II, matrix metalloproteinase-3, haptoglobin, ferritin, and hepcidin) in patients with rheumatoid arthritis. Patients were randomized to receive SC golimumab 100 mg at baseline and every 4 weeks thereafter through week 20, or IV infusions of golimumab 2 mg/kg at baseline and week 12. Decreases in the concentrations of these inflammatory markers were maintained through week 24 in the SC group but began to reverse after week 8 in the IV group, indicating that the SC route was superior to the IV route in golimumab treatment of rheumatoid arthritis. 17 Sylwestrzak et al examined patient preferences regarding available anti-TNF agents and mode of administration options. 18 Participants were asked about their use of anti-TNF agents, locations of administration, preferences for the IV or SC route, interest in anti-TNF home therapy options, and their physician's role in their decision-making process. The SC group showed a higher preference for the injection route they were using compared with the IV group; that is, 89.9% of the SC group preferred the SC route, whereas 71.8% of the IV group preferred the IV route (P<0.001). Another questionnaire survey identified the determinants of patient preferences in the choice of anti-TNF therapies for rheumatoid arthritis. 19 The IV and SC routes of administration were preferred by 50.2% and 49.8%, respectively. Reasons for the choice of the IV route were the safety of treatment at the hospital and the reassuring effect of physician presence.
The SC route was chosen for the convenience of treatment and in particular for home treatment. Patients who were dissatisfied with current therapy due to side effects preferred SC administration (P=0.029), whereas patients choosing the IV route had higher scores on "today pain" (P=0.047) and "articular pain" (P=0.023) of the Rheumatoid Arthritis Disease Activity Index. The questionnaire survey indicates that physicians should discuss the choice of injection route with patients, because individual preference seems to be determined by personal attitudes toward safety and convenience, by past experience, and by the perception of current disease status.
Bortezomib
Bortezomib is indicated for treating relapsed multiple myeloma and mantle cell lymphoma. The two most significant bortezomib-related issues are peripheral neuropathy and the IV route required for its administration. On January 23, 2012, the US Food and Drug Administration (FDA) approved a supplemental new drug application for Velcade® (bortezomib; Takeda Pharmaceutical Company Limited, Osaka, Japan), which updated the label to include the SC method of administration in all approved indications. The approval was based on results from a randomized, Phase III, open-label, international, non-inferiority trial which compared the efficacy and safety of SC versus IV bortezomib twice weekly in patients with relapsed multiple myeloma. SC bortezomib exhibited non-inferior efficacy to IV bortezomib and was associated with a lower incidence of neuropathy. 20 Moreau et al revealed that SC administration resulted in bortezomib plasma exposure equivalent to IV administration and comparable blood 20S proteasome inhibition. 21 An updated survival analysis of a randomized Phase III study showed that SC bortezomib was associated with comparable efficacy regarding time to progression, progression-free survival, and overall survival, and significantly lower rates of peripheral neuropathy compared with IV bortezomib. 22 Barbee et al quantified the difference in efficiency practice variables and preferences regarding the SC versus IV bortezomib route in patients with multiple myeloma who received at least one dose each of SC and IV administration. Compared to IV bortezomib, SC bortezomib was more time efficient for the patient and institution (eg, a 38% reduction in chair time [P<0.001] and a 27% reduction in infusion center visit time [P<0.001]) and was preferred by 68% of patients (P=0.0002). 23 Interestingly, prescribing information for Velcade® does not specify the anatomical site of SC injection, whereas evidence has shown that SC injections of bortezomib in the abdomen cause fewer grade 2 injection site reactions than those in the thigh (1.6% versus 11.0%, P<0.001). 24 In the Phase III MMY-3021 study, 25 response rates were 52% and 57% with SC and IV bortezomib, respectively, in patients with creatinine clearance (CrCl) >50 mL/min, and 53% and 31% in patients with CrCl of 20-50 mL/min. Clinical benefit of bortezomib in terms of renal impairment reversal (to CrCl >60 mL/min) was reported in 30% of patients receiving SC bortezomib and 15% of patients receiving IV bortezomib. Long-term outcomes were similar with SC and IV bortezomib in patients with CrCl >50 mL/min (median time to progression 9.7 months and 9.8 months) and with SC bortezomib in patients with CrCl of 20-50 mL/min (10.5 months). This indicates that SC bortezomib is more suitable than IV bortezomib for patients with renal dysfunction.
Amifostine
Amifostine is a cytoprotective agent. Prescribing information for Ethyol® (amifostine; MedImmune Pharma B.V., 6545 CG Nijmegen, The Netherlands) notes IV infusion as the administration method. The IV administration of amifostine is associated with reversible clinical hypotension, protracted emesis, and malaise in various percentages of patients. 26 Bardet et al compared the compliance with and efficacy of IV and SC amifostine in patients undergoing radiotherapy for head and neck cancer. Patients were randomly assigned to receive either IV amifostine (200 mg/m2 daily over 3 minutes, 15-30 minutes before irradiation) or SC amifostine (500 mg, two sites, 20-60 minutes before irradiation). SC amifostine was not significantly superior to IV amifostine regarding patient compliance or efficacy; however, it was associated with a significantly lower rate of grade 1-2 hypotension (8% for SC versus 19% for IV, P=0.01), although with higher rates of grade 1-2 skin rash (21% for SC versus 9% for IV, P=0.01) and local pain (8% for SC versus 0% for IV, P=0.003). 27 Koukourakis et al prospectively evaluated the difference in tolerance between the SC and IV routes in patients receiving chemotherapy. Protracted vomiting or clinical hypotension was absent in the SC group, and the tolerance profile in the SC group was significantly better than that in the IV group (P=0.001). 28 The absence of hypotension with SC administration facilitates patient monitoring and management in tumor treatment departments. Meanwhile, switching to the SC route in patients with poor tolerance of the IV route allows the continuation of cytoprotection with minor side effects. 29

Recombinant human granulocyte-macrophage colony-stimulating factor
Recombinant human granulocyte-macrophage colony-stimulating factor (rhGM-CSF) can stimulate the production of white blood cells and thus prevent neutropenia following chemotherapy. It can be administered via the IV or SC route. Rosenfeld et al compared the toxicity and hemopoietic effects of SC and IV rhGM-CSF in patients with primary myelodysplasia. Increases in the absolute neutrophil count exhibited a dose-response relationship and were more pronounced with SC than with IV administration. Compared to a 2-hour infusion of rhGM-CSF, SC administration is more myelostimulatory without an increase in toxicity. 30 Lieschke et al demonstrated that bacterially synthesized rhGM-CSF was an ineffective stimulant of leukopoiesis when given as a bolus (0.3-3 μg/kg/day), but it was effective when administered by 2-hour IV infusion (3-20 μg/kg/day). Moreover, an IV dose of rhGM-CSF was less potent at inducing a leukocytosis than equivalent SC doses and was associated with a higher incidence of generalized rash (P<0.01) and first-dose reactions (P<0.05). 31

Granulocyte colony-stimulating factor
Granulocyte colony-stimulating factor (G-CSF) can stimulate the bone marrow to produce granulocytes and stem cells. It is administered to oncology patients via the SC or IV route. Paul et al conducted a randomized, open-label trial to compare IV versus SC G-CSF administration to hospitalized hemato-oncological patients receiving chemotherapy. 32 The mean time
to neutropenia resolution was longer with IV G-CSF compared with SC G-CSF (7.9 days versus 5.4 days, P=0.001), indicating that bolus IV G-CSF could result in longer neutropenia duration than SC administration. 32 Eguchi et al evaluated the efficacy and toxicity of recombinant human G-CSF given subcutaneously in patients with advanced lung cancer undergoing intensive chemotherapy. When G-CSF is given subcutaneously, the dose required for a comparable effect in alleviating neutropenia is 50% of that required when it is given intravenously. 33
Recombinant interleukin-2
Recombinant human interleukin-2 (rIL-2) has demonstrated survival benefits over chemotherapy for the majority of patients with metastatic renal cell carcinoma (mRCC). rIL-2 can be administered as a high-dose IV bolus injection, continuous IV infusion, or SC administration in a lower dosage regimen. Hospitalization/intensive care is required throughout if rIL-2 is delivered intravenously, owing to its side effects. 34 Geertsen et al compared the safety and efficacy of administration by SC injection and continuous IV infusion of rIL-2 in patients with mRCC. Compared with continuous IV infusion, SC delivery of rIL-2 was associated with improved tolerability. SC administration was associated with more patients experiencing stable disease (50.5% versus 29.8%), fewer patients undergoing disease progression (35.0% versus 43.6%), a lower incidence of severe adverse events (46% versus 76%, P<0.001), and fewer patients requiring dose reductions due to toxicity (20% versus 82%). There was no difference between the two injection routes of rIL-2 regarding efficacy in terms of overall survival, duration of response, and response rate. 35 A prospective, non-randomized study showed that low-dose SC interleukin-2 therapy was associated with low clinical and hematologic toxicity after autologous transplantation, while the immunomodulation achieved was no less than that achieved with the IV route. 36

Immunoglobulin
A shift from IV immunoglobulin (IV-IG) to less labor-intensive SC immunoglobulin (SC-IG) has the potential to help alleviate nurse shortages and reduce overall health care costs. 37 Martin et al demonstrated that rapid-push home-based SC-IG was less costly than hospital-based IV-IG for immunoglobulin replacement therapy in adult primary immunodeficiency patients in the Canadian context. 38 A systematic review revealed that changing IG replacement therapy from IV-IG to SC-IG may be of benefit to qualified patients with primary immunodeficiency. 39 Ducruet et al revealed the pharmacoeconomic advantages of SC-IG versus IV-IG treatment in a Canadian pediatric center. The SC route was less costly than the IV route from either the patient perspective (P<0.001) or the government perspective (P<0.001). 40 Kittner et al evaluated the attitudes of patients toward switching from IV-IG therapy to SC-IG home therapy by a questionnaire survey. Patients on SC therapy were significantly younger than those in the IV treatment group (37±9.1 years versus 51.2±14.5 years, P<0.001). Compared with the SC-treated group, more IV-treated patients were afraid of being busier with self-administration (P<0.001), were worried about severe adverse reactions at home (P<0.001), and had higher points on a Likert scale for the statement "I dislike to puncture myself" (P<0.001). 41 Another study investigated the perceptions of the SC method and of the home therapy regimen among patients who already had considerable experience of this therapy. Significant correlations were found between age and perception of lifelong treatment with SC infusion of IG for primary antibody deficiencies. The younger patients perceived the SC infusions as more uncomfortable and were less determined to continue with the therapy compared with the older patients (P<0.01). Additionally, the women were more positive concerning the SC method (P<0.05), as compared with the men. 42
The primary disadvantage of SC-IG is the inherent resistance of the extracellular matrix, which limits the volume infusible at one site (usually 15-30 mL) and requires multiple infusion sites weekly or every second week, rather than a single monthly IV infusion. Wasserman et al investigated the efficacy and tolerability of rHuPH20-facilitated SC-IG in patients with primary immunodeficiency. Eighty-seven patients with primary immunodeficiency aged ≥2 years received 10% IV-IG for 3 months, and then rHuPH20-facilitated SC-IG (n=83) for approximately 14-18 months at 108% of the IV-IG dose. rHuPH20-facilitated SC-IG brings together the advantages of IV-IG and SC-IG, that is, combining the once-monthly, single-site infusion of IV-IG with the option of home-based self-administration of SC-IG. The rHuPH20-facilitated SC-IG had 20% higher bioavailability than SC-IG and caused fewer systemic reactions than IV infusion (8.3% versus 25.0%), while being effective, safe, and pharmacokinetically equivalent to IV-IG. 43
Insulin
Bodur et al compared continuous SC insulin infusion using an insulin pump with conventional continuous IV insulin infusion in medical intensive care unit patients; the SC insulin group achieved better glycemic control with significantly lower daily insulin bolus and infusion requirements. 44 On the contrary, IV insulin has advantages over SC insulin in some clinical circumstances. Neff et al studied the efficacy of protocol-based IV insulin delivery as compared to SC insulin prescribed individually outside of the critical care setting. Among a total of 122 patients who developed parenteral nutrition (PN)-associated hyperglycemia, those receiving IV insulin achieved the glycemic target for more time than those on the SC regimen (62% versus 43%, P=0.008), indicating that IV insulin should be considered optimum therapy for patients with PN-associated hyperglycemia in the noncritically ill inpatient setting. 45 Use of perioperative continuous IV insulin infusion in diabetic patients undergoing open-heart surgical procedures could significantly reduce major infectious morbidity and its associated socioeconomic costs. Compared with sliding-scale-guided intermittent SC insulin injections, continuous IV insulin infusion induced a significant reduction in perioperative blood glucose levels and significantly reduced the incidence of deep sternal wound infection (0.8% versus 2.0%, P=0.01). 46 Pezzarossa et al observed that the outcome difference between the SC and IV routes depended on the time of insulin administration in the perioperative management of diabetic patients. During the intraoperative period, the total insulin requirement expressed as the insulin-to-glucose ratio was significantly lower in subjects receiving IV insulin (0.18±0.3 versus 0.27±0.3 in the SC group, P<0.05), whereas no difference was observed during the pre- and postoperative periods. This indicates that IV insulin administration achieved better glycemic control during the intraoperative period, whereas it did not offer advantages over the SC route during the other two periods. 47
Epoetin alfa
Epoetin alfa may be given either as an IV or as an SC injection. Clinical and pharmacokinetic studies have shown that target hemoglobin or hematocrit levels can be maintained using a reduced epoetin dosage by switching from IV to SC administration. Wazny et al conducted a cost analysis of a conversion from IV to SC epoetin in patients receiving chronic in-center hemodialysis during a 6-month period of IV or SC usage. SC epoetin was associated with a yearly cost saving of US$1,135 per patient. 48 A meta-analysis by Besarab et al concluded that the cost of epoetin was reduced substantially when administered via the SC route in comparison to the IV route. 49 SC epoetin is twice as efficient as IV epoetin for the treatment of anemia in hemodialysis patients, with a dose saving of 53% (P<0.001). 50 Conversely, conversion of epoetin from SC to IV dosing could increase epoetin dosage requirements by 26%-32% and increase the costs of anemia therapy; overall, epoetin doses increased. 50-54

Heparin
Patient preference for IV or SC heparin in the treatment of deep venous thrombosis (DVT) was assessed in a randomized cross-over study. Twenty patients with venographically proven DVT were randomized to receive SC or IV heparin for 3 days followed by 3 days of the other treatment. Compared with IV administration, SC treatment was associated with significantly less discomfort at the injection site (P<0.001), better mobility (P<0.005), and patients' overall preference (P<0.001). 55 Assuming that the SC and IV routes were equally effective on the basis of the medical literature, Barber and Hoffmeyer compared the cost-effectiveness of SC heparin (20,000 IU, twice daily, prefilled syringes), a continuous IV infusion of 24,000 IU heparin over 24 hours, and the IV infusion of 48,000 IU heparin as two consecutive 12-hour infusions of 24,000 IU. Incorporating the cost of human resources in vein cannulation, infusion preparation, and medication administration with the use of heparin syringes, SC heparin therapy was significantly more cost-effective than IV heparin therapy. The reduction in cost and the liberation of nursing time mean that the SC route should be preferred. 56

Ketamine
Ketamine can suppress hyperalgesia and allodynia. Evidence has shown that IV ketamine is more appropriate than SC ketamine in controlling postoperative analgesia. Honarmand et al evaluated the clinical efficacy of preincisional IV or SC infiltration of ketamine (0.5 mg/kg) for postoperative pain relief after appendectomy. Visual analog scale scores were not significantly different between the IV group and the SC group at the time of arrival in the recovery room, or at 10 minutes, 20 minutes, and 30 minutes thereafter, but were significantly lower at 12 hours, 18 hours, and 24 hours after surgery in the IV group compared with the SC group (P<0.05). Preincisional IV administration of low-dose ketamine provides analgesia for 24 hours after surgery without significant side effects in patients undergoing appendectomy, whereas SC ketamine at the same dose only controls analgesia within the first 6 hours after surgery. 57 Ketamine is also known to have major cardiovascular side effects. It has been shown that, in some patients, ketamine may temporarily increase heart rate and blood pressure by increasing sympathetic nervous system activity. A prospective randomized double-blind study compared the efficacy
of SC and IV ketamine (0.6 mg/kg) added to narcotics in patients scheduled for laparoscopic peritoneal dialysis catheter implantation. Results showed that SC ketamine was as effective as, but safer than, IV ketamine. There were no differences between the two groups regarding pain intensity, recall rate during the surgery, and surgeon satisfaction; however, SC administration of ketamine for induction of conscious sedation was accompanied by a significantly lower rise in systolic blood pressure and rate-pressure product before skin infiltration, after insertion of the second trocar, and at the end of surgery. 58
Vitamin K1
Compared with SC vitamin K1, IV vitamin K1 caused a more prompt reduction in the international normalized ratio (INR). 59,60 For patients excessively anticoagulated with warfarin, small doses of SC vitamin K1 may not correct the INR as rapidly or as effectively as IV administration, and higher doses must be considered for more rapid and complete reversal of anticoagulation by the SC route.
Ceftriaxone
SC ceftriaxone administration is approved for use in France, whereas IV and IM administration are currently FDA approved. Gauthier et al compared IV administration of ceftriaxone with SC administration in patients older than 75 years. The SC route seems to be preferred for fragile elderly patients and is not associated with impaired effectiveness or an increased death rate. Compared with the IV group, patients in the SC group were significantly older, more often bedridden, and more frequently had dementia. 61 The reasons for using SC administration in frailer patients include difficult venous access, risks related to venous access, and difficulty in maintaining venous access, especially in patients presenting with cognitive or behavioral disorders. Nevertheless, this does not justify excessive use of SC administration, for example in septic shock. Harb et al compared the pharmacokinetics and safety of rHuPH20-facilitated SC ceftriaxone administration versus SC ceftriaxone preceded by SC saline placebo or IV ceftriaxone administration. 62 IV ceftriaxone exhibited higher Cmax and shorter Tmax values than either SC treatment, while the ceftriaxone area under the curve (AUC) did not differ significantly between the three administration routes. Compared with SC preceded by placebo, rHuPH20-facilitated SC achieved a higher Cmax and shorter Tmax.
Opioids
Daoust et al assessed the impact of age, sex, and route of administration on the incidence of adverse events due to opioid administration in the emergency department. The IV route was linked with the highest rates of all adverse events, the SC route with moderate rates, and the oral route with the lowest overall rates. The adjusted odds ratio (OR) (95% confidence interval) for nausea/vomiting in the IV and SC groups relative to the oral group was 4.21 (3.12-5.67) and 2.44 (1.81-3.30), respectively. Regarding major adverse events, the adjusted OR (95% confidence interval) in the IV and SC groups relative to the oral group was 6.10 (4.43-8.39) and 2.07 (1.48-2.89), respectively. 63 Flexibility in the route of administration may contribute to better clinical outcomes. A prospective survey evaluated parenteral morphine-prescribing patterns among inpatients with pain from advanced cancer. There were five major inpatient prescribing patterns, that is, IV to oral, IV to SC, IV only, SC only, and mixed. The IV-to-oral group had more stable pain control than the IV-to-SC group. Pain control was 100% in the mixed group, compared with 67% in the IV-only group, 33% in the SC-only group, 82% in the IV-to-oral group, and 56%-75% in the IV-to-SC group. 64

IM versus IV
Epinephrine
Compared with IM epinephrine, IV bolus epinephrine in the treatment of anaphylaxis has been associated with a higher occurrence rate of adverse cardiovascular events and overdose. 65 Therefore, there is a need for extreme caution and further education about IV bolus epinephrine in anaphylaxis.
Hepatitis B immunoglobulin
Hepatitis B virus (HBV)-infected patients may receive long-term combination therapy with hepatitis B immunoglobulin (HBIG) and oral antiviral medication to prevent reinfection after orthotopic liver transplantation (OLT). The costs of IV HBIG are considerably higher than those of IM HBIG. Faust et al studied the outcome of patients after a switch from 2,000 IU IV HBIG (Hepatect®; Biotest AG, Dreieich, Germany) administered every 2 weeks to 2,000 IU IM HBIG (Behring®; Aventis-Behring, Marburg, Germany) given once a month, and after patients initially treated with IM HBIG were discharged from the hospital. Long-term administration of IM HBIG could save up to 60% of the usual costs of IV prophylaxis of HBV reinfection in patients after OLT. In combination with lamivudine, long-term IM HBIG therapy is as efficient as IV HBIG treatment, but its lower costs clearly favor its use in preventing HBV reinfection after OLT. 66 Economic analysis showed an average cost-effectiveness ratio for combination IM HBIG plus lamivudine of US$52,600 per recurrence prevented, which was far below the cost of lamivudine monotherapy and of IV HBIG alone or in combination with lamivudine. 67 IM HBIG in combination with lamivudine offers a safe, effective, and cost-effective approach to preventing HBV recurrence after OLT.
A questionnaire evaluated the impact of HBIG prophylaxis on the quality of life of liver transplant recipients, and the results showed that the route of HBIG administration could have a significant impact on specific health-related quality-of-life (HRQOL) domains beyond a patient's satisfaction. In comparison with patients on IV HBIG, patients on IM HBIG reported significantly better HRQOL scores on the flexibility (P=0.01) and negative feelings scales (P=0.04), but they reported worse HRQOL scores on the side effects scale (P<0.001). There was no difference between the two routes of HBIG administration regarding the satisfaction, positive feelings, impact, and support scales. 68

Pegaspargase
Pegaspargase, a modified version of L-asparaginase that is covalently conjugated to monomethoxypolyethylene glycol, is important for the treatment of acute lymphoblastic leukemia. The recommended dose of Oncaspar® (pegaspargase; Sigma-Tau Pharmaceuticals, Inc., Gaithersburg, MD, USA) is 2,500 IU/m2 intramuscularly or intravenously. Anaphylaxis and serious allergic reactions can occur in patients receiving Oncaspar®. 69 There is a potential concern that the IV route may result in more severe allergic reactions due to immediate exposure to reactive antibodies in the blood, which is delayed after IM administration. The occurrence rate of allergic reactions was 9% in newly diagnosed patients receiving IM pegaspargase, significantly less than that (36%) in patients receiving IV pegaspargase (P=0.019), whereas the severity of reactions was not increased with IV versus IM administration. 70 A study by Petersen et al showed an increased incidence of allergy in children with acute lymphoblastic leukemia who received IV pegaspargase compared to IM (19.5% versus 10.7%, P=0.028) and a more rapid onset of allergic reactions to IV pegaspargase (ie, time to onset of symptoms was within 30 minutes for 96.3% of evaluable IV patients versus 18.2% of evaluable IM patients, P<0.001). 71

Antibiotics
Generally, antibiotics available for IM injection should be considered as an economically efficient alternative to IV injection in appropriate patients. Milkovich and Piazza assessed the economic efficiencies of IM and IV administration of antibiotics. Regarding cefazolin, gentamicin, penicillin, and imipenem, the IM route was found to be as little as one-tenth the expense of the IV route and may facilitate early discharge and self-administration in the home. 72 The recommended dosage of imipenem-cilastatin is 250 mg to 1 g given intravenously every 6-8 hours, or IM doses of no more than 1.5 g daily, usually for 5-14 days. A study showed that dosing of imipenem-cilastatin 750 mg given intramuscularly every 12 hours is a more cost-effective method of drug delivery, with equal efficacy and safety, compared to imipenem-cilastatin 500 mg given intravenously every 6 hours. Incorporating acquisition costs, the salaries of pharmacists and technicians for preparing infusions, and nursing administration costs, the mean total drug therapy costs were significantly greater for the IV group than for the IM group (P<0.01). 73 Pharmacokinetic studies of isepamicin, teicoplanin, and ceftriaxone following administration by IV infusion or IM injection showed complete absorption following IM administration and the feasibility of interchangeable administration by either route without compromising clinical efficacy. 74
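To make the imipenem-cilastatin comparison above concrete, the two regimens can be contrasted on total daily drug amount; the snippet below is illustrative arithmetic only, and the cost components mentioned in the comments follow the study's description rather than actual prices.

```python
# Illustrative arithmetic for the imipenem-cilastatin comparison cited above:
# total daily drug exposure under each regimen. Acquisition cost per mg would
# scale proportionally, and infusion preparation plus nursing time add further
# to the IV arm, as described in the cited cost analysis.
im_daily_mg = 750 * 2   # 750 mg IM every 12 hours
iv_daily_mg = 500 * 4   # 500 mg IV every 6 hours

print(im_daily_mg, iv_daily_mg)  # 1500 vs 2000 mg/day
print(f"{1 - im_daily_mg / iv_daily_mg:.0%} less drug per day with the IM regimen")  # 25%
```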
Ketamine
In children undergoing procedural sedation for orthopedic procedures, IM and IV ketamine have been compared head to head. There was no significant difference in complications or in the level of sedation at 5 minutes, 10 minutes, and 15 minutes between the two groups, but sedation lasted longer in the IM group (P<0.05). Thus, IV ketamine may be the preferable approach for sedating children for orthopedic procedures. 77 Ramaswamy et al compared the side-effect profiles of IM and IV ketamine and investigated which injection route leads to quicker discharge from the emergency department. Total time in the emergency department (triage to discharge) was not significantly different between the two routes. However, the time from drug administration to patient discharge was 21 minutes shorter for IV compared with IM administration (P<0.001). Moreover, the IM group was more prone to adverse events than the IV group (P=0.01). 78 Roback et al compared adverse events, efficacy, and length of sedation of 1 mg/kg IV versus 4 mg/kg IM ketamine procedural sedation and analgesia for orthopedic procedures in the emergency department. Patients in the IM group reported significantly less pain and lower distress during the painful procedure but experienced vomiting more commonly and a significantly longer length of sedation. 79

Morphine
IV morphine has a more rapid and extensive initial effect compared with IM morphine, which is supported by the findings of the pharmacokinetic study by Dale et al. 80 Dale compared the time course of morphine and metabolite concentrations in serum and cerebrospinal fluid (CSF) after IV and IM administration of morphine 10 mg after surgery. The uptake of morphine into the CSF was consistently higher after IV administration than after IM administration. The higher CSF concentration may be caused by an initially higher morphine blood/CSF gradient following IV morphine injection. Tveita et al compared patient safety and analgesic efficacy of a single high dose of morphine given intramuscularly or intravenously for postoperative pain management. 81 The IV group experienced a significantly faster onset of analgesic effect than the IM group (5 minutes versus 20 minutes). During 5-25 minutes after morphine administration, pain status in the IV group was significantly improved compared with the IM group. A 10 mg bolus dose of IV morphine given to patients with moderate pain after surgery can provide more rapid and better initial analgesia than a 10 mg dose of IM morphine without causing severe respiratory depression. A prospective study investigated the safety, efficacy, and complication rate of intermittent IM versus continuous IV infusion of morphine sulfate in 46 nonventilated children following major chest, abdominal, or orthopedic surgical procedures. Mean pain scores in the IV group were significantly lower than those in the IM group (P<0.01). IV infusion provided better pain relief than IM injections without respiratory depression. 82

Antivenom
Redback spider antivenom can be given by the IM or IV route. The manufacturer recommends IM use, with IV administration reserved for life-threatening cases. Ellis et al compared the efficacy of IV versus IM antivenom in the treatment of redback spider envenoming and observed that the proportion of patients who were pain-free at 24 hours was greater in the IV group than in the IM group (76% versus 21%), although redback spider antivenom was initially effective by both the IM and IV routes. 83 Isbister et al compared antivenom serum concentrations following IV and IM administration of redback spider antivenom in ten patients. 84
Antivenom could not be detected in patients receiving one or two doses of IM antivenom at any time point but could be detected in all patients receiving IV antivenom. Thus, IM redback spider antivenom should not be recommended in the treatment of redback spider bite.
IM versus SC
Epinephrine
A prospective, randomized, blinded, parallel-group study in 17 children with a history of anaphylaxis compared two injection methods (SC injection of 0.01 mL/kg of epinephrine solution, maximum 0.3 mL [0.3 mg], or 0.3 mg IM epinephrine from an autoinjector). 85 In the nine children who received SC epinephrine, the mean Cmax (1,802±214 pg/mL) was achieved at a mean time of 34 minutes. However, in the eight children who received IM epinephrine, Cmax (2,136±351 pg/mL) was achieved at a mean time of 8 minutes. The Tmax exhibited a statistically significant difference between the two groups (34±14 minutes for SC versus 8±2 minutes for IM, P<0.05). Only two of the nine children receiving SC epinephrine achieved Cmax by 5 minutes, while six of the eight children receiving IM epinephrine achieved Cmax by 5 minutes. Fatal anaphylaxis is associated with delayed epinephrine administration. Therefore, the IM route of injection is preferable during an episode of systemic anaphylaxis. A retrospective review in a military hospital showed that epinephrine was given largely by the SC route and that no IM epinephrine was administered. Greater educational efforts and collaboration are needed between the allergy community and
other providers regarding the importance of administering epinephrine intramuscularly. 86 The advantage of IM administration over SC administration is challenged by the anatomical site of injection. Simons et al conducted a prospective, randomized, blinded, placebo-controlled, six-way cross-over study of IM versus SC injection of epinephrine (0.3 mg) in young men. Cmax values of IM epinephrine were significantly higher after injection into the thigh than after injection into the upper arm (9,722±4,801 pg/mL versus 1,821±426 pg/mL, P<0.01). An identical dose of epinephrine injected IM into the deltoid did not result in significant elevation of Cmax in comparison with endogenous epinephrine concentrations measured after saline solution injections. 87 The greater absorption of epinephrine from the vastus lateralis muscle in comparison with the deltoid muscle is most likely due to the greater blood flow in the vastus lateralis. Therefore, IM injection of epinephrine into the thigh was recommended as the preferred route and site of injection of epinephrine in the initial treatment of anaphylaxis; otherwise, the advantage of IM epinephrine versus SC epinephrine will disappear.
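As a purely illustrative aside, the weight-based dosing rule quoted in the pediatric study above (0.01 mL/kg of a 1 mg/mL [1:1000] epinephrine solution, capped at 0.3 mL) reduces to a one-line calculation; the function name and example weights below are hypothetical, and this sketch is not clinical guidance.

```python
# Illustrative arithmetic only (not clinical guidance): the weight-based
# dosing rule quoted above, 0.01 mL/kg of a 1 mg/mL (1:1000) epinephrine
# solution capped at 0.3 mL (0.3 mg).
def epinephrine_volume_ml(weight_kg: float, max_ml: float = 0.3) -> float:
    """Return the injection volume in mL for a 1 mg/mL solution."""
    return min(0.01 * weight_kg, max_ml)

for weight in (15, 25, 40):  # example body weights in kg
    volume = epinephrine_volume_ml(weight)
    print(f"{weight} kg -> {volume:.2f} mL ({volume:.2f} mg at 1 mg/mL)")
```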
Interferon-beta-1a
Interferon-beta-1a (IFNβ1a) is indicated for the treatment of patients with relapsing forms of multiple sclerosis to slow the accumulation of physical disability and decrease the frequency of clinical exacerbations. REBIF® (IFNβ1a; Merck KGaA, Darmstadt, Germany) is formulated as a sterile solution in a prefilled syringe or REBIF Rebidose autoinjector intended for SC injection, and its recommended dose is 22 μg or 44 μg three times weekly. 88 AVONEX® (IFNβ1a; Biogen Idec Inc., Berkshire, UK) is formulated as a lyophilized powder, single-use prefilled syringe, or single-use prefilled autoinjector intended for IM injection, and its recommended dose is 30 μg once a week. 89 The EVIDENCE trial concluded that administering high-dose/high-frequency SC IFNβ1a was more effective in preventing relapses among patients with relapsing multiple sclerosis than low-dose weekly IM IFNβ1a after 64 weeks. SC IFNβ1a was estimated to prevent 0.50 relapses and save 23 relapse-free days per patient, resulting in greater health benefits over 4 years than IM IFNβ1a. 90 Calabrese et al compared the effects of IM IFNβ1a and SC IFNβ1a on the development of cortical lesions and cortical atrophy in patients with relapsing-remitting multiple sclerosis. After 12 months, 74% of untreated patients developed one or more new cortical lesions (mean 1.6), compared with 64% of the IM group (mean 1.2, P=0.021) and 26% of the SC group (mean 0.4, P<0.001). After 24 months, one or more new cortical lesions were observed in 82% of untreated (mean 3.0), 72% of IM IFNβ1a-treated (mean 1.6, P<0.001), and 52% of SC IFNβ1a-treated patients (mean 0.8, P<0.001). The SC group exhibited faster and more pronounced effects in decreasing new cortical lesion development and cortical atrophy progression compared with the IM group. 91 However, a retrospective administrative claims analysis showed that the once-weekly IM IFNβ1a cohort of patients with multiple sclerosis had significantly higher regression-adjusted odds of adherence compared with the three times weekly SC IFNβ1a cohort, while there were no consistent differences in persistence between the two cohorts. The adherence difference may be attributable to IM IFNβ1a's less frequent dosing schedule. 92
Methotrexate
For the treatment of rheumatic diseases, the antimetabolite drug methotrexate (MTX) can be administered weekly by different routes: oral, SC, or IM. Studies have compared the serum concentrations and the pharmacokinetics of low-dose MTX after IM and SC injections in patients with rheumatoid arthritis. Values of Cmax, Tmax, and AUC for IM injections were not significantly different from those for SC injections, indicating that IM and SC are interchangeable routes. SC administration may be a more convenient and less painful way of administering low-dose MTX. 93,94 Arthur et al reported the findings of a 13-week study comparing the safety and efficacy of MTX administered by IM and SC injection. Although serum MTX levels were not significantly affected by the route of administration, seven out of eight patients preferred self-administration by the SC route at home. Self-administration was associated with reduced hospital visits and improved patient satisfaction; therefore, parenteral MTX should be prescribed by the SC route instead of the IM route. 95 Interestingly, tolerability and patient/physician satisfaction with SC MTX were closely related to formulation characteristics. Müller-Ladner et al determined the preference, satisfaction, usability, and local tolerability reported by patients, physicians, and study nurses for two subcutaneously administered MTX formulations of different concentrations. 96 MTX treatment consisted of 20 mg/week administered as a medium-concentration formulation (MC) (2.0 mL of a 10 mg/mL solution in a prefilled syringe; separate needle) or a high-concentration formulation (HC) (0.4 mL of a 50 mg/mL solution in a prefilled syringe; pre-attached needle). At the end of the study, 93% of the patients preferred HC over MC as further treatment. Overall assessment of HC was "good" or "very good"
in 90.6% of HC-treated versus 34.4% of MC-treated patients. Physicians' and patients' global assessments of syringe usability showed highly statistically significant differences (P<0.0001) in favor of HC. The smaller total volume of administered drug and the improved usability of a pre-attached needle in combination with a smaller prefilled syringe resulted in patients' preference for HC over MC.
Human chorionic gonadotropin
Human chorionic gonadotropin (hCG) is used to induce final oocyte maturation and to provide luteal phase support during in vitro fertilization treatment. hCG is usually given via the IM route, but the SC route has also been described. Stelling et al evaluated serum and follicular fluid levels of hCG 36 hours after an IM or SC injection of 10,000 IU of hCG, at the time of oocyte retrieval. SC administration achieved a significantly higher hCG level in serum (348.6±98 IU/L versus 259.0±115 IU/L, P=0.014) and follicular fluid (233.5±85 IU/L versus 143.4±134 IU/L, P=0.021) compared with the IM group, despite a nonsignificant difference in pregnancy rates between the two groups. SC hCG obtains the desired clinical effects with less patient inconvenience. 97 Obese women had markedly lower Cmax, AUC, and average concentration (Cav) of hCG compared with normal-weight women after SC injection (all P<0.05); however, the pharmacokinetic parameters were similar for normal-weight and obese women after IM injection. 98 Thus, the IM route may be more suitable for obese women receiving hCG.
Hepatitis B immunoglobulin
Although current protocols allow for HBIG administration either intravenously or intramuscularly, experience with SC administration of post-transplant HBIG has been reported. Singham et al examined the efficacy and patient preference of SC administration of HBIG in maintaining anti-HBs titers >100 IU/L. All patients preferred the SC route to the IM route, and SC administration of HBIG effectively maintained anti-HBs levels above the requisite 100 IU/L while substantially reducing patient discomfort and improving patient satisfaction. 99 A prospective, observational, 18-week, open-label, single-arm, multicenter study confirmed that self-administering SC HBIG at home achieved high compliance and tolerability in maintenance liver transplant patients. 100 Thürmann et al investigated the pharmacokinetics and safety of a novel HBIG (Biotest AG) in healthy volunteers after a single SC or IM dose of 30 IU/kg and observed that the two routes have comparable pharmacokinetic parameters. 101 Biotest AG launched Zutectra® as a new SC prophylaxis against HBV reinfection in 2010.
Hydrocortisone
Hahner et al evaluated the pharmacokinetics and safety of hydrocortisone after SC and IM injection (100 mg) and after SC administration of sodium chloride (0.9%), respectively, at three different visits. Both IM and SC injections increased serum cortisol rapidly and were well tolerated. Regarding the administration route of hydrocortisone, eleven (91.7%) patients preferred the SC route (P<0.01), whereas one patient did not have any preference. SC administration of 100 mg hydrocortisone shows excellent pharmacokinetics for emergency use with a good safety profile and is preferred by patients over IM injection. 102 SC hydrocortisone may become an important new tool to improve the self-management of patients with adrenal insufficiency in the ambulatory setting. Interestingly, a significantly inverse correlation was observed between BMI and Cmax, Cav, and AUC after SC administration of hydrocortisone, whereas no correlation between BMI and pharmacokinetic data was seen after IM hydrocortisone administration. 102 This indicates that a patient's BMI should be considered when switching from the IM route to the SC route.
Morphine
A prospective randomized double-blind cross-over study compared intermittent IM and SC morphine boluses for postoperative analgesia. Patients received 0.15 mg/kg of morphine by SC or IM injection. The majority of patients indicated a strong preference for the SC route despite no significant differences in pain scores, respiratory rate, arterial oxygen saturation, heart rate, mean arterial pressure, sedation, or nausea scores between IM and SC administration of morphine. Postoperative analgesia by SC morphine bolus injection is as effective as IM injection with a similar side-effect profile but with greater patient acceptance and less risk. 103 Double-blind, randomized, cross-over studies compared the pharmacokinetic profiles following a single dose of 2 mg (healthy volunteers) or 5 mg (patients with advanced illness) morphine SC with 150 IU of rHuPH20, SC with 0.9% normal saline, or IV on three consecutive days. Significantly shorter mean T max and greater mean C max were achieved following SC dosing of morphine with rHuPH20 than without rHuPH20, although the extent of exposure of morphine was similar. 104
Discussion
Principles and detailed factors determining the choice of injection route
For some medications, an optimal injection route has been identified (Table 1). Obviously, it is inappropriate to say simply that one injection route is overwhelmingly better than another. With respect to insulin and ketamine, there seems to be conflict over whether IV is superior to SC. A statistically significant difference between SC administration and IV administration was observed in the efficacy of ketamine for postoperative analgesia in the study by Javid et al (IV > SC) and in the safety of ketamine for dissociative conscious sedation in the study by Honarmand et al (SC > IV), 57,58 indicating that the best choice of route for ketamine administration may vary with the clinical circumstances. This can also partly be explained by the fact that the ketamine dose differed between the two studies (0.5 mg/kg versus 0.6 mg/kg), and the researchers therefore gave more weight to side effects (eg, the rise in systolic blood pressure and rate-pressure product), given that SC ketamine was as effective as IV ketamine. The belief that insulin delivered by the IV route should act more rapidly and decrease blood glucose levels faster than the SC route has become questionable, because continuous SC insulin infusion using an insulin pump seemed to control glycemia better than IV insulin infusion in medical intensive care unit patients. Bodur et al explained that continuous SC insulin infusion could avoid extra problems brought by IV insulin infusion, including the 100-fold dilution (50 IU in 50 mL), which may introduce slight variations in the preparation of the final syringe and change the concentration of insulin considerably; adsorption of insulin to the surface of the syringe and the lines; and sharing the lumens of central catheters where other treatments are streamed, with the potential for interference with infusion rates. 44 We would like to provide an additional explanation: evidence has shown that continuous SC insulin infusion can significantly reduce the total daily insulin dose and achieve a significantly lower incidence of hypoglycemia compared with multiple daily SC insulin injections. 106

Safety, efficacy, patient preference, and pharmacoeconomics are four principles governing the choice of injection route (Figure 2). Safety and efficacy must be the first principles to be considered. Firstly, clinicians should know whether a route is contraindicated in a given case. Prescribing information for some medications describes IV, SC, or IM route-related contraindications. For example, calcium gluconate injection is only for IV use; SC or IM injection may cause severe necrosis and sloughing, and these are therefore contraindicated routes. 107 The preferred parenteral route of administration for promethazine hydrochloride injection is deep IM injection. SC promethazine is contraindicated as it may result in tissue necrosis. When used intravenously, promethazine hydrochloride injection should be given at a concentration no greater than 25 mg/mL and at a rate not to exceed 25 mg/min. 108 The FDA required a boxed warning for promethazine hydrochloride injection on September 16, 2009, highlighting the risk of serious tissue injury when this drug is administered incorrectly. Norepinephrine bitartrate injection must be diluted in dextrose-containing solutions prior to infusion.
Use of the IM and SC routes is contraindicated because of poor absorption and potential local necrosis due to the vasoconstrictive action of the drug. 109 For methylprednisolone sodium succinate and hydrocortisone sodium succinate, IM administration is contraindicated for the treatment of idiopathic thrombocytopenic purpura, and IV is the only injection route for this indication, although the two medications can be given intravenously or intramuscularly for other indications. Chlorpromazine hydrochloride injection is intended for deep IM use. The SC route is contraindicated for chlorpromazine administration to avoid causing skin irritation, while the IV route is reserved for severe hiccups, surgery, and tetanus. Potassium chloride injection must be diluted and infused over a defined period of time; IV push/bolus, SC, and IM are contraindicated routes that would result in the patient receiving too much potassium too quickly.
Second, the literature, rather than prescribing information, provides good suggestions regarding safety and efficacy issues in the choice of injection route. For example, epinephrine should be given intramuscularly during an episode of systemic anaphylaxis, IM morphine should not be given for pain management, and IM redback spider antivenom should not be recommended in the treatment of redback spider bite. If the safety and efficacy of two injection routes are equivalent, clinicians should give more weight to patient preference and pharmacoeconomics, because patient preference will ensure optimal treatment adherence and ultimately improve patient experience or satisfaction, while pharmacoeconomic concerns will help alleviate nurse shortages and reduce overall health care costs (a schematic sketch of this decision logic is given after Table 1).

Table 1 Medications for which head-to-head comparisons favor one injection route over another
Trastuzumab (SC > IV): higher patient preference in addition to comparable efficacy and safety profile. 10-12
Rituximab (SC > IV): reduced active health care professional time, lower total mean staff costs, and reduced patient time in the treatment room. 15
Anti-TNF medications (SC > IV): decreases in inflammatory markers maintained longer with SC golimumab, and higher preference for the route among patients already using SC therapy. 17,18
Bortezomib (SC > IV): lower incidence of neuropathy in the treatment of multiple myeloma, more time efficient for the patient and institution, and higher patient preference. 20,22
Amifostine (SC > IV): comparable compliance and efficacy with better overall tolerance, including the absence of clinical hypotension and protracted vomiting. 27,28
rhGM-CSF (SC > IV): an IV dose of rhGM-CSF was less potent at inducing a leukocytosis than equivalent SC doses and was associated with a higher incidence of generalized rash and first-dose reactions. 30,31
G-CSF (SC > IV): shorter time to neutropenia resolution and lower dose required to alleviate neutropenia with SC compared with IV G-CSF. 32,33
Recombinant human interleukin-2 (SC > IV): more patients with metastatic renal cell carcinoma experience stable disease, fewer patients undergo disease progression, and lower clinical and hematologic toxicity. 35,36
Immunoglobulin (SC > IV): less labor-intensive, less costly home-based administration with fewer systemic reactions. 37-40,43
Epoetin alfa (SC > IV): target hemoglobin or hematocrit maintained with a reduced epoetin dosage and substantial cost savings. 48-50
Heparin (SC > IV): significantly less discomfort at the injection site, better mobility, patients' overall preference, and greater cost-effectiveness compared with IV heparin therapy. 55,56
Opioids (SC > IV): lower rates of adverse events, including nausea/vomiting and major adverse events, than with IV administration. 63
Vitamin K1 (IV > SC): compared with SC vitamin K1, IV vitamin K1 caused a more prompt reduction in the INR; for patients excessively anticoagulated with warfarin, small doses of SC vitamin K1 may not correct the INR as rapidly or as effectively as IV administration. 59,60
Abatacept (IV > SC): among patients with rheumatoid arthritis who received SC abatacept after the switch from IV administration, SC abatacept showed a risk of relapse in 27% of cases, and a return to IV administration quickly reinstated disease control. 6
Insulin (SC > IV): compared with the traditional continuous IV infusion method, continuous SC insulin infusion using an insulin pump can achieve better glycemic control and significantly lower daily insulin requirements among medical intensive care unit patients. 44
Insulin (IV > SC): IV insulin achieved the glycemic target for more time in noncritically ill patients with parenteral nutrition-associated hyperglycemia than the SC regimen; 45 compared with sliding-scale-guided intermittent SC insulin injections, continuous IV insulin infusion induced a significant reduction in perioperative blood glucose levels and in the incidence of deep sternal wound infection in diabetic patients after cardiac surgical procedures. 46
Ketamine (IV > SC): for postoperative analgesia following a dose of 0.5 mg/kg, the IV route provides analgesia for 24 hours after surgery without significant side effects, whereas SC ketamine only controls analgesia within the first 6 hours after surgery. 57
Ketamine (SC > IV): for dissociative conscious sedation following a dose of 0.6 mg/kg, SC ketamine was as effective as, but safer than, IV ketamine (a significantly lower rise in systolic blood pressure and rate-pressure product). 58
Epinephrine (IM > IV): lower occurrence rate of adverse cardiovascular events and overdose. 65
Hepatitis B immunoglobulin (IM > IV): lower costs, and significantly better HRQOL scores on the flexibility and negative feelings scales in patients on IM HBIG compared with patients on IV HBIG. 66-68
Pegaspargase (IM > IV): lower occurrence rate of allergic reactions and a significantly less rapid onset of allergic reactions. 70,71
Antibiotics (IM > IV): the IV group had significantly greater mean total drug therapy costs than the IM group. 72,73
Ketamine (IV > IM): sedation was longer in the IM group, whereas the IV group experienced a shorter time from drug administration to patient discharge. 77,78
Morphine (IV > IM): IV infusion gave better pain relief and a significantly faster onset of analgesic effect than IM injections. 80-82
Antivenom (IV > IM): the proportion of patients who were pain-free at 24 hours was greater in the IV group than in the IM group; antivenom could not be detected in serum following IM administration. 83,84
Epinephrine (IM > SC): faster attainment of peak plasma concentration (8 versus 34 minutes), which matters because fatal anaphylaxis is associated with delayed epinephrine administration. 85
Interferon-beta-1a (SC > IM): higher efficacy in preventing relapses among patients with relapsing multiple sclerosis, and faster and more pronounced effects in decreasing new cortical lesion development and cortical atrophy progression. 90,91
Methotrexate (SC > IM): self-administration could reduce hospital visits and improve patient satisfaction. 95
hCG (SC > IM): SC administration achieves higher hCG levels in serum and follicular fluid, as well as less patient inconvenience, compared with the IM group. 97,98
Hepatitis B immunoglobulin (SC > IM): higher patient preference due to effectively maintaining anti-HBs levels while substantially reducing patient discomfort and improving patient satisfaction. 99,100
Hydrocortisone (SC > IM): higher patient preference. 102
Morphine (SC > IM): postoperative analgesia by SC morphine bolus injection is as effective as IM injection with a similar side-effect profile but with greater patient acceptance and less risk. 103

Medication administration-related factors (examples)
• Dosing frequency (once-weekly IM IFNβ1a was associated with higher adherence than three times weekly SC IFNβ1a. 92 Once-weekly SC administration of epoetin beta is as safe and effective in maintaining hemoglobin levels in stable hemodialysis patients as two or three times weekly administration of the same total dose, and would reduce clinic time for patients who do not self-administer, encourage self-administration, and improve overall compliance. 110 Reducing the dosing frequency of IV Velcade® from twice-weekly to once-weekly significantly reduced severe peripheral neuropathy (28% versus 8%, P<0.001) but did not affect efficacy. 111 Continuous SC insulin infusion is superior to multiple daily SC insulin injections regarding efficacy and incidence of hypoglycemia in Chinese patients with type 2 diabetes mellitus. 106)
• Formulation characteristics (SC high-concentration MTX versus SC medium-concentration MTX, rHuPH20-facilitated SC trastuzumab versus IV trastuzumab, rHuPH20-facilitated SC-IG versus IV-IG, rHuPH20-facilitated SC ceftriaxone versus SC ceftriaxone). 9,43,62
• Administration time (SC versus IV: insulin during the pre-, intra-, and postoperative periods). 47

Abbreviations: SC, subcutaneous; IV, intravenous; TNF, tumor necrosis factor; rhGM-CSF, recombinant human granulocyte-macrophage colony-stimulating factor; G-CSF, granulocyte colony-stimulating factor; INR, international normalized ratio.
Further research opportunities
Further studies are needed. First, it is necessary to conduct prospective intervention studies with multiple outcome measures following a switch to a more appropriate injection route. Second, both retrospective and prospective pharmacoeconomic studies should be encouraged. Third, patient preference is important in medical decision making when choosing treatment methods because it has implications for adherence and quality of life. Therefore, future drug efficacy and safety studies should include contemporaneous, actual patient preference where possible, utilizing appropriate measures. Fourth, it is interesting to evaluate the association of potential determinants with the choice of injection route.
For example, the effect of dosing frequency on the choice of injection route for bortezomib remains to be investigated. In patients with multiple myeloma, once-weekly IV and twice-weekly SC bortezomib have each been shown to offer noninferior efficacy and an improved safety profile compared with standard twice-weekly IV administration. 20,111 However, whether once-weekly SC bortezomib has advantages over once-weekly IV bortezomib needs to be validated in future prospective randomized trials. We did not focus on a single question in the form of a Cochrane-style systematic review or meta-analysis, because controversial conclusions from comparative studies of injection routes are currently rare and our objective was to enrich the knowledge base for optimally selecting the administration route through a panoramic investigation. Some limitations of this review are as follows. First, other databases such as EMBASE could not be used because of inaccessibility in the People's Republic of China. Second, we did not include comparative studies of injection routes that found no difference in efficacy, safety, cost, or patient preference, and this exclusion criterion restricts the view of the advantages brought by SC administration. For example, even when no significant difference in safety and efficacy has been observed between the SC and IV routes, SC administration may still be preferred because it is usually more time efficient for the patient and the institution. Nevertheless, a new horizon might be opened up by this updated review and its many new findings.
Conclusion
In this updated review, we summarized the findings of comparative studies of different injection routes, which should enrich the knowledge of safe, efficacious, economical, and patient preference-oriented medication administration and highlight research opportunities in clinical nursing practice. Patients are complex because of biological, medical, socioeconomic, and cultural factors. Personalized therapeutics would become part of routine practice and improve patient-specific outcomes if clinical practitioners performed comprehensive interventions, such as optimal choice of administration route, dosage and administration consultation services provided by pharmacy, prospective review of the appropriateness of physician orders by pharmacists, and pharmacotherapeutic monitoring following injection therapy.
"year": 2015,
"sha1": "452d4ec93184b297d039c6d4e89cdcfb6c36cb77",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=25770",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e940dcc0c6422b76cbf6273cb8647af9b0c4a494",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
From COPD epidemiology to studies of pathophysiological disease mechanisms: challenges with regard to study design and recruitment process
ABSTRACT Background: Chronic obstructive pulmonary disease (COPD) is a largely underdiagnosed disease including several phenotypes. In this report, the design of a study intending to evaluate the pathophysiological mechanism in COPD in relation to the specific phenotypes non-rapid and rapid decline in lung function is described together with the recruitment process of the study population derived from a population based study. Method: The OLIN COPD study includes a population-based COPD cohort and referents without COPD identified in 2002–04 (n = 1986), and thereafter followed annually since 2005. Lung function decline was estimated from baseline in 2002–2004 to 2010 (first recruitment phase) or to 2012/2013 (second recruitment phase). Individuals who met the predefined criteria for the following four groups were identified; group A) COPD grade 2–3 with rapid decline in FEV1 and group B) COPD grade 2–3 without rapid decline in FEV1 (≥60 and ≤30 ml/year, respectively), group C) ever-smokers, and group D) non-smokers with normal lung function. Groups A–C included ever-smokers with >10 pack years. The intention was to recruit 15 subjects in each of the groups A-D. Results: From the database groups A–D were identified; group A n = 37, group B n = 29, group C n = 41, and group D n = 55. Fifteen subjects were recruited from groups C and D, while this goal was not reached in the groups A (n = 12) and B (n = 10). The most common reasons for excluding individuals identified as A or B were comorbidities contraindicating bronchoscopy, or inflammatory diseases/immune suppressive medication expected to affect the outcome. Conclusion: The study is expected to generate important results regarding pathophysiological mechanisms associated with rate of decline in lung function among subjects with COPD and the in-detail described recruitment process, including reasons for non-participation, is a strength when interpreting the results in forthcoming studies.
Background
Chronic obstructive pulmonary disease (COPD) is a common disease strongly associated with increasing age and environmental exposure, most commonly own tobacco smoke exposure [1] but globally also indoor exposure to biofuel combustion exhaust [2]. The prevalence of COPD is generally reported in the range of 8-10% among adults. However, actions for smoking control in the society have been followed by reduced smoking in many countries, and a recently published study indicates that the prevalence of COPD has decreased in parallel with changes in smoking habits in Sweden [3]. Still, the underdiagnosis is substantial; merely 20-30% of all individuals with COPD are identified by health care [4,5]. Thus, the results from studies including selected populations recruited from the health care ('register-based studies') must be interpreted with care, as the large underdiagnosis will affect generalizability. Still, most studies on COPD epidemiology are cross-sectional, limiting the understanding of the disease process from a general population point of view and, furthermore, seldom include an evaluation of possible pathophysiological mechanisms in relation to disease progress.
Studies aiming to understand the underlying pathophysiological mechanisms of the disease process in COPD are often limited to small and highly selected study populations [6,7]. Whilst important findings on antiprotease imbalance in COPD have emerged from a number of small, selective patient studies, showing negative associations between matrix metalloproteinase-9 (MMP-9) and lung function [6,7], such study populations can hardly be considered representative of COPD in the general population; hence, it can be questioned to what extent the observed results are generalizable. However, a few population-based studies reflect similar results; higher plasma metalloproteinase-9 levels, indicating protease-antiprotease imbalance, were associated with lower FEV1 percent predicted among subjects in a population-based COPD cohort when compared with subjects without airway obstruction [8]. Nevertheless, to increase the understanding of the pathophysiological mechanisms related to disease progression in COPD, further studies of representative samples from the general population are warranted.
COPD was previously described as a smoke-induced lung injury but is today regarded as a heterogeneous syndrome, comprising several phenotypes [9], which may be related to different pathophysiological mechanisms. Rate of decline in lung function, degree of emphysema, exacerbation frequency, cardiovascular co-morbidity, and tobacco smoke exposure are just a few possible clinical factors related to underlying pathophysiological mechanisms and, thus, involved in the disease process.
The intention of this report is to describe the recruitment process of a study sample from a population-based COPD cohort to a study aiming at evaluating the pathophysiological mechanism in COPD in relation to one of the above-mentioned clinical phenotypes, characterized by rate of decline in lung function. The study was designed to address the hypothesis that certain biomarkers would differ between COPD subjects with rapid decline in lung function compared to those with a non-rapid decline, also including comparisons with subjects with normal lung function, both non-smokers and those with a history of smoking. Here, the recruitment process will be presented, along with rate and reason for non-participation, as well as the basic characteristics of the study population at recruitment.
Material and method
In 2002-2004, previously examined subjects from four population-based adult cohorts from the OLIN (Obstructive Lung disease In Northern Sweden) studies were invited to re-examination. All subjects fulfilling the spirometric criteria for airway obstruction, FEV 1 /VC <0.70, were identified (n = 993) together with age-and sex-matched subjects without obstructive lung function impairment. Since 2005, the study population (n = 1986) has been invited to annual examinations with a basic program including spirometry and a structured interview [10]. The current report is based on data from baseline in 2002-2004 and at follow-up in 2010 (first recruitment phase) or 2012/2013 (second recruitment phase).
The study sample in the current report was identified based on predefined criteria, into groups labelled A-D, as defined later. The aim was to include 15 subjects in each of the groups A-D in the study. The intention of groups A and B was to clearly separate COPD with rapid decline in FEV 1 (A) from COPD with more stable, FEV 1 , i.e. non-rapid decline (B), thus the predefined levels of FEV 1 decline in these groups were separated by an empty interval. The Global Initiative for Obstructive Lung Disease (GOLD) spirometric criterion for COPD was used [11].
Predefined groups
Group A - COPD, rapid decline: COPD GOLD grade 2-3 with a rapid decline in FEV1 (≥60 ml/year); ever-smokers with a smoking history of >10 pack-years at baseline.
Group B - COPD, non-rapid decline: COPD GOLD grade 2-3 with a non-rapid decline in FEV1 (≤30 ml/year); ever-smokers with a smoking history of >10 pack-years at baseline.
Group C - Ever-smokers with normal lung function: FEV1/VC ≥ 70% and FEV1 ≥80% of predicted at baseline and at recruitment; a decline in FEV1 <20 ml/year; ever-smokers with a smoking history of >10 pack-years at baseline.
Group D - Non-smokers with normal lung function: FEV1/VC ≥ 70% and FEV1 ≥80% of predicted at baseline and at recruitment; a decline in FEV1 <20 ml/year; non-smoker at baseline and at recruitment.
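For illustration only, the group assignment implied by these criteria can be written as a small function. This is a minimal sketch assuming that GOLD grade, annual FEV1 decline, smoking history and lung-function status have already been derived for each subject; all variable names are hypothetical and are not taken from the study database.

    # Hypothetical sketch of the A-D group assignment described above (not study code).
    def assign_group(gold_grade, decline_ml_per_year, ever_smoker, pack_years,
                     normal_lung_function):
        heavy_smoker = ever_smoker and pack_years > 10
        if gold_grade in (2, 3) and heavy_smoker:
            if decline_ml_per_year >= 60:
                return "A"   # COPD, rapid decline
            if decline_ml_per_year <= 30:
                return "B"   # COPD, non-rapid decline
            return None      # falls in the empty interval separating groups A and B
        if normal_lung_function and decline_ml_per_year < 20:
            if heavy_smoker:
                return "C"   # ever-smoker with normal lung function
            if not ever_smoker:
                return "D"   # non-smoker with normal lung function
        return None          # does not meet any of the predefined group criteria

    print(assign_group(2, 72.5, True, 25, False))   # expected: "A"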
The Regional Ethical Review Board at Umeå University, Sweden, approved the study. All participants signed a written informed consent and the study was carried out according to the Helsinki declaration. The study is referred to as 'Respiratory and Cardiovascular Effects in COPD (KOLIN)', ClinicalTrials.gov Identifier: NCT02729220.
Definitions
Smoking habits were classified as non-smoker (less than 1 cigarette per day for a maximum of 1 year), ex-smoker (stopped smoking at least 12 months ago), and current smoker (currently smoking or stopped smoking within the last 12 months). Ever-smoker was defined as ex-smoker or current smoker. Pack-years at baseline (2002-2004) were calculated.
Spirometry and spirometric classification
Spirometry was performed using a dry volume spirometer (Mijnhardt Vicatest 5, the Netherlands), following the American Thoracic Society/European Respiratory Society (ATS/ERS) guidelines [12]. Vital capacity (VC) was defined as the highest value of forced vital capacity (FVC) or slow vital capacity (SVC). Reversibility testing was performed if FEV1/VC < 0.70 or if forced expiratory volume in one second (FEV1) was <80 percent of predicted. COPD was defined by the spirometric criterion FEV1/VC < 0.70, using the highest value pre- or post-bronchodilation. Disease severity was classified according to the GOLD guidelines [11]; grade 2-3 includes subjects with FEV1 <80 and ≥30 percent of the predicted value. Swedish spirometric reference values for FEV1 were used [13], corresponding well to FEV1 in a symptom-free population of Northern Sweden [14]. Decline in FEV1 (ml/year) was calculated as (FEV1 at baseline - FEV1 at recruitment) divided by the number of years of follow-up (based on person-days), using the highest value pre- or post-bronchodilation.
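As an illustration of the calculations described above, the annual FEV1 decline based on person-days of follow-up and the fixed-ratio criterion could be computed as in the following minimal sketch; the numbers are hypothetical examples, not study data.

    # Hypothetical sketch (not study code) of the FEV1 decline and fixed-ratio calculations.
    from datetime import date

    def annual_fev1_decline(fev1_baseline_ml, fev1_recruitment_ml,
                            baseline_date, recruitment_date):
        """Decline in ml/year; a positive value means FEV1 has fallen."""
        years = (recruitment_date - baseline_date).days / 365.25
        return (fev1_baseline_ml - fev1_recruitment_ml) / years

    def meets_fixed_ratio_copd_criterion(fev1_ml, vc_ml):
        """GOLD fixed-ratio criterion: FEV1/VC < 0.70, highest pre- or post-bronchodilator values."""
        return fev1_ml / vc_ml < 0.70

    # Example: FEV1 falling from 2,950 ml (May 2003) to 2,430 ml (May 2010) is about 74 ml/year.
    print(round(annual_fev1_decline(2950, 2430, date(2003, 5, 1), date(2010, 5, 1)), 1))
    print(meets_fixed_ratio_copd_criterion(fev1_ml=2430, vc_ml=3900))   # True, ratio is about 0.62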
Exclusion criteria for participation
Systemic disease and/or treatment with immune-modulating therapy.
Clinical signs of upper or lower respiratory tract infection within the last six weeks.
Asthma.
Contraindication for bronchoscopy:
- Severe or unstable cardiovascular disease
- Other significant diseases, for example dementia, porphyria, cancer, respiratory insufficiency
- Abnormal pulmonary x-ray prompting specific investigation
Recruitment procedure
From the OLIN COPD study database, all subjects fulfilling the predefined A-D criteria of lung function, FEV1 decline, and smoking history were identified. The predefined criteria were based on baseline characteristics (n = 1986) and data from the examinations in 2010 (first recruitment phase) or 2012/2013 (second recruitment phase). The reason for two recruitment phases was to increase the study population, as the first recruitment phase did not identify a sufficient number of subjects in groups A and B who met the predefined criteria and had no exclusion criteria for participation in the study program. Identified subjects within each of the groups A-D were contacted by telephone in consecutive order with brief information about the study. Those with a primary interest in participating received written information at their home address and were offered a first appointment with a physician and a research assistant for informed consent, clinical examination, spirometry, and ECG recording. For those who met the study criteria and were willing to participate, the research assistant coordinated a second appointment for the study examinations, including, among other things, blood sampling, measurement of arterial stiffness, and bronchoscopy. A pulmonary x-ray was performed in the interval between the first and second appointments. Non-participation was classified, as defined below, at each step of the recruitment procedure.
Classification of non-participation during the recruitment process
- Fulfilling any of the exclusion criteria
- Declined participation due to unwillingness to undergo the bronchoscopy procedure
- Declined participation due to other reason, as specified
- Impossible to reach by telephone
Non-participation
Groups C and D, but not groups A and B, reached the preset aim of 15 participants; group A included 12 participants and group B 10 participants after inclusion of two individuals with a FEV 1 decline of 33 ml/year (thereby exceeding the preset criteria ≤30 ml/year). The reasons for non-participation in the study program among subjects fulfilling the preset criteria for group A and B, respectively, are shown in Table 2. In group A, 17 out of 37 subjects, 45.9%, were not eligible due to exclusion criteria (n = 15) or death (n = 2), while in group B, 10 out of 29 subjects, 34.4%, were not eligible due to exclusion criteria (n = 9) or death (n = 1). In group A, 5 out of 37 individuals, 13.5%, declined bronchoscopy, and in group B, 6 out of 29 individuals, 20.7%, declined bronchoscopy.
Characteristics of study participants at recruitment in 2010 and 2012/2013, respectively
At baseline in 2002-2004, all subjects participating (p) in the study program, groups Ap-Cp, had a smoking history of at least 10 pack-years. In COPD with rapid decline (group Ap), there were 9 active smokers and 3 ex-smokers, and in non-rapid decline (group Bp), there were 3 active smokers and 7 ex-smokers. In group Cp, there were 3 active smokers and 12 ex-smokers. Characteristics of the participants in groups Ap-Dp at the examinations in 2010 (first recruitment phase) and 2012/2013 (second recruitment phase), respectively, are shown in Table 3. The absolute numbers of current smokers had decreased within groups Ap-Cp since baseline.
Discussion
Tobacco smoking is the most well-known risk factor for COPD. Some smokers with COPD experience rapid decline in lung function, while others have a low or 'normal' rate of decline and, at the same time, there are smokers with normal lung function without an enhanced rate of lung function decline, similar to non-smoking subjects. In this report, we describe the recruitment process of a study sample and the design of a study aiming to increase the understanding of the underlying pathophysiological mechanisms in COPD contributing to differences in lung function decline, non-rapid and rapid decline in FEV1. The study sample was recruited from the OLIN COPD study, which includes a well-characterized, large population-based COPD cohort and age- and sex-matched referents without obstructive lung function impairment. The OLIN COPD study includes annual follow-ups since recruitment in 2002-2004, and has so far contributed data covering a wide area, as the following examples show: from genetics, disease mechanisms and mortality to comorbidities, physical activity, muscle strength, fatigue, and health economics, including also evaluation of prognostic factors [8,15-24]. The study provides valid data for estimation of lung function decline among subjects both with and without COPD [10]. The recruitment of subjects without COPD reached the set goal, while this was not the case for those with COPD, and among them, the most common reasons for a fairly high exclusion rate were medical conditions contraindicating bronchoscopy or inflammatory conditions/medication expected to affect the outcome.
It has been suggested that a decline in FEV 1 among subjects with COPD needs to be evaluated across several years, as lung function values may naturally fluctuate somewhat between repeated examinations performed at shorter time intervals [25]. In the present study, we included an observation period of at least six and even up to 10 years as a basis for calculating decline. Still, the chosen levels of defining rate of decline in FEV 1 may be discussed. Normal rate of decline in FEV 1 among middleaged adults is described as 25-30 ml/year [26]. Here, we chose an even stricter criterion for groups C and D, having normal lung function at recruitment and follow-up as well as an annual decline in FEV 1 within the range of <20 ml. The two groups with COPD were defined to discriminate between subjects with rapid decline in FEV 1 (A) and subjects without rapid decline in FEV 1 (B). There is, however, no established definition of rapid decline in lung function, or specifically rapid decline in FEV 1 , in the literature. In 1977, Fletcher and Peto published the classic illustration of lung function decline in relation to smoking habits over eight years among 792 men [27], and they also discussed 60 ml/year as a cut of for rapid decline. More recent data from the COPD Gene study showed that the overall mean (SD) annual decline among subjects with GOLD 2 was 45.6 (61.1) ml/year across a 5-year observation time [28] and, in another study, the mean rate of decline among incident cases of COPD was 51 ml/year, estimated throughout a 10-year period [29]. To exemplify various cut-offs for rapid decline, approximately one third of the patients in a hospital-based COPD cohort were classified as having a rapid decline in FEV 1 with a mean decline of 78 ml/year (95%CI 73-83) [30], while in a populationbased study, half of the subjects above the age of 40 were identified as rapid decliners with a mean (SD) decline in FEV 1 of 53 (21) ml per year [31]. In these two referred studies, the rest of those with COPD, i.e. non-rapid decliners, had a mean decline in FEV 1 of 26 ml (95%CI 23-29 ml) and 27 (18) ml per year, respectively. For comparison, in a study including patients from pulmonary clinics, rapid decline was set at 40 ml/year [32], based on the findings in the ECLIPSE study [33]. Our decision to employ 60 ml/year as a cutoff for group A, representing a more rapid decline in FEV 1 , and <30 ml/year, as a more normal rate of decline in FEV 1 for group B, can be considered well-motivated and in line with the above-referred publications.
In the OLIN COPD cohort, group D, non-smokers with normal lung function and a low rate of decline in FEV1, was, as expected, the largest group, whereas group B, COPD with a smoking burden of at least 10 pack-years and a fairly normal rate of decline in FEV1, was the smallest group. For comparison, in the previously referred study of incident cases of COPD, just over a quarter of the incident cases, 27.6%, had a decline of less than 30 ml/year [29] and, on the other hand, in the ECLIPSE study, only 38% of the participants presented a decline in FEV1 above 40 ml/year during the three-year follow-up [33]. Furthermore, the previously referred recent publication in the New England Journal of Medicine [31] provides additional support for the assumption of a low rate of decline in lung function in a sub-population of individuals with COPD; rapid decline in lung function is not obligate among subjects with COPD and an FEV1 below 80% of predicted, corresponding to GOLD 2 and higher. One explanation is that lung function never reached the expected normal values in this group and that, with a normal rate of decline in lung function, they eventually fulfilled the spirometric criteria for COPD [31,34]; at least some of those with COPD in group B in our study may belong to such a trajectory.
[Table 3 footnotes: a Based on the best of pre- and post-bronchodilation values. b Increased medication or received new medication during the last 12 months. c Treated with antibiotics and/or oral steroids during the last 12 months. d One person in each of groups B and C was excluded due to impaired mobility for reasons other than respiratory. e Two persons were treated, one of them twice. f Antibiotics. g Heart disease includes any of angina pectoris, percutaneous coronary intervention (PCI), coronary artery bypass surgery (CABG), myocardial infarction, or chronic heart failure.]
There were no problems recruiting the intended number of 15 subjects to groups C and D, while this goal could not be reached in the COPD groups, A and B, despite a second recruitment phase. To get closer to the recruitment goal, we allowed two individuals with an FEV1 decline of 33 ml/year to be included in group B. The main reason for non-participation in groups A and B was exclusion due to clinical contraindications for bronchoscopy, or medical conditions, such as inflammatory diseases or need of immunosuppressive drugs, which may affect the results on inflammatory endpoints in the main study; in total, this applied to nearly every other subject in group A and more than one out of three in group B. It is well known that comorbidities are common among subjects with COPD, and cardiovascular diseases are the most frequent [17,35]. The review of non-participation revealed that co-morbidities constituted a considerable obstacle when recruiting participants to this study, which also included an invasive procedure, bronchoscopy. It was evident that, even with access to a large population-based longitudinal COPD cohort, we could not meet the intended number of participants in groups A and B.
Most studies on pathophysiological mechanisms in COPD are cross-sectional and include highly selected study populations unrelated to population-based samples [36,37]. When recruiting participants to groups A and B in the current study, non-participation was fairly high among subjects with COPD. Through detailed data from the recruitment process, we will be able to discuss non-participation in relation to forthcoming results and, moreover, generalizability in relation to COPD in the society in a way that most other studies do not allow. However, in a longitudinal study also, the healthy survivor effect must be taken into account. In a previous publication based on cross-sectional data collected in 2010 from the OLIN COPD study, it was reported that subjects deceased from baseline in 2002-2003 until 31 December 2009 were older and had a higher prevalence of COPD, productive cough, and heart disease compared to the participants in the 2010 examinations, thus supporting a healthy survivor effect [18]. As a consequence of an expected healthy survivor effect, forthcoming results from the current study may rather underestimate than overestimate differences and/or associations when comparing groups.
In this study, the fixed ratio spirometric criterion for COPD according to GOLD was used. It is well-known that the fixed ratio will overestimate COPD among elderly [38] and the lower limit of normal criteria are nowadays recommended to be used in epidemiological research [39]. Still, most current clinical guidelines for diagnosis and management of COPD are based on the fixed ratio criterion and GOLD stage 2 and above are considered to identify clinically relevant disease [40,41]. Thus, the fixed ratio criterion is still highly clinically relevant for spirometric classification of COPD and allows the results to be interpreted in the clinical setting.
The OLIN longitudinal COPD study that was the basis for recruitment of the present study population includes a large population-based COPD cohort comparable to that of NHANES I [42]. Long-term followups of population-based COPD cohorts, in which COPD is classified according to spirometric criteria of accepted guidelines, are rare. The Copenhagen City Heart study was recruited in 1976 and included more than 2000 subjects classified as COPD and, so far, there are three follow-ups within 25 years, the latest in 2001-2003 [43]. However, it is an open study including new subjects at each of the followups and, thus, not comparable to either the OLIN COPD or the NHANES studies. In a recent publication from the Copenhagen City Heart, including data from 8000 subjects with a follow-up of approximately 18 years, merely 303 cases of COPD with at least two lung function tests were identified, and their mean (SD) decline in FEV 1 was 46 (28) ml/year [44]. There are few population-based longitudinal studies with several years of follow-up providing large enough COPD cohorts to study pathophysiological mechanisms in relation to disease progression, here assessed as rate of decline in FEV 1 .
Conclusion
The presented study design provides a good basis for evaluating underlying pathophysiological mechanisms contributing to differences in lung function decline, rapid and non-rapid decline, among subjects with COPD derived from a population-based sample, also including comparison with ever-smokers and non-smokers with normal lung function. A large burden of comorbidities among subjects with COPD was the most important factor affecting participation in this study, which included an invasive procedure, bronchoscopy. When aiming to recruit patients with a specific COPD phenotype from longitudinal population-based studies, various comorbidities may significantly hamper the recruitment process, even if the basis for recruitment is considerable. However, our study is expected to generate important results regarding pathophysiological mechanisms related to disease progression, assessed as lung function decline, among subjects with COPD, and the recruitment process described in detail, including reasons for non-participation, is a strength when interpreting and assessing the generalizability of the results.
"year": 2017,
"sha1": "a80ab97ea2268e7da0fec50c4776d2884c2fd088",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/20018525.2017.1415095?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b2ae0783447c811cb5fb519a5c526e7d69fdbf65",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
COVID-19 and sexual reproductive health service utilisation among women of reproductive age in Zambia
ABSTRACT Zambia has made improvements in addressing maternal and child health in the past two decades. The COVID-19 pandemic outbreak, which could put a strain on the healthcare systems and alter how access to sexual and reproductive health services is provided, could, however, have a negative impact on the gains. Data from a recent nationally representative COVID-19 survey, conducted in April 2021, was used to examine the utilisation of antenatal care services among pregnant women during the pandemic in Zambia. Overall, 10 per cent (95% CI: 8.0–11.4) of pregnant women missed antenatal care during the COVID-19 period, signifying an increase when compared to the time before the outbreak of COVID-19. Age of a woman and level of education were associated with the risk of missing antenatal care services. The study highlights the need to improve communication to women on availability and access to sexual reproductive health services to stimulate demand and use among women during this period.
Introduction
Zambia has made progress in improving sexual and reproductive health (SRH) outcomes in the past two decades, especially in reducing maternal mortality, child mortality and morbidity, and increasing modern contraceptive use. The pregnancy-related mortality ratio fell from 729 in 2001 to 278 in 2018 (Zambia Statistics Agency, Ministry of Health (MOH) Zambia, and ICF, 2019). Many studies have been conducted in Zambia to examine associations between women's socio-economic and demographic factors and utilisation of different sexual and reproductive health services (Chola et al., 2020; Lasong et al., 2020; Namukoko et al., 2022; Nduku & Simo-Kengne, 2022; Simona et al., 2022; Phiri et al., 2023). These studies found that women who are older, educated, live in urban areas, are working class or belong to rich households were more likely to utilise SRH services. However, it is not clear whether these findings would hold amid the on-going fear of the COVID-19 pandemic, which has the potential to threaten maternal and child health care and antenatal care attendance, including facility deliveries, especially in SSA countries, Zambia inclusive (Feyissa et al., 2020; Pfitzer et al., 2020; Riley et al., 2020). The COVID-19 pandemic has altered people's lives not only in Zambia but also in other SSA countries. Working from home, social separation, disruption of health service delivery and fear of infection at health facilities have the potential to change the health-seeking behaviour of the population (Feyissa et al., 2020; Hashem et al., 2021; Kumar et al., 2021; Mudenda et al., 2022).
In Zambia, just like in many other SSA countries, the COVID-19 pandemic has put a strain on health systems and changed the landscape on access to general health as well as sexual and reproductive health service provision, including contraceptive use, antenatal care, postnatal care and sexual behaviour (Saasa & James, 2020;United Nations Zambia, 2020;Mallah et al., 2021;Mudenda et al., 2022;Ncheka et al., 2023).The impact of COVID-19 on the provision of sexual and reproductive health services can have direct maternal health effects such as prevention of unplanned pregnancies, unsafe abortions, obstetric complications without care, maternal deaths and newborn deaths (Kumar et al., 2021;Mallah et al., 2021).The disruption in supply of contraception methods in public facilities may compel women of reproductive age to use traditional methods such as herbs, withdrawal and lactational amenorrhea method that have proved to be ineffective in the past (Kumar et al., 2021;Mallah et al., 2021).
Since the onset of the COVID-19 pandemic in Zambia and other parts of SSA, there has been limited information in the literature on how various socio-demographic factors, such as maternal age, socioeconomic status, education level, marital status and access to healthcare facilities, influence women's utilisation of antenatal care services during the COVID-19 period. This has been partly due to a lack of national-level data to adequately measure the impact of the COVID-19 pandemic on SRH service utilisation. While a few studies have explored the overall impact of the pandemic on healthcare utilisation in the context of SSA (Adelekan et al., 2020; Ferreira-Filho et al., 2020; Feyissa et al., 2020; Hashem et al., 2021), these studies have not specifically investigated its effects on pregnant women seeking antenatal services. Moreover, the majority of the existing research has focused on examining the effect of COVID-19 restrictions on general health care access in high-income countries where data are available, neglecting the experiences and challenges faced by women in low-income countries like Zambia. Therefore, it is important to have a comprehensive understanding of how the COVID-19 pandemic affected utilisation of SRH services among women of reproductive age in Zambia, in order to generate evidence to inform strengthening of SRH programming in the country amid the pandemic crisis.
Zambia as part of the countries in SSA which was heavily impacted by COVID-19 has been implementing various health measures to mitigate the effects of the pandemic on the population (Ministry of Health Zambia, 2020;Mallah et al., 2021).Zambia, like other countries in the region, developed the COVID-19 multi-sectoral response plan to address both preventive treatment and management of the pandemic.These measures included adequate provision of family planning services and commodities, especially to the vulnerable groups, such as pregnant women (Ministry of Health Zambia, 2020;United Nations Zambia, 2020).Improving access to and utilisation of maternal health services among women of reproductive age, especially during the pandemic period, is key in meeting their reproductive health needs to limit and space child-bearing (Fabic et al., 2015;Ferreira-Filho et al., 2020;Mallah et al., 2021) Theoretically, the use of maternal health care services during the COVID-19 pandemic can be well understood within the underpinning of the Health Belief Model (HBM).HBM is one of the early models in the field of health behaviour.The model was created in the 1950s by social psychologists Irwin M. Rosenstock and colleagues (Jones et al., 2015).It is a theoretical framework that helps explain and predict healthrelated behaviours of a population or a group of people.It was initially developed to determine the reasons why people shunned public health screening initiatives.According to the HBM, a person's decision regarding his or her health behaviour is influenced by ones beliefs regarding their susceptibility to illness, the severity of the illness, the advantages of taking action, and the obstacles to action (Rosenthal et al., 1992;Zhao et al., 2012;Jones et al., 2015).This model therefore, provides explanation of the likelihood of using reproductive health services among women from the point of perceived benefits (Rosenthal et al., 1992;Azfredrick, 2016).
In Zambia, the COVID-19 pandemic has posed significant challenges to the utilisation of maternal health care services such as antenatal services, post-natal and family planning services (Ministry of Health Zambia, 2020;Mudenda et al., 2022).Therefore, analyzing this problem through the lens of the health behaviour models can provide useful insights for improving maternal health programme.The Health Belief Model, for instance, emphasises how perceived susceptibility to the COVID-19 virus and the severity of its consequences can impact pregnant women's decision-making process on whether to attend antenatal care or not.Also, fear of contracting the virus at the healthcare facilities and concerns about potential exposure to infected individuals during ANC visits may contribute to reduced utilisation rates among women.It is crucial to understand these dynamics within the context of health behaviour models in order to develop focused interventions that will address the issues and ensure that use of antenatal services during the COVID-19 period is not negatively affected.
Therefore, examining micro-level national data would be essential in providing a holistic picture of the impact of COVID-19 on sexual and reproductive health behaviour among women of reproductive age, as well as identifying the determinants of healthseeking behaviour in Zambia.Due to lack of national level micro-data in most countries in SSA regarding COVID-19 and sexual reproductive health service utilisation, it is not yet clear what individual and household level factors are influencing women's access and utilisation of SRH care services during the COVID-19 period in SSA and Zambia in particular.Evidence from this current analysis would be useful to support generation of information to guide SRH programming and implementation in the country during the pandemic period in order to ensure that maternal health care services are adequately utilised.
Thus, the aim of this study was to assess the individual-level factors associated with non-utilisation of antenatal care services among women of reproductive age in Zambia during the COVID-19 period. The study purpose was accomplished by making use of nationally representative data collected in 2021 to assess the socio-economic impact of COVID-19 on the Zambian population. The COVID-19 Socio-economic Impact Assessment (SEIA) survey collected data on access to and utilisation of health services among women of child-bearing age (15-49) during the COVID-19 period.
Data source
The study used data extracted from the 2021 Socio-economic Impact Assessment (SEIA) survey conducted in Zambia. Data were collected with the reference period 1st March 2020 to 30th April 2021. The SEIA survey drew a nationally representative sample of households selected via a two-stage stratified cluster sampling technique. The survey collected data on family planning utilisation from women aged 15-49 during the COVID-19 period. Interviews were conducted using two main questionnaires, namely a household questionnaire and an individual questionnaire. The individual questionnaire included a section which captured indicators on family planning and sexual reproductive health. For our study investigation, we analysed data from the individual respondent data file, which contained all information related to women of reproductive age and was collected using the individual questionnaire. Data were collected by trained field workers who were well-versed in the wide range of topics covered by the survey. A detailed explanation of the methodology is provided in the survey report (Zambia Statistics Agency, 2021). Furthermore, data from the DHS 2007-18 were used to compare the prevalence of antenatal care service utilisation among women before and during the COVID-19 period.
Study sample
The analysis sample for this study comprised women of child-bearing age (15-49 years) extracted from the COVID-19 socio-economic impact assessment survey. The survey employed a two-stage stratified cluster sample design. In the first stage, 419 Enumeration Areas (EAs) were selected and, in the second stage, systematic sampling was used to select 25 households from each enumeration area (EA), arriving at 10,490 households. All persons 10 years and older in the selected households (usual household members), or visitors who spent a night with the household, were eligible for an individual interview (Zambia Statistics Agency, 2021). For this study, a total sample of 11,055 women of reproductive age (15-49 years) was considered to analyse the prevalence of sexual and reproductive health outcomes. Furthermore, the analysis of antenatal care service utilisation was restricted to women who were pregnant during the COVID-19 period (n = 1,523), because these women were in need of antenatal services during the period. Therefore, all non-pregnant women were excluded from the multivariate regression model. The SEIA survey had two main questionnaires, namely the household and individual questionnaires.
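The two-stage selection logic can be sketched as follows. This is a schematic illustration under an assumed household frame; it is not the Zambia Statistics Agency's actual sampling frame, stratification or weighting procedure.

    # Schematic sketch of two-stage cluster sampling (hypothetical frame, not the SEIA frame).
    import random

    random.seed(1)
    # Hypothetical frame: 1,000 enumeration areas (EAs), each listing 200 households.
    frame = {f"EA_{i:04d}": [f"EA_{i:04d}_HH_{j:03d}" for j in range(1, 201)]
             for i in range(1, 1001)}

    selected_eas = random.sample(list(frame), k=419)      # stage 1: select 419 EAs
    selected_households = []
    for ea in selected_eas:                               # stage 2: systematic sample of 25 households per EA
        households = frame[ea]
        step = len(households) // 25
        start = random.randrange(step)
        selected_households.extend(households[start::step][:25])

    print(len(selected_eas), len(selected_households))    # 419 EAs and 10,475 households in this sketch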
Outcome variables
The outcome variables of interest in this study were (i) antenatal care (ANC), (ii) HIV treatment access, and (iii) postnatal care (PNC) service utilisation during the COVID-19 period. The SEIA defined these outcome indicators in line with the World Health Organisation's guidelines, and therefore data were collected to produce estimates that would inform health policy and programmes addressing the impact of COVID-19 in the country. The COVID-19 socio-economic impact assessment survey collected information on access to and utilisation of sexual reproductive health services among women of reproductive age (15-49) during the COVID-19 period in Zambia. Each outcome variable was classified as a binary outcome: HIV treatment access ('0' representing no access and '1' representing access); antenatal visit ('0' representing did not miss an antenatal visit and '1' representing missed an antenatal visit); and postnatal access ('0' representing did not miss postnatal care and '1' representing missed postnatal care). All the outcome variables were selected based on a prior review of the literature.
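A minimal sketch of the binary coding described above is shown below; the raw column names are hypothetical and do not correspond to the actual SEIA variable names.

    # Hypothetical recoding sketch for the three binary outcomes (column names assumed).
    import pandas as pd

    df = pd.DataFrame({
        "missed_anc_raw": ["yes", "no", "no", "yes"],
        "missed_pnc_raw": ["no", "no", "yes", "no"],
        "hiv_treatment_access_raw": ["yes", "yes", "no", "yes"],
    })

    df["missed_anc"] = (df["missed_anc_raw"] == "yes").astype(int)            # 1 = missed an antenatal visit
    df["missed_pnc"] = (df["missed_pnc_raw"] == "yes").astype(int)            # 1 = missed postnatal care
    df["hiv_access"] = (df["hiv_treatment_access_raw"] == "yes").astype(int)  # 1 = accessed HIV treatment
    print(df[["missed_anc", "missed_pnc", "hiv_access"]])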
Independent variables
Based on literature review, we identified individual level predictors that might be associated with family planning utilisation among women in the reproductive age during the COVID-19 period.We classified these variables as socio-economic and demographic factors, and underlying health conditions.The COVID-19 survey questionnaires and data dictionary were used to identify the independent variables.The main independent variables included in this study were: age of a woman (15-24, 25-34, and 35-49 years); residence (urban, rural); province (central, Copperbelt, Eastern, Lusaka, Luapula, Muchinga, Northern, North-western, Southern and Western); employment status (employed, unemployed); marital status (never married, married, formerly married); education level (none, primary, secondary, tertiary); knowledge of COVID-19 preventive measures (yes, no); perceived knowledge of COVID-19 treatment options (yes, no); having any medical condition such as diabetes or HIV or hypertension (yes, no).
Statistical analysis
Statistical software Stata SE version 17.0 was used to perform weighted analysis, which took the complex survey design into account. We performed descriptive analysis to summarise the study sample and outcome indicators. Categorical variables were presented using frequencies and percentages. Bivariate analysis was conducted to explore the association between explanatory variables and the outcome variable 'antenatal care service utilisation'. Furthermore, we conducted a binary multivariate logistic regression to examine the determinants of utilisation of antenatal care services among pregnant women during the COVID-19 period in Zambia. The choice of the analysis model was informed by the binomial distribution of the outcome variable for the study. All explanatory variables were entered into the model to obtain adjusted odds ratios. The adjusted odds ratios (AOR) were calculated along with their respective 95 per cent confidence intervals (95% CI). The 'svy' command in Stata allowed for correction for under- and over-sampling and for the complex survey design.
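For readers who do not use Stata, the modelling step can be approximated in Python with statsmodels, as in the minimal sketch below. This is only an illustration on simulated data: the variable names and weights are hypothetical, and treating survey weights as frequency weights approximates, but does not reproduce, the design-based standard errors obtained with Stata's svy commands.

    # Illustrative weighted logistic regression on simulated data (not the SEIA analysis itself).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "age_25_34": rng.integers(0, 2, n),         # hypothetical age-group dummy
        "secondary_plus": rng.integers(0, 2, n),    # hypothetical education dummy
        "weight": rng.uniform(0.5, 2.0, n),         # hypothetical survey weights
    })
    # Simulate the binary outcome 'missed antenatal care' from an assumed logistic model.
    linpred = -1.5 + 0.4 * df["age_25_34"] - 0.6 * df["secondary_plus"]
    df["missed_anc"] = rng.binomial(1, 1 / (1 + np.exp(-linpred.to_numpy())))

    X = sm.add_constant(df[["age_25_34", "secondary_plus"]])
    fit = sm.GLM(df["missed_anc"], X, family=sm.families.Binomial(),
                 freq_weights=df["weight"]).fit()
    print(np.exp(fit.params))       # adjusted odds ratios
    print(np.exp(fit.conf_int()))   # 95% confidence intervals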
Description of women captured in the COVID-19 socio-economic impact assessment survey
Table 1 shows the background characteristics of the women (aged 15-49 years) included in the study analysis. The study included 11,055 women of reproductive age; of these, 1,523 were pregnant during the COVID-19 pandemic period. The mean age of the respondents was 28 years. There were more women in the age group 15-24 (41.3%) than in the older age groups. Slightly more than half (50.4%) of the respondents were residing in rural areas at the time of the survey. A larger proportion of women had attained secondary level of education. The majority of women captured in the survey were not working. Knowledge of COVID-19 was universal; however, the majority of women (52.6%) indicated they did not know the COVID-19 prevention measures.
Prevalence of sexual and reproductive health outcomes during COVID-19 period
Figure 1 shows the prevalence of the sexual and reproductive health outcomes measured by the survey. Results show that 19 per cent of women who gave birth during the COVID-19 period missed postnatal care services, while 10 per cent of the pregnant women missed antenatal care services. Four per cent of women indicated that they were unable to access HIV treatment during the same period (Figure 1). Trend results show that the percentage of pregnant women who missed antenatal care increased during the COVID-19 period. The study shows a significant increase in the percentage of women who missed antenatal care services, from 1.1 per cent as captured in the 2018 DHS to 10 per cent in 2021 (Figure 2). The observed increase in the proportion of women who missed antenatal care services could, to some extent, be explained by women's fear of contracting the COVID-19 virus at the health facility. Findings also reveal that there was a relationship between level of education and antenatal care utilisation during the COVID-19 period. Women with no formal education and those with primary education had a higher chance of missing antenatal care services.
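The style of estimate reported above (a weighted proportion with a 95% confidence interval) can be illustrated with the following minimal sketch; it uses hypothetical data and a normal approximation with the Kish effective sample size, which is a simplification of the design-based estimation used for the survey.

    # Hypothetical sketch: weighted prevalence with an approximate 95% confidence interval.
    import numpy as np

    missed = np.array([1, 0, 0, 1, 0, 0, 0, 0, 1, 0])                 # 1 = missed antenatal care
    w = np.array([1.2, 0.8, 1.0, 1.1, 0.9, 1.0, 1.3, 0.7, 1.0, 1.1])  # hypothetical survey weights

    p = np.average(missed, weights=w)       # weighted prevalence
    n_eff = w.sum() ** 2 / (w ** 2).sum()   # Kish effective sample size
    se = np.sqrt(p * (1 - p) / n_eff)       # normal-approximation standard error
    print(f"{p:.3f} (95% CI {p - 1.96 * se:.3f} to {p + 1.96 * se:.3f})")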
Our study found that women who reported having knowledge of COVID-19 treatment options had a higher chance of missing antenatal care services compared with those who had no knowledge (13.5% versus 7.5%; p < 0.01). Residence, marital status and employment status were found not to be associated with the chance of missing antenatal care services among women of reproductive age during the COVID-19 period in Zambia. When we controlled for the covariates in the full multivariate model, we observed that age of a woman, education, province and perceived knowledge of COVID-19 treatment options were associated with missing antenatal care services. In terms of education level, it was noted that an increase in level of education was associated with a reduced chance of missing antenatal care services among pregnant women. Women with secondary or tertiary level of education had odds of (AOR: 0.62; 95% CI: 0.28-1.38) and (AOR: 0.17; 95% CI: 0.03-0.94), respectively. Furthermore, an association between perceived knowledge of COVID-19 treatment options and antenatal care utilisation was observed, with women who had knowledge of COVID-19 treatment options being 6% less likely to miss antenatal care services than those with no knowledge of treatment options. Regional variations were observed in the utilisation of antenatal care during the COVID-19 period in Zambia. Women who were living in Copperbelt province (AOR: 0.62; 95% CI: 0.28-1.38), Luapula province (AOR: 0.62; 95% CI: 0.28-1.38) and Western province were less likely to miss ANC compared with women who were living in Lusaka. Marital status, place of residence, employment status, knowledge of COVID-19 prevention measures and having a medical condition were not associated with the risk of missing antenatal care services (Table 3).
Discussion
For over four decades, increasing access to sexual reproductive health and other family planning services has been a major global goal of health and development programming (Alkema et al., 2013;Fabic et al., 2015;Blackwell et al., 2020;Kantorová et al., 2021).Since then, significant progress has been made in reducing unmet family planning needs, increasing contraceptive prevalence, and preventing unintended pregnancies (Cleland et al., 2014;Bongaarts & Hardee, 2019).In recent decades, the rate of progress has, however, slowed down or stalled, particularly in some countries in SSA (Fabic et al., 2015;May, 2017;Bongaarts & Hardee, 2019).However, the gains made in improving sexual reproductive health in SSA may be offset by further disruption of provision and access to SRH due to emerging COVID-19 variants.
Using cross-sectional data from the COVID-19 socio-economic impact assessment survey conducted in Zambia in 2021, this study sought to analyse the influence of the COVID-19 pandemic on utilisation of sexual reproductive health services in Zambia.The study has demonstrated that COVID-19 has affected utilisation of sexual reproductive and maternal health care services in Zambia.This finding is consistent with similar studies conducted in Ethiopia and United States of America, which also reported a decline in access to family planning and sexual reproductive health services (Feyissa et al., 2020;Lindberg et al., 2020;Tadesse, 2020;Gleason et al., 2021;Kassie et al., 2021).This study focused on the influence of socio-demographic as well as health factors on and utilisation of reproductive health services.Another study conducted in Burkina Faso and Kenya in 2020 found that 14 percent and 4 per cent of women who were not using contraception in Kenya and Burkina Faso, respectively, reported COVID-19 related reasons for non-use (Karp et al., 2021), highlighting the significance of establishing secure sexual reproductive health service delivery methods when health crises occur.Adelekan et al. (2020) found that demand-driven usage trends for family planning services decreased in the two months prior to the COVID-19 outbreak and further decreased during the lockdown in Gauteng province of South Africa (Adelekan et al., 2020).Furthermore, in Sierra Leone, Sevalie et al. (2021) found a reduction in the use of health services during the COVID-19 period.
The literature review reveals that there is little evidence of understanding the factors associated with sexual reproductive health outcomes during the COVID-19 pandemic in Zambia.Thus, analysis done in this study bolsters the importance of our findings to inform public health and sexual reproductive policy and programming during the COVID-19 era.
The study found that age, education and knowledge of COVID-19 treatment options were important factors associated with utilisation of antenatal cares services among pregnant women in Zambia.Additionally, this study has revealed that residence, marital status, employment status, knowledge of COVID-19 prevention measures, and having a medical condition (diabetic status, hypertension status and HIV status) had no effect on influencing utilisation of sexual reproductive health services in women.One of the probable reasons that could have led to some women missing antenatal care, was fear of COVID-19 infections at a health facility.
Our study further showed that the odds of missing antenatal services among pregnant women, was lower among those in the age groups 25-34 and 35-49 years.This suggests that a woman's age plays a significant role in determining the extent of sexual reproductive health service utilisation in times of the pandemic.The study further found that a woman's age was negatively associated with use of sexual reproductive health services, implying that older woman had lower risk of missing antenatal care services.This study finding resonates with findings of a previous study conducted in Ethiopia which reported that older women were more likely to attend antenatal care during the COVID-19 pandemic period (Tadesse, 2020).Utilisation of antenatal care services was lower among young pregnant women, probably because some of these young women might had less knowledge about the important of accessing maternal health services, while others might not have been aware that maternal health services were provided by health facilities even during the COVID-19 period.
This study results also showed significant association between level of education and utilisation of sexual reproductive health service (antennal care) among pregnant women in Zambia.A similar finding was reported in a previous study (Tadesse, 2020).This finding suggests the need for women to prioritise access to education in order for them to fully appreciate the importance of accessing maternal health care services even during the COVID-19 pandemic period.Furthermore, women who had knowledge of COVID-19 treatment options had lower odds of missing antenatal care services.This finding signifies the importance of knowledge in determining health-seeking behaviour among women of reproductive age.Meeting the sexual and reproductive health needs of women of reproductive age during the COVID-19 pandemic period, has a potential to reduce further unwanted pregnancies, abortions as well as avoid child and maternal deaths.Continuing to track family planning health-seeking behaviour throughout the pandemic period will create an opportunity to identify trends and factors that will inform targeted public health interventions.
This study utilised the Health Behaviour Model to explain why some women missed antenatal care visits during the COVID-19 period in Zambia.This theory proposes that people are more inclined to engage in health-promoting actions if they believe the benefits of taking the action outweigh the difficulties or expenses associated with the action.The model assumes that individuals make rational decisions through education or communication.
In this regard, women who understand the importance of attending antenatal care as a way of monitoring pregnancy growth and detecting pregnancy complications early are more likely to attend antenatal care (ANC) regardless of current health system challenges, such as restrictions caused by the COVID-19 pandemic in Zambia. However, due to fear of contracting the COVID-19 virus or lack of knowledge that maternal health services were still offered during the COVID-19 restrictions, some women could be expected to fail to attend ANC in Zambia, despite the benefits associated with it. The findings of this study showed that women who had higher education were less likely to miss ANC during the pandemic compared with those with no education. This finding affirms the assumption of the HBM that women with higher education understand the benefits of ANC and therefore decided to utilise the service despite the risk of contracting the COVID-19 virus at the health facility. Furthermore, the findings showed that women with perceived knowledge of how to treat COVID-19 were less likely to miss ANC visits because they could adopt preventive measures when visiting health facilities to access ANC services.
The findings of this study have significant implications for the theoretical understanding of the factors associated with antenatal care utilisation among pregnant women during the COVID-19 period in Zambia. Furthermore, the findings draw attention to the potential disruptions and difficulties pregnant women encountered in accessing maternal health services during the pandemic period in Zambia. Our findings have also helped to identify vulnerable groups of women that need targeted support during a pandemic to ensure their maternal health needs are met in times of crisis, protecting the health of both the mother and the unborn child. Thus, the evidence generated by this study can guide health policymakers in designing health policies and interventions that address the unique health care needs and concerns of pregnant women during public health emergencies that may occur in the future.
Even though the full impact of the COVID-19 pandemic on women's sexual and reproductive health experiences has not yet been fully assessed in Zambia, this study provides a snapshot of the effects of the pandemic on sexual and reproductive health (SRH) outcomes and family planning health-seeking behaviour among women of reproductive age (15-49) in Zambia. The prevalence of women who missed antenatal care visits or postnatal services, as well as those who failed to access HIV treatment, during the COVID-19 period of March 2020 to April 2021 is a worrying concern for improving maternal health care in the country. The observed increase in the prevalence of women missing antenatal and postnatal health care in Zambia could be due to a lack of knowledge that sexual reproductive health and family planning services were still being provided during the partial lockdown experienced in the country. Other reasons could be fear of contracting the COVID-19 virus at a health facility, or inability to access services due to disruption in their provision at health facilities.
Literature shows that 60 per cent more maternal fatalities could occur if maternal health care coverage was reduced by 10-50 per cent (Riley et al., 2020;Murewanhema & Madziyire, 2021).For example, due to the disruption of health care for pregnant women and newborns, Ebola-related direct mortality in Sierra Leone ended up being roughly equal to indirect cause of mortality (Elston et al., 2017).The findings reported in this study, have implications both for shaping sexual and reproductive health policy, and for strengthening family planning communication initiatives, to avert future consequences of the COVID-19 on maternal health in the country.
Conclusion
This study has shown that the COVID-19 pandemic affected the utilisation of sexual reproductive health services among women of reproductive age in Zambia. This was evidenced by the observed increase in the proportion of pregnant women who missed antenatal care visits during the COVID-19 period compared to the period before the pandemic. Fear of COVID-19 infection at a health facility could be one of the reasons that led some women to miss antenatal care services. The study also established that while some women missed postnatal care services, others were unable to access HIV treatment during the same period. Depending on the severity of future waves, the pandemic may have a greater impact on the provision and accessibility of sexual and reproductive health services among different population groups. Thus, while attention to managing the pandemic is important, policy-makers should also ensure that health care systems maintain universal access to SRH services, which will be key in meeting the SRH needs of women. Future research should further explore whether sexual reproductive health-seeking behaviour during the COVID-19 era differs in relation to socio-demographic characteristics such as household income status, parity and cultural values in SSA countries. It is hoped that data will soon be widely available worldwide to assess in more detail the effects of COVID-19 on sexual and reproductive health outcomes.
Study strengths and limitations
There are a number of limitations to be considered in this study. First, the cross-sectional nature of the study prevents causal inferences from being drawn. Second, participants' responses could have been influenced by recall bias, because they were asked to report their sexual reproductive health and health-seeking behaviour in the 12 months prior to the survey rather than in the current year. Third, some important outcomes and independent variables, such as the number of children a woman has, religion, sexual activity and household income status, were missing from the dataset. This made it difficult to conduct a conclusive analysis of the effects of COVID-19 on sexual reproductive health-seeking behaviour during the pandemic in Zambia. Despite these limitations, this is the first study to examine in detail the influence of COVID-19 on sexual reproductive health outcomes in Zambia. The study will thus inform SRH programming during the pandemic in order to address the adverse effects the pandemic has on women's health.
Figure 1 .
Figure 1.Prevalence of sexual reproductive health outcomes in Zambia during COVID-19 period.
Figure 2 .
Figure 2. Trends in prevalence of women who missed antenatal care visits before and during COVID-19 period in Zambia (2007-21).
Table 2 shows the percentage distribution of pregnant women who missed antenatal care services by background characteristics. Bivariate analysis reveals that some background factors were associated with women's utilisation of antenatal care services during the COVID-19 period (p < 0.01) in Zambia. Young women in the age group 15-24 years had the highest percentage of those who missed antenatal care services during the COVID-19 period, and those in the age group 35-49 years had the lowest proportion.
Table 2 .
COVID-19 and sexual reproductive health. Percentage of women of reproductive age 15-49 who missed antenatal care visits by background factors during the COVID-19 period, SEIA 2021. Age, level of education and knowledge of COVID-19 treatment options influenced utilisation of antenatal care services during the COVID-19 period in Zambia. A woman's age was inversely associated with the risk of missing antenatal care services among pregnant women. Pregnant women in the older age groups (25-34 years and 35-49 years) had lower odds of missing antenatal care services (AOR: 0.55; 95% CI: 0.31-0.99 and AOR: 0.54; 95% CI: 0.24-1.23, respectively), but the odds for the age group 35-49 years were not significantly different from those of women aged 15-24 years.
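The adjusted odds ratios above come from a multivariate logistic regression. As a purely illustrative sketch of how such estimates are commonly obtained (this is not the authors' code; the file name, column names and category labels are hypothetical, and survey design weights are omitted), one might fit the model and exponentiate the coefficients as follows:

```python
# Illustrative sketch only: fitting a multivariate logistic regression for the
# odds of missing antenatal care and reporting adjusted odds ratios (AOR) with
# 95% confidence intervals.  File name, column names and category labels are
# hypothetical, and survey design weights are omitted for simplicity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("seia_2021_women.csv")   # hypothetical extract of the survey data

formula = ("missed_anc ~ C(age_group, Treatment(reference='15-24'))"
           " + C(education) + C(knows_covid_treatment)")
fit = smf.logit(formula, data=df).fit()

aor = np.exp(fit.params)            # adjusted odds ratios
ci = np.exp(fit.conf_int())         # 95% confidence intervals for the AORs
print(pd.concat([aor.rename("AOR"),
                 ci.rename(columns={0: "CI 2.5%", 1: "CI 97.5%"})], axis=1))
```

Exponentiating the fitted coefficients and their confidence limits gives adjusted odds ratios and 95% confidence intervals of the kind reported in Table 3.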
Table 3 .
Multivariate logistic regression analysis examining risk of missing antenatal care during the COVID-19 period among women of reproductive age, SEIA 2021, Zambia. | 2023-07-16T15:09:08.099Z | 2023-07-14T00:00:00.000 | {
"year": 2023,
"sha1": "ee188701495652bbf9eb0d934ba853df0282c464",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/0376835X.2023.2229865?needAccess=true&role=button",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "93aed7dd0303337546e1127f8d7c3ad46ffe025e",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": []
} |
268259276 | pes2o/s2orc | v3-fos-license | The Morphology of the Asteroidal Dust around White Dwarf Stars: Optical and Near-infrared Pulsations in G29-38
More than 36 yr have passed since the discovery of the infrared excess from circumstellar dust orbiting the white dwarf G29-38, which at 17.5 pc is the nearest and brightest of its class. The precise morphology of the orbiting dust remains only marginally constrained by existing data, subject to model-dependent inferences, and thus fundamental questions of its dynamical origin and evolution persist. This study presents a means to constrain the geometric distribution of the emitting dust using stellar pulsations measured at optical wavelengths as a variable illumination source of the dust, which reradiates primarily in the infrared. By combining optical photometry from the Whole Earth Telescope with 0.7–2.5 μm spectroscopy obtained with SpeX at NASA's Infrared Telescope Facility, we detect luminosity variations at all observed wavelengths, with variations at most wavelengths corresponding to the behavior of the pulsating stellar photosphere, but toward the longest wavelengths the light curves probe the corresponding time variability of the circumstellar dust. In addition to developing methodology, we find the pulsation amplitudes decrease with increasing wavelength for principal pulsation modes, yet increase beyond ≈2 μm for nonlinear combination frequencies. We interpret these results as combination modes derived from principal modes of identical ℓ values and discuss the implications for the morphology of the warm dust. We also draw attention to some discrepancies between our findings and theoretical expectations for the results of the nonlinearity imposed by the surface convection zone on mode–mode interactions and on the behavior of the first harmonic of the highest-amplitude pulsation mode.
Introduction
White dwarf stars are known to host circumstellar dust that is widely interpreted as tidally disrupted, minor rocky bodies (Jura & Young 2014;Farihi 2016;Veras 2016;Guidry et al. 2021).The standard scenario requires that one or more minor bodies are gravitationally perturbed by major planets (Frewen & Hansen 2014;Petrovich & Muñoz 2017;Smallwood et al. 2018), exciting eccentricities and eventually resulting in nearly radial orbits.If a rocky body passes within ≈1 R e of the white dwarf, it will be tidally shredded, leaving orbiting dust that then has a large cross section to self-collisions and Poynting-Robertson drag (Bochkarev & Rafikov 2011;Veras et al. 2015;Malamud & Perets 2020).
In the limit where the circumstellar debris is the result of just one disrupted body, this dust is expected to be in a plane and its orbit is assumed to circularize eventually, although this process is not yet fully understood (Nixon et al. 2020; Malamud et al. 2021). If the dust is derived from multiple minor bodies originating on independent orbits, then there is no a priori reason why the circumstellar material should be confined to a single plane. Therefore, constraining the dust morphology
should constrain its origin, and thereby the dynamical evolution of the contributing bodies.While the typical dust emission detected at 3-5 μm implies orbital radii within or around the Roche limit of the star, currently there are only modeldependent inferences, and no compelling empirical constraints on the precise distribution and geometry of the circumstellar material (Bonsor et al. 2017;Swan et al. 2020).
Giclas 29-38 (G29-38 = ZZ Psc = WD 2326+049) is both a well-known luminosity variable (Shulov & Kopatskaya 1974) of the ZZ Ceti class and the prototype white dwarf hosting circumstellar dust.Zuckerman & Becklin (1987) discovered its infrared excess, and subsequently Graham et al. (1990) and Patterson et al. (1991) compared pulsations observed in the B, J, K (and L in the latter case) photometric bands, both finding consistency with a relatively planar disk geometry.
The idea behind using stellar pulsations in this manner, which forms the basis of our updated effort, is that geometric oscillations are seen differently by Earth-based observers than by the circumstellar dust, and that these two views may, in principle, be disentangled using multiwavelength photometry.Luminosity variations arising from the photosphere are prominent primarily in the optical bands where the stellar emission peaks (Brassard et al. 1995), whereas any dust illuminated by the stellar pulsations will reradiate effectively only at infrared wavelengths.
The infrared instruments of these earlier observations recorded only weak pulsation signals, and conclusions regarding the pulsation modes were not possible, thus limiting the interpretation of the circumstellar dust morphology. Spitzer infrared observations utilizing all three instruments, including photometry and spectroscopy, were used to characterize the overall infrared dust continuum and identify broad 10 μm emission from small silicate dust grains (dominated by olivines), but ultimately the data are consistent with a range of possible circumstellar dust configurations (Reach et al. 2005, 2009; Ballering et al. 2022).
Current constraints on the geometrical distribution of dust come purely from modeling the dust component of the spectral energy distribution (e.g., Jura 2003;Reach et al. 2005;Ballering et al. 2022).There are model-dependent inferences from luminosity and calcium equivalent width amplitudes for two large pulsations excited in 2008 that suggest polar accretion (Thompson et al. 2010).And while no stellar magnetic field has been detected toward G29-38 (1σ = 0.5 kG; Bagnulo & Landstreet 2021), a field as modest as 0.01 kG is more than sufficient to truncate a high-rate accretion flow of 10 9 g s −1 , and result in accretion along field lines (Farihi et al. 2018;Cunningham et al. 2021).The stellar optical polarization has been sensitively measured at 275.3 ± 31.9 ppm (Cotton et al. 2020), and in the context of a dust disk model, is consistent either with a nearly face-on, optically thin disk or dust with a low Bond albedo.While highly valuable, these studies do not offer further insight into the circumstellar dust geometry, beyond the dust emission modeling from infrared observations.
The approach taken in this work is an updated version of the multiwavelength pulsation studies (Graham et al. 1990;Patterson et al. 1991), and leverages the fact that infrared detectors and instrumentation have improved dramatically in the last four decades.The primary instrument of this study, SpeX, saw first light in 2000 (Rayner et al. 2003), and its infrared array was upgraded in 2014.In addition to technological advances in sensitivity and readout compared to prior decades, a key advantage of SpeX is the synchronous multiwavelength coverage possible with its low-resolution spectroscopic mode.Simultaneous observations of photosphere-and dust-emitted light obviates the need for phasing detector clocks and readouts.Concurrent, multiwavelength data are particularly valuable for G29-38, as its power spectrum is not stable and can change dramatically from one observing season to another (Kleinman et al. 1998).These significant improvements in observing capability allow what is essentially a new technique for studying the circumstellar dust morphology of G29-38.In this paper, we describe that technique and although we are unable to place firm constraints on the circumstellar dust morphology, we are able to connect the photospheric pulsations and dust responses to viable dust geometries.In addition, we find that some pulsation properties appear to be discrepant with theoretical expectations as a function of wavelength, with relevance to all ZZ Ceti white dwarf stars.
SpeX Infrared Spectroscopy
We observed G29-38 at the NASA Infrared Telescope Facility (IRTF), using the near-infrared imager and spectrograph SpeX (Rayner et al. 2003), during two runs consisting of three contiguous partial nights each, on 2018 September 4-6 and again on 2020 November 8-10. SpeX was operated in spectroscopic prism mode, simultaneously covering approximately 0.7-2.5 μm. In 2018, the standard slit was opened to 3″ width (R ≈ 20), and in 2020 this was changed to 1″.6 (R ≈ 40). The sky conditions for the 2018 observing run were variable with light cirrus on two nights, and the seeing ranged from 0″.7 to 1″.1. In 2020, the conditions were mostly excellent, with seeing between 0″.3 and 0″.8. In both cases, the wide slits captured the majority of the stellar light, minimizing atmospheric dispersion losses that could alter the true spectral slope.
Integrations on the array were taken continuously during two or three observing blocks each night, preceded by, interspersed with, and immediately followed by calibration frames and standard star observations.The science target was nodded along the slit in the standard ABBA pattern for point sources, for total durations of 45 minutes to 3 hr, typically 1-2 hr.Integration times were 14.8 s in 2018 and 5.10 s in 2020, where the total time on-source was approximately 2.2 to 5.6 hr per night.Calibration frames consisted of arc lamps for wavelength calibration, flat-field images, and telluric standard stars.A total of 1671 and 2200 spectra of G29-38 were obtained in 2018 and 2020, respectively.
The data were reduced with SpeXtool version v4.1 (Vacca et al. 2003;Cushing et al. 2004).All reduction steps were carried out in the standard manner, including flat-fielding, AB pair subtraction, spectral extraction, and wavelength calibration.The science target frames were all extracted individually, while those of the telluric standards were combined into a single spectrum of higher signal-to-noise ratio (S/N).The individual spectra of G29-38 were telluric corrected on a nightly basis, in a uniform manner using a single set of standard observations corresponding most closely to the median airmass of the science target.This choice was motivated by the fact that time-varying signals would be introduced by using several telluric standards (at different airmasses) within a single night.Instead, a single telluric standard was used to correct each nightly data set, and while this does not produce the best correction of telluric features for individual frames, it introduces a minimally time-variable (and well-characterized) signal corresponding to the gradual change in airmass.
The advantage of deriving frequency measurements from SpeX spectroscopy, obtained with a wide slit, is that such measurements are differential both in wavelength and time, and therefore insensitive to variable observing conditions such as seeing or transparency changes.After the individual spectra were reduced, which corrects for nonlinearities in the detector, removes night sky lines, etc., we created a simple procedure to interpolate across thorium radiation events and cosmic rays in the detector.Only 51 spectra, or slightly over 1%, were affected by such artifacts.During this stage, spectra prior to MJD = 58365.48(the beginning of 2018 September 4) were removed owing to clouds, together with some spectra from 2020 November 9 that were significantly fainter, likely due to guiding issues.This procedure resulted in 1576 and 2184 spectra of G29-38 from 2018 and 2020, respectively.
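As a rough illustration of the kind of cleaning step described above (not the actual procedure used; the nightly median reference spectrum and the 6σ threshold are assumptions of this sketch), one could flag and interpolate over deviant pixels as follows:

```python
import numpy as np

def clean_spectrum(wave, flux, reference, nsigma=6.0):
    """Flag pixels that deviate strongly from a reference spectrum (e.g. a
    nightly median spectrum) and replace them by linear interpolation over
    the neighbouring good pixels.  The median reference and the 6-sigma
    threshold are assumptions of this sketch, not the paper's exact recipe."""
    resid = flux - reference
    scatter = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # robust sigma
    bad = np.abs(resid - np.median(resid)) > nsigma * scatter
    cleaned = flux.copy()
    cleaned[bad] = np.interp(wave[bad], wave[~bad], flux[~bad])
    return cleaned, bad
```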
We defined 10 bandpasses for synthetic photometry, along with two further bandpasses dominated by strong telluric lines, which were not analyzed further.Table 1 lists the 10 defined bandpasses, their central wavelength designation, and the actual S/N range in each of these resulting from synthetic photometry (the bandpasses were designed to achieve S/ N ≈ 100 per bandpass per observation).The synthetic photometry S/N values are typically higher per observation for the 2018 spectra because these integrations were three times longer than for the 2020 spectra.We discuss the subsequent normalization steps in the time-series analyses of these data below.
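The synthetic photometry itself amounts to collapsing each reduced spectrum onto the defined bandpasses. A minimal sketch, assuming simple rectangular bandpasses with hypothetical edges (the actual definitions are those of Table 1), is:

```python
import numpy as np

def synthetic_band_flux(wave_um, flux, lo_um, hi_um):
    """Mean flux inside a rectangular bandpass [lo_um, hi_um] in microns --
    a simplified stand-in for the synthetic photometry described above."""
    inband = (wave_um >= lo_um) & (wave_um <= hi_um)
    return np.nanmean(flux[inband])

# Hypothetical bandpass edges (the actual definitions are those of Table 1):
bands_um = {"1.23": (1.18, 1.28), "2.23": (2.14, 2.32)}

# One light-curve point per spectrum and per band:
# light_curves = {name: [synthetic_band_flux(w, f, lo, hi) for (w, f) in spectra]
#                 for name, (lo, hi) in bands_um.items()}
```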
Moris Optical Photometry
In order to extend the wavelength coverage of the IRTF observations, the SpeX observations used the Moris CCD imager for guiding purposes (Gulbis et al. 2011). The Moris images were taken through the Sloan Digital Sky Survey g-band filter, over a 60 × 60 arcsec² field of view on a 512 × 512 pixel² CCD with 0″.12 pixels. These data were obtained simultaneously with the SpeX data, but no effort was made to phase the readout times of the two instruments, as the guider was necessarily obtaining shorter exposures (3.0 s) than employed with SpeX. Due to their limited temporal coverage, these data offered little additional help in identifying the stellar pulsation modes, and we did not continue this approach during the 2020 IRTF run.
WET Optical Photometry
In both 2018 and 2020, we organized contemporaneous international, ground-based optical photometry campaigns to support the IRTF observations.The telescopes, detectors, filters, and time-series durations are listed in Tables 2 and 3, where coverage totaled 112.2 hr in 2018, and 173.3 hr in 2020, and data reduction followed the prescription outlined in Provencal et al. (2012).In brief, raw images were calibrated and aperture photometry performed using the MAESTRO photometry pipeline (Dalessio 2010), where each image was corrected for bias and thermal background, and normalized by its flat field.MAESTRO performed photometry for a range of aperture sizes for the target and comparison stars.The combination of aperture and comparison star(s) that resulted in the highest quality raw light curve was chosen for each individual run from each observing site.
The next data reduction step employed the WQED pipeline (Thompson & Mullally 2009). WQED examined individual light curves for photometric quality, removed outlying points, divided by appropriate comparison stars, and corrected for differential extinction. The result was a series of light curves with times in seconds and amplitude variations represented as fractional intensity. The final step in the reduction process combined the individual light curves from the Whole Earth Telescope (WET) photometry and applied barycentric corrections. For this step, it was assumed that G29-38 oscillates around a mean light level. This important assumption allowed the correction of instrumental intensity offsets for any overlapping light curves. The question of the treatment of overlapping data is discussed in detail in Provencal et al. (2009). The final product is a compiled optical light curve for G29-38 spanning the 2018 and 2020 SpeX observations.
Frequency Identification
PERIOD04 (Lenz & Breger 2005) and PYRIOD (Bell 2021) were used to derive discrete Fourier transforms (FTs) of the time-series photometry, with amplitudes in units of millimodulation amplitude (1 mma = 0.1%). Figure 1 presents the FTs of the 2018 WET photometry (upper left and lower right panels) as well as the FTs of the 2018 SpeX synthetic photometry, with wavelength shown in microns as indicated in each panel. For the synthetic photometry at 1.51 μm, data from the first night of the 2018 run were not included because of excess noise at this wavelength, presumably due to excess sky noise in the adjacent sky band. The pulsation frequencies at 1113 and 1211 μHz are clearly visible in the optical and near-infrared bandpasses, though they weaken significantly at 1.6 μm.
The pulsation frequencies at 788 and 2225 μHz display the opposite behavior and appear strongest at the longest wavelengths.Figure 2 presents similar FTs for the 2020 data.At this epoch, all pulsation frequencies were weaker than in 2018, though the peak near 2000 μHz is clearly visible.The amplitude of this frequency decreases as a function of wavelength out to 1.6 μm, then it appears to increase.The lower pulsation amplitude of G29-38 is within its previously observed range (Kleinman et al. 1998).
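For readers who wish to reproduce this kind of analysis, a minimal stand-in for the FT amplitude spectra described above can be written as a direct Fourier sum; the frequency grid and unit conversion below are illustrative choices, not the exact settings of PERIOD04 or PYRIOD:

```python
import numpy as np

def amplitude_spectrum(t, y, freqs):
    """Fourier amplitude spectrum of a light curve (t in s, y in fractional
    intensity, freqs in Hz), returned in mma (1 mma = 0.1%).  This is the
    single-sinusoid amplitude estimate produced by Period04-style tools,
    written here as a plain discrete Fourier sum."""
    y = y - np.mean(y)
    amps = np.array([2.0 / len(t) * np.abs(np.sum(y * np.exp(-2j * np.pi * f * t)))
                     for f in freqs])
    return 1.0e3 * amps

# A trial-frequency grid spanning the range plotted in Figures 1 and 2:
freqs_hz = np.arange(100.0, 4000.0, 1.0) * 1e-6   # 100-4000 uHz
# amps_mma = amplitude_spectrum(time_s, flux_fractional, freqs_hz)
```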
The 2018 and 2020 optical light curves form the basis for the identification of the pulsation frequencies during the SpeX observations. A statistically significant frequency is defined as a frequency with an amplitude at least four times the average noise level, representing a 99.9% probability that the frequency represents a true signal in the data and is not the result of random noise. The noise level is defined as the average amplitude remaining in the FT after prewhitening (discussed further below) by the dominant peaks (Provencal et al. 2012), and is a conservative estimate, especially for the 2018 data. In 2018, G29-38 was in a fairly high-amplitude pulsation state and exhibited amplitude modulation, and thus it was impossible to remove all of the significant power completely before calculating the noise level. The mean noise over a broad range of pulsation frequencies in 11 different frequency ranges (WET optical plus the 10 SpeX bands) for 2018 and 2020 is presented in Figure 3. During the 2018 campaign, the WET observations had a typical mean noise level of 1 mma. The IRTF observations, covering fewer pulsation cycles, had somewhat higher noise levels, rising up to ≈4 mma in a few cases. In particular, for the bluest SpeX band, at 0.78 μm, there was additional noise near 1000 μHz, and for the two reddest SpeX bands, at 2.23 and 2.41 μm, there was additional noise near 2300 μHz. The likely cause of this additional noise at these isolated frequencies is our inability to remove the signal produced by amplitude modulation completely. We also ran Monte Carlo simulations for the WET data using the Monte Carlo routine in PERIOD04. This routine generates a series of simulated light curves using the original times and the fitted frequencies and amplitudes, with Gaussian noise added. Each simulated light curve is subjected to a least-squares fit, and the uncertainties are given by the distribution of the fit parameters. The Monte Carlo results are consistent with a noise level of 0.5 mma for the 2018 optical data and 0.2 mma for the 2020 optical data.
Armed with an understanding of the noise properties of these data, frequency identification is now possible.In both data sets, the noise in the FTs derived from the WET runs is significantly lower than the FT noise from the SpeX synthetic photometry, which covered fewer stellar oscillations.Given the additional higher frequency resolution of the optical data, we use the WET optical frequencies as the basis to identify the frequencies in the SpeX FTs.
PERIOD04 and PYRIOD were used to prewhiten the FTs.The process involves identifying the largest amplitude resolved peak in the FT, or the Lomb-Scargle periodogram in the case of PYRIOD, fitting the data set with a sinusoid of that frequency, subtracting the fit from the light curve, recomputing the FT, examining the residuals, and repeating the process until no significant power remains.This process works well for stable pulsators, but is more complicated in the presence of amplitude modulation.We illustrate the process in Figure 4 with an example of prewhitening of the dominant power in the 2018 WET data.In the top panel of Figure 4, the red arrow identifies the dominant 1211 μHz peak.The middle panel shows the result of removing a sinusoid with that frequency, and identifies the next highest-amplitude frequency at 1112 μHz.The bottom panel shows the result of removing both the 1211 μHz and 1112 μHz peaks from the data.Significant power remains, but a resolved frequency cannot be clearly identified.This is the signature of amplitude modulation, and the remaining power cannot be prewhitened.
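A simplified sketch of this prewhitening loop (illustrative rather than the exact PERIOD04/PYRIOD implementation; the fixed number of modes removed is an assumption) is:

```python
import numpy as np

def _amps(t, y, freqs):
    # Fourier amplitude at each trial frequency (same estimator as in the
    # amplitude-spectrum sketch above).
    return np.array([2.0 / len(t) * np.abs(np.sum(y * np.exp(-2j * np.pi * f * t)))
                     for f in freqs])

def prewhiten(t, y, freqs, n_modes=3):
    """Iteratively locate the highest peak, fit a sinusoid at that frequency
    by linear least squares, subtract it, and repeat."""
    resid = y - np.mean(y)
    modes = []
    for _ in range(n_modes):
        f = freqs[np.argmax(_amps(t, resid, freqs))]
        # Linear least squares for A*sin(2*pi*f*t) + B*cos(2*pi*f*t):
        X = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(X, resid, rcond=None)
        resid = resid - X @ coef
        modes.append((f, float(np.hypot(*coef))))   # (frequency in Hz, amplitude)
    return modes, resid
```

In practice the loop is stopped when no peak exceeds the significance threshold defined above, and amplitude modulation limits how completely the power can be removed.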
Table 4 contains the resulting frequency identifications, within the constraints imposed by amplitude modulation, for the WET 2018 and 2020 data sets.The table presents frequency and its uncertainty in units of μHz, amplitude and its uncertainty in units of mma, and the S/N of the measured amplitude, which is the ratio of the amplitude of the frequency to the amplitude of the noise in that frequency range.We use the symbols f and g to identify frequencies in the 2018 and 2020 data sets, respectively.The next step was to fit the pulsation amplitudes and phases for the 10 bands of synthetic SpeX photometry in order to determine how these varied as a function of wavelength.This was performed independently for the 2018 and 2020 data sets because G29-38 is, as stated above, not a stable pulsator on a yearly timescale.We fixed the frequencies to those derived from the WET time-series photometry because of the much greater time base of these data, and then used PERIOD04 to fit the amplitude and phases in the SpeX data.
Patterns Among Pulsation Properties
Figure 1 displays a surprising result. The frequencies at 1211, 1112, and 1999 μHz (f 0 , f 1 , and f 2 in Table 4, respectively) are clearly visible in the optical and near-infrared bandpasses and weaken significantly at 1.6 μm. However, the frequencies at 788 and 2225 μHz display the opposite behavior. These two frequencies have been identified as combination modes, defined as exact numerical combinations (by addition or subtraction) of larger-amplitude, principal modes. In pulsating white dwarf stars, these combination modes are thought to arise from mode-mode interactions in the convection zone (Brickhill 1992b; Wu 2001). The two combination frequencies at 788 and 2225 μHz (corresponding to f 2 − f 0 and f 0 + f 3 in Table 4) increase in amplitude as a function of wavelength.
Figure 2 presents similar FTs for the 2020 data. At this epoch, G29-38 was in a lower-amplitude pulsation state. These lower pulsation amplitudes provided S/N ≲ 10 for the amplitude measurements of the g 0 (2001 μHz) mode in the SpeX data, with weaker modes even harder to measure. For this reason, we drop further analysis of the 2020 data and focus on the amplitude and phase behavior of the pulsations in the 2018 data.
Figure 5 presents the amplitudes (top panels) and times of amplitude maxima (bottom panels) as a function of wavelength for pulsation frequencies with sufficient S/N to discern patterns among their properties. The left-hand panels display properties for the three highest-amplitude g-modes (principal modes f 0 , f 1 , and f 2 ) and the right-hand panels display the four other harmonic or nonlinear combination frequencies (2f 0 , f 0 + f 1 , f 0 + f 3 , and f 2 − f 0 ). For the principal modes, the amplitudes decrease with increasing wavelength, as expected for photospheric g-modes (see Brassard et al. 1995, Figure 3) with no significant luminosity contribution via dust absorption and reradiation. Additionally, the times of maxima for f 0 , f 1 , and f 2 are approximately constant versus wavelength, except for the optical WET measurement for f 0 , which lags the time of maxima of f 0 (period of 825 s) at most other wavelengths by about 1 minute. This one phase offset could be caused by a change in the orientation of the f 0 mode captured during the 2018 WET run, which had a longer temporal coverage than the 2018 IRTF observations. Except for this one WET time lag, the three principal modes again behave as one would expect for atmospheric pulsations with little to no contribution by reradiated light from the dust. (Note that in Table 4 the identified frequencies are labeled sequentially in amplitude, from f 0 for 2018 and from g 0 for 2020.)
The amplitudes of the combination frequencies ( f 0 + f 1 , f 0 + f 3 , and f 2 − f 0 ) follow a different pattern. The observed amplitudes of these combination modes generally decrease from the visible through the near-infrared, then increase, particularly beyond 2 μm. While the increase in mode amplitude at the longest wavelengths is what one would expect for modes strongly reradiated by dust, the spectral energy distribution modeling of Reach et al. (2009) estimated that 34% of the flux at 2.2 μm is due to dust, with the remaining flux emitted by the star's photosphere. Assuming these pulsation amplitudes have the same fraction of dust contribution as the time-averaged star, even after correcting for the dust contribution the photospheric pulsation amplitudes increase through 2.5 μm. Although tentative, we note that this is in contrast to the prediction of Wu (2001), who finds that the ratios of the amplitudes of parent modes to the amplitudes of their resultant combination modes are independent of wavelength. The 2f 0 harmonic (discussed further below) may also increase in the infrared, or may be consistent with a near-constant amplitude at all observed wavelengths. The times of maxima for these frequencies vary considerably, from essentially constant ( f 0 + f 3 ) to decreasing as a function of wavelength (2f 0 and f 0 + f 1 ). One combination mode in the fourth panel, f 2 − f 0 , has had 1100 s subtracted from its times of maxima to aid visualization.
Discussion
We now examine whether the optical and near-infrared mode characteristics (amplitude and phase) observed in 2018 are broadly consistent with a few simple models.We do not make this comparison to the 2020 data because the star's pulsations were too weak at the observed epoch.We seek to find a simple model that can explain how the three highest-amplitude pulsation frequencies ( f 0 , f 1 , and f 2 ), which are all principal pulsation modes, decrease in amplitude as a function of increasing wavelength, while the combination modes display an increasing amplitude with wavelength, particularly beyond 2 μm (Figure 5).Making the interpretation more difficult is that the mode identifications (the specific spherical harmonics) for any of the observed modes are not well known.The general dearth of consistent multiplets in G29-38, along with its empirically variable frequencies and amplitudes with time, have made mode identification challenging (Kleinman et al. 1998).Recent work using long-baseline Transiting Exoplanet Survey Satellite data has revealed some possible mode identification (Uzundag et al. 2023), but since those observations are separated from our IRTF data by many months (and G29-38 has shown changes on shorter timescales), we have not adopted those identifications here.
Information Encoded in the Combination Modes
The combination modes (all combinations with f 0 ) show the strongest infrared signatures.In general, combination modes have different geometries (in terms of the angular dependence of their surface brightness variations) than their underlying parent modes.Currently, with insufficient knowledge of the mode ℓ identifications, it is not possible to model the stellar surface geometry uniquely and potentially constrain the dust geometry.We search instead for a simple picture that unites these infrared signals from the combination models.
Principal pulsation modes have surface variations that can be described via spherical harmonic functions, Y_ℓ^m(Θ, Φ), where Θ and Φ are, respectively, the angles of latitude and longitude with respect to some chosen axes in the star, ℓ is the spherical harmonic order, or total number of surface nodal lines, and m identifies the number of nodes along a line of latitude. Note that for stellar g-modes ℓ = 1, 2, 3, … and |m| ≤ ℓ for each mode. For a given mode with order ℓ, the relative amplitudes of the components with values m depend on the axes chosen. If the axis of rotation is known, it is commonly used to define the (Θ, Φ) coordinate system. Alternatively, if circumstellar dust were primarily in a plane, that plane and a perpendicular axis could define the longitude and latitude for mathematical convenience. There is, as yet, no such known natural coordinate system for G29-38.
The angular dependence of a combination mode with individual dependencies Y_{ℓ1}^{m1} and Y_{ℓ2}^{m2} is the product of these two functions (Brickhill 1992b; Brassard et al. 1995; Wu 2001), which equals the sum over a set of Y_ℓ^m with ℓ running from |ℓ1 − ℓ2| to ℓ1 + ℓ2. While the resulting geometry can be complicated, if ℓ1 = ℓ2, then there is always a component in the sum which has ℓ = 0. (See also Brassard et al. 1995, Appendix C, where they demonstrate that either the product of two ℓ = 1 modes, one m = +1, the other m = −1, or the product of two ℓ = 2 and m = 0 modes has a spherically symmetric Y_0^0 component, but the product of an ℓ = 1 mode and an ℓ = 2 mode has no such component.) Therefore, if the combination mode results from two modes with the same ℓ value, there will be a component of the combination mode that has no angular dependence (see also Kurtz 2005). This component is an isotropic pulsation, much like a radial-mode pulsator such as an RR Lyrae or Cepheid. In such a case, whatever the dust distribution, such a component will always provide a signal in the reradiated flux. We propose the possibility that the observed 2018 combination modes were visible in the reradiated flux precisely because each contributing mode had the same value of ℓ. This suggestion offers a prediction and possibly a constraint for future mode identification.
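This statement is straightforward to verify numerically. The sketch below (an illustration, not part of the original analysis) uses SciPy's complex spherical harmonics to compute the spherical average of a product of two harmonics for the specific pairings cited from Brassard et al. (1995); note that recent SciPy releases rename sph_harm, so the long-standing sph_harm(m, l, azimuth, colatitude) call is assumed here:

```python
import numpy as np
from scipy.special import sph_harm   # sph_harm(m, l, azimuth, colatitude)

def isotropic_part(l1, m1, l2, m2, n=400):
    """Spherical average of the product Y_l1^m1 * Y_l2^m2, i.e. the strength
    of its angle-independent (l = 0) component."""
    az = np.linspace(0.0, 2.0 * np.pi, 2 * n, endpoint=False)   # azimuth
    co = np.linspace(0.0, np.pi, n)                             # colatitude
    A, C = np.meshgrid(az, co, indexing="ij")
    prod = sph_harm(m1, l1, A, C) * sph_harm(m2, l2, A, C)
    integral = np.sum(prod * np.sin(C)) * (2.0 * np.pi / len(az)) * (np.pi / (n - 1))
    return integral / (4.0 * np.pi)

# The same-l pairings cited from Brassard et al. retain an isotropic component:
print(abs(isotropic_part(1, +1, 1, -1)))   # ~0.08, non-zero
print(abs(isotropic_part(2,  0, 2,  0)))   # ~0.08, non-zero
# ...whereas a mixed l = 1 x l = 2 pairing has none:
print(abs(isotropic_part(1,  0, 2,  0)))   # ~0
```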
Historic Data
Historic time-series optical and infrared data were presented by Graham et al. (1990) and Patterson et al. (1991). The data presented by Patterson et al. (1991) seem to have slightly higher quality, judging by the number of pulsation frequencies detected, but are in agreement with those of Graham et al. (1990). Patterson et al. (1991) detected pulsations in B with periods of 615 s (1620 μHz), 186, 243, and 268 s. They also detected pulsations in K with periods of 186, 242, and 272 s, which are consistent with the pulsation periods seen in B. They interpreted the 615 s period as a principal stellar mode, with no corresponding dust response. They envisaged the dust as being in the form of an optically thin circular disk (see their Section 4.4.1), and suggested that this mode (with respect to the axes aligned with the disk) had m ≠ 0, so that it elicited no response from the dust disk. They argued that the amplitudes of the other pulsation frequencies were too high in K to be just stellar g-modes (indeed Graham et al. 1990 found these pulsations only in K and not in B), and thus that they must represent a dust response. However, since the theory of combination modes, and in particular their surface structures, was only just being developed (Brickhill 1992b; and later Brassard et al. 1995; Goldreich & Wu 1999; Wu 2001), they assumed the stellar surface brightness variations to be of the form relevant to the principal modes, viz., the spherical harmonic form Y_ℓ^m(Θ, Φ) introduced above. Thus, in order to explain why the pulsations seen in K were not seen in B, they needed to appeal to a special geometry for the assumed stellar g-modes relative to the disk and to the observer. In particular, the infrared-strong modes all needed to have ℓ = 2 and m = 0 in axes relative to the disk, and the disk needed to be oriented at an angle of ≈55° relative to the observer. In our interpretation, while the 615 s period arises from a principal stellar mode (most likely with ℓ = 1; Kleinman et al. 1998), the other detected pulsation frequencies that are all strong in the infrared would be combination modes, whose fundamental contributors all have the same value of ℓ. In this interpretation a special geometric configuration is not required.
The First Harmonic
The oscillation seen at frequency 2f 0 is the first harmonic of the parent mode f 0 , which gives rise to the combination modes that we find to have strong visibility in the near-infrared.
G29-38 is well known as a high-amplitude oscillator, and as such it would be no surprise if the dominant mode that we find, the f 0 mode, were of sufficiently high amplitude that its underlying variability is slightly nonlinear. Such nonlinearity would primarily manifest itself as a first harmonic in the light curve at frequency 2f 0 . If the f 0 mode has surface angular dependence Y_ℓ^m, then this contribution to the variability at frequency 2f 0 would also have the same Y_ℓ^m surface distribution. As an additional effect, for a single principal mode (Wu 2001), the sinusoidal oscillation at the base of the convection zone is modified as it passes through the zone. Thus the emergent oscillation is no longer purely sinusoidal, but can be represented as a Fourier series with harmonic frequencies 2f 0 , 3f 0 , …. These are seen because the original oscillation has been distorted. The magnitude of the coefficient for the first harmonic 2f 0 contribution caused by this effect is proportional to the square of the amplitude of the original mode, and is therefore nonnegative. This implies that, if the original mode f 0 has a surface distribution of the form Y_ℓ^m, the first harmonic contribution 2f 0 caused by this effect has surface distribution Y_ℓ^m × Y_ℓ^m, i.e., the square of the parent's surface pattern. Thus this contribution to the first harmonic always has a surface component that is isotropic. For this reason the first harmonic frequency 2f 0 can, in principle, be more prominent than the underlying parent mode. As Wu (2001) comments: "This arises because the apparent amplitudes of the higher ℓ modes suffer stronger cancellation when integrated over the stellar disk, while the harmonics of these modes do not." In this respect the first harmonic induced by passage through the convection zone is analogous to the behavior of combination modes with the same value of ℓ (Section 4.1). We note that in our data the near-infrared response elicited by the 2f 0 harmonic is significantly less than for the combination modes that we detect (Section 3.2). It is possible that this comes about because of the two possible sources of the oscillation at frequency 2f 0 , which, as we have noted, have two different surface brightness distributions.
The wavelength behavior of 2f 0 may aid future modeling of g-mode stellar pulsations in constraining the properties of the convection zone (Montgomery et al. 2020), and in that spirit we present additional analysis of this harmonic.For example, in contrast to expectations (Wu 2001), we find that the ratio of the amplitudes of the harmonic (2f 0 ) to the fundamental ( f 0 ) depends strongly on wavelength.Figure 6 presents the f 0 mode (blue dotted sinusoid) over two periods along with superpositions (additive combinations) of f 0 and 2f 0 at different wavelengths.For the pulsations measured at 0.5 μm, the frequencies, amplitudes, and phases are all fit from the 2018 WET time-series photometry.For pulsation frequencies measured at longer wavelengths, the frequencies are fixed at the values determined from the 2018 WET data, but the amplitudes and phases are fit from the 2018 SpeX synthetic photometry.At 0.5 μm, the superposition of 2f 0 and f 0 creates a curve that departs slightly from a sinusoid with a slower rise and steeper fall.This contrasts with the prediction of Wu (2001; see also Brickhill 1992a, Figure 7) that such distortion should lead to peaked light curves with sharp ascents and shallow descents.At 1.51 μm the superposition departs more clearly from a sinusoid and by 2.41 μm the amplitudes of f 0 and 2f 0 are similar (see also Figure 5).Measuring the mode properties of the principal and harmonic frequencies across such a range of wavelengths may offer new constraints on the properties of surface convection zones or their time-dependent depth changes for these stars.
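To illustrate the light-curve shapes being compared in Figure 6, the superposition of a fundamental and its first harmonic can be generated directly; the amplitudes and relative phase below are placeholders chosen only to show how the asymmetry of the curve changes, not the fitted values from the data:

```python
import numpy as np

f0 = 1.0 / 825.0                      # fundamental frequency in Hz (825 s period)
t = np.linspace(0.0, 2.0 / f0, 2000)  # two pulsation cycles

def superposition(t, a1, a2, dphi):
    """Fundamental plus first harmonic.  The amplitude ratio a2/a1 and the
    relative phase dphi control the asymmetry of the resulting light curve
    (sharp rise with shallow decline, or the reverse)."""
    return (a1 * np.sin(2 * np.pi * f0 * t)
            + a2 * np.sin(2 * np.pi * 2 * f0 * t + dphi))

# Placeholder amplitudes (mma) and phases, chosen only to show the effect:
optical_like = superposition(t, 30.0, 3.0, +0.5 * np.pi)
infrared_like = superposition(t, 5.0, 4.0, -0.5 * np.pi)
```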
Simple Dust Morphology Models
We note that our observational data extend only to 2.5 μm, where emission from the hottest dust, nearest to sublimation (at the inner edge of a disk or the dust orbital periastron) should peak.Thus any morphological information that can be derived from the pulsations constrains only the inner edge of the circumstellar environment.Additionally, at these wavelengths, the dust is only marginally the dominant contributor to the flux (e.g., Xu et al. 2018); careful modeling will be required to disentangle the atmospheric and dust contributions properly.With these caveats in mind, we proceed to an initial exploration of how the infrared amplitudes of the combination modes constrain the morphology of the circumstellar dust.We start by assuming that the dust is more or less in a plane, of some indeterminate thickness.The dust might be a circular disk, an eccentric disk, or even in a parabolic stream (e.g., Nixon et al. 2020).Given the theories for the origins for the dust, there is no reason to suppose that the rotation axes of the dust and of the white dwarf are in any way aligned.In considering the illumination of the dust by the white dwarf, the natural coordinates have the Θ = 0 axis (direction toward the north pole) perpendicular to the dust plane.In this case, for example, a (2, 0) mode on the star (relative to the starʼs rotation axis, which is unlikely to be aligned perpendicular to the dust plane) would look to the dust as a collection of the multiplets (2, −2), (2, −1), (2, 0), (2, +1), and (2, +2).Similarly, any ℓ = 1 mode on the star would look to the dust as a collection of the multiplets (1, −1), (1, 0), and (1, +1).
Optically Thin Dust
We consider first the simplest case in which the dust is optically thin at all wavelengths. In what follows, we continue to work in a coordinate system based on the dust plane. We consider some simple dust configurations (see Cotton et al. 2020):
1. If the dust is distributed uniformly and spherically symmetrically, then the only dust response will be to the (0, 0) component. This surface brightness distribution of the modulation only occurs as a combination mode.
2. If the dust is distributed as a uniform circular disk of finite vertical thickness, then by symmetry none of the m ≠ 0 components produce a net dust response. In addition, the (1, 0) component produces no net response. Yet the (2, 0) component can produce a nonzero net response (see Figure 9 in Graham et al. 1990, and the numerical sketch at the end of this discussion). In this case, if the underlying basic modes are all ℓ = 1, we expect zero dust response from these, and the only nonzero dust response to be from the combination modes. But if the underlying modes are all ℓ = 2, then we expect a dust response from both the underlying modes and from the combination modes.
3. If instead the dust is in an eccentric disk (of which an extreme example would be a parabolic stream), then as seen from the star the dust density will have an m = 1 (i.e., cos Φ) component. In this case there will also be a response from the underlying m = +1/−1 components but not from the underlying m = +2/−2 components. Note that in this case there can be a phase difference between the stellar pulsations and dust response, depending on the geometry.
Thus for these simple dust configurations, one should always expect a response from combination modes (of equal ℓ), as they always have a (0, 0) component.We might expect a response from a (2, 0) component.Note that this could be a (2, 0) component of a principal pulsation mode or part of the combination response from (1, 0) × (1, 0) or (2, 0) × (2, 0) (see the appendix of Brassard et al. 1995).If the dust distribution is circular, then we expect no response to ℓ = 1 components.But if it is eccentric then there can be a response to such components, with the amplitude dependent on the degree of eccentricity.
Overall, if we see only a response to combination modes and none to the principal modes, the best guess would be that the underlying modes have ℓ = 1 and the dust distribution is circular.But, importantly, there are uncertainties here depending on the expected amplitudes of the response and the observational uncertainties.For example, the f 0 , f 1 , and f 2 modes do show amplitudes at 2.5 μm; one would need to be able to disentangle how much of these oscillations at this wavelength might be due to the dust response.Such detailed modeling is beyond the scope of the present paper.
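The statements about which (ℓ, m) components elicit a net response from a symmetric, optically thin band of dust (case 2 above) can also be checked numerically. In the sketch below, the band's half-opening angle is an arbitrary assumption and the illumination pattern is taken as the real part of the complex spherical harmonic:

```python
import numpy as np
from scipy.special import sph_harm   # sph_harm(m, l, azimuth, colatitude)

def band_response(l, m, half_angle_deg=10.0, n=800):
    """Net response of a uniform, optically thin band of dust centred on the
    equator of the chosen coordinate system, illuminated by a pattern
    proportional to Re[Y_l^m].  The 10-degree half-opening angle is an
    arbitrary choice for this sketch."""
    half = np.radians(half_angle_deg)
    az = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    co = np.linspace(np.pi / 2 - half, np.pi / 2 + half, n)
    A, C = np.meshgrid(az, co, indexing="ij")
    pattern = np.real(sph_harm(m, l, A, C))
    return np.sum(pattern * np.sin(C)) * (2.0 * np.pi / n) * (2.0 * half / (n - 1))

for l, m in [(1, 0), (1, 1), (2, 0), (2, 1), (2, 2)]:
    print((l, m), band_response(l, m))
# Only (2, 0) gives a non-zero net response for this symmetric circular band,
# consistent with case 2 of the optically thin configurations above.
```

Running this shows that the m ≠ 0 terms and the (1, 0) term average to zero over the band, while (2, 0) does not, in line with the geometric argument above.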
Optically Thick Dust
If the dust is optically thick, and indeed that seems probable given the large infrared excess of G29-38, then there are too many possibilities to be considered here. We restrict the discussion to two obvious simple generalizations, based on previous theoretical ideas and spectral modeling:
1. Consider the dust to be arranged in a circular, infinitesimally thin disk, which is optically thick in the vertical direction. In this case none of the m ≠ 0 components contribute to the infrared response, but both the (1, 0) and the (2, 0) components would give rise to a response.
2. Consider the dust to be in a circular disk of finite thickness, so that the main dust response comes from the inner edge of the disk (see Ballering et al. 2022). In this case, assuming an intermediate inclination angle, one expects no contribution from (1, 0), but contributions from (2, 0) as well as from the m ≠ 0 components, though not from the m = +2/−2 components.
While our observations of higher infrared contributions in the combination modes may favor some of these models, in order to rule out any of them definitively requires modeling the variations of amplitudes and phases as functions of wavelength.
Conclusions
We obtained near-infrared spectrophotometry of G29-38 using SpeX at the IRTF in 2018 and 2020, along with contemporaneous optical WET time-series photometry.We detected six principal pulsation modes, one harmonic, and three combination modes in the 2018 data set.Pulsations during the 2020 observations were too weak to see clear trends in amplitude as a function of wavelength.Among the 2018 pulsations with sufficient S/N to measure across the full range of observed wavelengths, the three principal modes all showed declining pulsation amplitudes with increasing wavelength.These pulsation amplitudes decreased from 20 to 35 mma in V, g, and the other WET optical bands to ≈5 mma at 2.5 μm.The three nonlinear combination modes from 2018 exhibited strikingly different behavior, rising from approximately 6-9 mma in the optical to 8-19 mma in the near-infrared.The harmonic of the principal f 0 mode may show similar behavior, though at a reduced level.
The basic theory of the origin and the behavior of harmonics and combination modes for stars with a surface convection zone, which includes all ZZ Ceti stars, has been set out by Brickhill (1992a, 1992b) and by Wu (2001). We have found some discrepancies with the expectations laid out in those papers. These papers predict for a single mode that the ratio of the amplitude of the first harmonic to the amplitude of the underlying mode should be independent of wavelength, and that the effect of the first harmonic is to give rise to a peaked light curve with a sharp ascent and shallow descent. It is evident from Figures 5 and 6 that our results disagree. Also, these papers predict that the ratios of the amplitudes of the principal modes to the amplitudes of their resultant combination modes are independent of wavelength. While we have not performed modeling to separate the dust versus photospheric contributions, from Figure 6 and the time-averaged spectral energy distribution modeling of Reach et al. (2009), it appears that our results disagree. These findings warrant further investigation.
We conclude that the dominant near-infrared response is most likely due to the isotropic component of the combination modes. Unfortunately, isotropic pulsation components are of no use in determining the geometric structure of the dust. To make future gains in determining the dust morphology, we need to measure the dust response to modes with ℓ ≥ 1, i.e., principal modes and perhaps nonisotropic components of the combination modes. This will require careful modeling. Additionally, in order to clarify what fraction of the infrared pulsations is from the dust response versus from the stellar photosphere, we need detailed modeling of atmospheric pulsation modes, which depends on identifying the spherical harmonics of these modes. An alternative approach would be to observe at wavelengths longer than 5 μm, where G29-38's spectrum is almost entirely due to the heated dust (Reach et al. 2009). Because of the time-varying nature of the stellar pulsation modes, some epochs have only weak pulsations that are insufficient to constrain the dust environment. However, whenever G29-38 exhibits different pulsation frequencies these may provide new diagnostics, and the dust distribution itself may vary with time. The phase behavior of the pulsation frequencies also warrants further investigation and may help resolve the dust and surface pulsation geometries.
While not a goal of this study, our analysis of the combination modes likely indicates that the observed principal modes all have the same ℓ value.This constraint is both a prediction and a potentially useful tool for future mode identification on this important star.
Note. The ERAU 1.0 m is at Embry Riddle Aeronautical University in the USA. The Krakow 0.5 m is at the Astronomical Observatory of the Jagiellonian University in Poland. The Molėtai 0.35 and 1.65 m are at the Molėtai Astronomical Observatory in Lithuania. The SARA-CT 0.6 m is at the Cerro Tololo Inter-American Observatory in Chile. The SARA-RM 1.0 m is at the Roque de los Muchachos Observatory in Spain. The TSAO 1.0 m is at the Tien Shan Astronomical Observatory in Kazakhstan.
Figure 1 .
Figure 1. FTs for the 2018 WET photometry, labeled "optical," and the 10 2018 SpeX bands (labeled with their central wavelengths). The WET optical FT is repeated in the lower right panel to aid in visualizing the amplitudes of the pulsation frequencies vs. wavelength. The pulsation frequencies at 1113 and 1211 μHz are clearly visible in the optical and near-infrared bandpasses, though weaken at wavelengths ≳ 1.6 μm. The pulsation frequencies at 788 and 2225 μHz display the opposite behavior and appear strongest at the longest wavelengths.
Figure 2 .
Figure 2. Same as Figure 1, except based on the 2020 November data.The strongest pulsation frequencies are approximately half the amplitudes of those in 2018 September.
Figure 3 .
Figure 3. Mean noise as a function of frequency for the optical photometry and the SpeX synthetic photometry. The increased noise in several of the SpeX channels, particularly in 2018 at 0.78 μm near 1000 μHz and at 2.23 and 2.41 μm near 2300 μHz, is likely due to our inability to remove the dominant frequencies completely.
Figure 4 .
Figure 4. Demonstration of prewhitening using the dominant 1211 μHz frequency in the 2018 WET data.The first panel is the original FT.The red arrow identifies the dominant 1211 μHz peak.The second panel has been prewhitened by 1211 μHz, and the third panel is prewhitened by both 1211 μHz and 1112 μHz.The green line indicates the formal noise level.
Figure 5 .
Figure 5. The top two panels display the pulsation amplitudes measured during 2018 as a function of wavelength. The three highest-amplitude g-modes (left-hand panels) clearly decrease in amplitude with wavelength. The amplitude of the first harmonic (2f 0 in the right-hand panels) of the highest-amplitude g-mode ( f 0 ) displays a different behavior. The amplitudes of the combination modes ( f 0 + f 1 , f 0 + f 3 , and f 2 − f 0 in the right-hand panels) increase beyond 2 μm. The bottom panels display the times of maxima for these pulsations as a function of wavelength.
Figure 6 .
Figure 6. Comparison of the f 0 mode (blue dotted sinusoid) with the superposition of f 0 and 2f 0 at three different wavelengths (black curves, see legend). The gray shaded zone covers the ±1σ uncertainties in pulsation amplitudes and phases.
Table 1
Bandpasses Used in the SpeX Synthetic Photometry
Table 3
Journal of 2020 Time-series Photometry | 2024-03-07T16:04:46.656Z | 2024-03-01T00:00:00.000 | {
"year": 2024,
"sha1": "e4aad42537cb24ac527404b1f7f7d73ea32cd43b",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.3847/1538-4357/ad1bd0/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d9cfd330e7c5cba2a441c57187ec9fd03c45330e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
111354882 | pes2o/s2orc | v3-fos-license | Instructional Development Programs for International TAs: A Systems Analysis Approach
Introduction
Instructional development programs for international teaching assistants cut across many lines of authority and power in a research university. The person who designs, implements, and administers an ITA development program may take actions that are seen as interfering with the work of individuals who are not used to such interference: graduate deans, research faculty, and department chairs are accustomed to making decisions about graduate students in certain ways. ITA training may require changes in those traditions and even in regulations. For example, if research faculty are accustomed to selecting their assistants on the basis of research potential, they may not understand the need for anyone else to have any influence on the selection process, much less someone coming from ESL or instructional development. Or, if research faculty have in the past had total control over the schedules of their graduate students, they may not understand immediately that the ITA training program has prior claim on the time of the ITA.
Many ESL and instructional development specialists may not be aware of the structure and culture of the larger institution in which they work or of the structure and culture of academic specialties other than their own. If these specialists are charged with developing ITA training programs, they can have problems resulting from their lack of knowledge of the institution. These problems may include (1) providing inappropriate training for teaching and (2) poor communication with the academic departments. The remedy we are suggesting for these problems is study: (1) study of the structure of U.S. research universities, (2) study of the particular institution, (3) study of particular departments, and (4) study of the teaching styles and traditions of particular professions.
The literature developed in higher education administration (for example, Kuh & Whitt, 1988), in various ethnographic studies of higher education (for example, Rounds, 1985, 1987), and in strategic planning (Bryson, 1988) suggests the culture audit and the environmental scan as effective methods for institutional analysis. We are not recommending that ITA program administrators develop a new academic specialty, but rather that they use these methods for their own education so they can better design programs to fit the needs of their particular institution and its academic departments.
The strategic planning process generally includes a step known as the "environmental scan" (Bryson, 1988). Through study of a system's context using the methods developed for strategic planning, administrators can learn about and even anticipate the impact of forces from outside the system. If the university has carried out a self-study recently, that study is likely to contain significant information about the institution's environment. Moreover, the study will probably discuss the methods used by the institution to prepare the environmental scan, which is valuable information about this important process. Individuals directing programs within a university can also benefit from studying their own environments, both the institutional setting and forces from outside the institution that influence their units. For example, an ITA program could be influenced by attitudes of state legislators or parents of undergraduates toward ITAs; at the same time, the ITA program will be under pressure from forces within an institution, including graduate faculty and the ITAs themselves. Accreditation and professional organizations can also impose standards that the system must find ways to meet. The popularity and status of the institution can influence the flow of resources into the system.
A Research University as a System
To illustrate the complexity of a university, Kells (1988) compares its operational and governance system to that of a factory. Management and workers in a factory generally understand the purpose of the factory-to produce bread or cars, for example. Agreement also exists on the raw materials that must be brought into the system and on the products that flow out of the system. If the products do not meet expectations, finding the problem in the system or in the raw materials is a fairly straightforward process of getting better flour or steel or of improving the performance of machinery or workers. In addition, those people involved with the factory generally agree on who their customers are.
In contrast, university faculty, administrators, and students all have differing views of the purpose of the university (teaching undergraduates, teaching graduate students, service, research) and differing views of the university's customers (the public, students, faculty members, the professions). When there is unhappiness about the products of the system (poor students, nonsensical research), agreement on the causes of the problems is difficult to negotiate because of the disagreements on purposes and methods.
In sum, a modern U.S. research university is a complex system made up of intertwined units that are in competition with each other for resources and that disagree about purposes and methods. Moreover, they differ in the status and power they have within the larger system. A graphic representation of the system (see Figure 1) would show many independent units, some more important and more powerful than others.
Another important feature of this system is that it is only superficially hierarchical. The president and academic vice president do have a great deal of power over the individual units, especially through the control of the flow of resources to the departments. However, faculty members (especially graduate research professors) have tremendous independence provided by the tenure system that has evolved to protect the freedom of thought and freedom of action of faculty members. A president, vice president, or graduate dean who wants an ITA training program can have difficulty forcing department chairs or graduate faculty to support the program, unless he/she controls resources (funding for assistantships, for example) or can persuade the department that support of ITA training is to the benefit of the department and/or the individual faculty member.
Thus, the ITA program administrator will usually function within a system that can be graphically represented as a box containing many smaller boxes (Figure 1). Some of the smaller boxes are tied together (into colleges or professional subgroups within a single college). Some of the boxes are located on the far side of the system and have no (or few) connections to the other boxes-these unconnected and relatively powerless units could be labeled ESL or instructional development.
Some of the boxes represent traditional barriers to the flow of students through the system: the English Department blocking students from Engineering, for example, or the Mathematics Department controlling flow into Business. The ITA program is to be inserted into this system in a similar fashion and can be pictured either as a filter or a roadblock depending on the design and role of the ITA program. The organized chaos within the box is constantly being influenced by forces outside the boxes, shown graphically by arrows that represent the public, the professions, and accreditation agencies.
When students flow into a university system, they find inside many different units, many with little or no connection to each other. To exit the system, students must work their way through the maze of boxes. Administrators who establish new programs must find out as much as possible about the units that make up the larger system and decide on the most appropriate location for their new unit.
Figure 1. The University as a System
The Culture Audit
What can we do to gain a better sense of the structure and the values of our own institutions? While the culture audit is a more highly refined process than most ITA administrators will have time to undertake, certain of its analytical approaches can be of immediate use. What follows is a listing of artifacts that can be collected and analyzed for better understanding of the structure and values that control actions within a particular institution (Kuh & Whitt, 1988).
1. An official organization chart. This chart describes the line of command within the institution, its official ordering of power. It should be remembered, however, that a university is not an army; higher administrators do more persuading than ordering.
2. The most recent undergraduate and graduate catalogs. What does the university state as its mission? How does it explain its history? Is anything said about research objectives? Is anything said about international connections or dimensions? What does the catalog look like? Who is pictured on the front? Georgia State University has a separate graduate catalog for each of its colleges, a clear statement about the independence of each college. Auburn University's 1989-90 Undergraduate Bulletin shows a young white man bicycling down a tree-lined street in front of a building of Greek revival design. Its Graduate Bulletin for the same year shows an older man standing on the front porch of a red brick building talking with a younger man of somewhat Asian features; behind the two men you can see four students of graduate school age walking through a park-like area of green grass and trees. What does this research university think are the differences between undergraduate and graduate education?
3. The most recent self-study document; for example, an accreditation self-report.
4. Brochures used by the institution for recruiting students and for advertising programs. What is it proud of?
5. Slogans and colors. How does the institution view itself?
6. Student course evaluation forms. What seems to matter in classroom teaching and relationships with students?
7. Campus maps. How does the university/college organize itself? Who gets the most space? Where are the various units located in relation to each other? At the University of Wyoming, the fine arts building is located in a section of the campus far removed from the rest; its location may be one indication of the relation of the performing arts to the rest of the University.
8. Student, faculty, and staff newspapers. What are the big stories?
9. Rituals and programming used to mark divisions of the year. What kinds of celebrations tie the community together? For that matter, is the community tied together? Does the law school have separate celebrations from the rest of the university?
10. Demographic information on undergraduate students. Where do they come from? How old are they? What do they study? Sexes? Races?
11. Heroes. Are any founders still remembered? For what actions or characteristics? Is anyone more famous than the football coach?
ITA Program Design
As program directors develop a clearer understanding of the nature of their own institution-who the students are, how the institution is organized, who holds the power, where support for the program lies, what the traditions and accepted practices are-they can begin to address program issues. Some of the issues that must be considered are program design, curriculum design, program implementation, and assessment. There are three commonly used program designs in ITA training: orientation, pre-term/pre-teach, and concurrent (Constantinides, 1987) (see Figure 2). Since institutions have unique characteristics, the program should be designed in accordance with the needs and resources of that institution.
Orientation
Orientation programs may be either pre-term or concurrent with the term. Pre-term orientations usually last from 1-5 days and are sometimes part of a general orientation for all new TAs. Pre-term orientation programs have several advantages. They are relatively inexpensive and are logistically simpler to arrange than longer pre-term programs. Stipends for participants are not generally required, nor are special housing arrangements. A great deal of information can be presented in a short time, although the content must be carefully selected.
There are also drawbacks. New TAs are often also new students and have many new-student tasks to perform. They must register and prepare for their own classes. They are often overwhelmed by the amount of information provided in the orientation and have very little time to assimilate it. Newly arrived students often have not had time to accustom themselves to American English or unfamiliar accents. The length of the program is generally insufficient to effect any needed changes in pronunciation or to address other language-related issues, nor is there time for students to do teaching simulations or practice their presentation skills. There are also no classes in session for them to observe.
Concurrent orientations take place during the TAs' first semester of teaching, allowing the students more time to assimilate the material. Some schools also plan TA retreats or other support system projects as a part of the orientation. One major problem with concurrent programs is that the students may be teaching without adequate preparation and training. Another drawback is that TAs are often reluctant to devote an appropriate amount of time to the program since orientation programs carry no credit and are scheduled on top of a TA's otherwise heavy teaching and course load.
Pre-term/Pre-teach
A second type of program is the pre-term/pre-teach design, which may be offered either for credit or for no credit. Pre-term programs are usually a minimum of two weeks, with three weeks being a popular length. Advantages of this design include having more time for instruction and development of teaching techniques, time to address a range of language issues, time to evaluate students' abilities more thoroughly, and time for the students to present lessons in a variety of contexts.
The primary disadvantages are logistical and financial. Such programs often involve expenses for housing and food, and stipends for participants, in addition to instructional, clerical, and equipment costs. Often there are no classes for participants to observe during the pre-term session, which is commonly held in August. However, this disadvantage can be overcome somewhat by having a follow-up program in the fall semester to provide observation opportunities and to monitor the students' progress in the classroom. Because of its length and intensity, this design may also be more difficult to coordinate if large numbers of TAs are involved.
Concurrent
The third type of design is concurrent, i.e., a program that occurs during the regular semester. These programs sometimes run every term and may be credit or non-credit. Students may participate prior to any teaching assignment or, in some programs, during their first semester of teaching. Concurrent programs differ from concurrent orientations in length and in the type and amount of instruction provided. Logistically, concurrent programs present fewer problems in terms of housing and staff than do pre-term programs. Courses are available for participants to observe, the pace is not as intense as in a pre-term program, and participants have ample time to prepare practice presentations and effect changes in both their language and teaching skills.
Students will have more enthusiasm for participation, practice, and change in a credit course than in a non-credit one.
Concurrent programs are not suitable for all institutions. Some universities offer assistantships to new ITAs who must assume their teaching duties during their first semester on campus. If students are to be assessed and trained prior to teaching, then a concurrent design would not be advisable.
There are additional variables that would affect the choice of program design, e.g., availability and training of staff and other resources. If a program is to have a segment to work on pronunciation, a faculty member must be available with appropriate education and experience to design and teach an effective course. If videotaping of microteaching is to be done, the program must have video equipment readily available.
Curriculum Design
The next element to consider is curriculum design. Although most programs include aspects of language, culture, and pedagogy, the exact structure of a curriculum will be determined by institutional needs and program design.
Many institutions speak of the "ITA problem" as a problem of pronunciation. Pronunciation has become the cover term for a whole range of linguistic, pedagogical, and cultural behaviors that may be new to our students. An overriding issue in the language curriculum must be to determine whether or not we can establish threshold proficiency for ITAs appropriate to the tasks they are to perform. In addition to pronunciation, there are other aspects of language proficiency that need to be addressed. The language curriculum may also focus on the identification of key vocabulary and linguistic routines and functions specific to various disciplines. Other variables, such as listening comprehension, stress, intonation and rhythm patterns, fluency, and overall comprehensibility might be considered. Goals and objectives should be consistent with the constraints of the program design. For example, it is unrealistic to expect global changes in pronunciation after a three-week pre-term course, whereas significant improvement in listening comprehension is possible.
In teaching pedagogy, there are several key questions to consider. Is there a set of teaching behaviors common to "good" teachers across disciplines? What are they? How do we identify these characteristics within the context of a specific institution? How important are questioning and interaction techniques, and how can we teach strategies for repairing flawed communication? If ESL teachers are the primary instructors in an ITA program, they must exercise special care to teach behaviors that are appropriate to the physics lab or the economics or business lecture hall.
Cross-cultural variables will also affect communication in the classroom, and most programs include a strong cultural component. In most ITA programs, culture must be defined more broadly than social traditions. It is important that ITAs and TA trainers understand the institutional culture and the subcultures of the disciplines. An understanding of the classroom culture of American undergraduates is also vital. Care should be taken that trainers have (or have had) regular contact with American undergraduates in order that an accurate picture of the classroom be painted for the ITAs.
An emerging and controversial issue in ITA training is that of orienting American students to ITAs. Such orientation might include information about ITAs' backgrounds and general academic credentials, information on ITA training programs at the school, issues in international education, and strategies for interacting more effectively with ITAs. Part of the frustration of undergraduates with ITAs appears to be a sense of powerlessness-not knowing what to do or how to interact with persons different from themselves. An orientation program could serve to alleviate some of their anxiety and improve communication in the classroom. However, some program administrators might be reluctant to suggest such an orientation out of concern that drawing attention to their ITAs in this way would create some hostility or negative reactions. It is advisable that directors have a good sense of how an orientation would be received on their campuses before recommending this.
Program Implementation
Besides program and curriculum design, there are pragmatic considerations regarding program implementation. The success of a program may ultimately rest on how it is implemented. Implementation issues, in particular, must be approached in accordance with an analysis of each institution. Some of these pragmatic considerations are funding, scheduling, staffing, administrative location, and evaluation of program effectiveness.
• Funding for the program is a major issue. In some states, state-appropriated funds cannot be used to finance ITA programs. In such cases, programs may be funded by student services fees, interest income, or even by locally funded Intensive English Programs. It is important to clarify both the source and adequacy of available funds, since this will affect most decisions about program design. (See Byrd, Smith, and Constantinides, 1990, for a detailed discussion of funding practices.)
• Scheduling is another possible source of trouble. For example, pre-term programs should be timed to occur when housing and dining facilities are available. All types of non-credit ITA programs may have problems getting classroom space at convenient times, since credit courses may have priority.
• Staffing decisions must also be based on design decisions and institutional resources. A few universities have hired people for the specific purpose of implementing an ITA program; others use available staff. Program directors are generally from the fields of ESL, instructional development, speech communication, or some combination of disciplines. Staff responsibilities vis-à-vis other teaching assignments or administrative duties need to be clearly defined.
The choice of a program director is of utmost importance to the credibility of the ITA program. A successful program director will be able to balance the legitimate needs of various constituencies and still maintain adequate standards. Through close cooperation and interaction with ITA faculty supervisors and administrators, the program director will enhance the credibility of the program within the institutional setting. In order to accomplish this, of course, the director must have a clear and comprehensive view of the institution, the kind of view that can be obtained through the culture audit or environmental scan discussed earlier.
• The administrative location of a program may have a tremendous impact on the success of that program. Programs housed within a single department, e.g., Chemistry, will not have the influence, funding, or credibility of programs housed within the graduate school or the provost's office.
The best location for a program is the one that gives the greatest authority, visibility, credibility, and funding. This will clearly vary from institution to institution.
• Another implementation issue is how best to evaluate the effectiveness of the program. Common measures include feedback from the participants, program instructors, departments, and undergraduate students. Follow-up programs may also provide a vehicle for evaluating program effectiveness in that classroom observations can indicate whether the desired outcomes of the program were achieved.
Issues in ITA Assessment
A final critical component of ITA programs is assessment. Several types of assessment may be done: the most common are screening, placement, diagnosis, progress, and exit (final evaluation). Screening instruments may come at the beginning or end of a training program, and are used to separate those ITAs who can go into the classroom from those who are not yet ready. Placement testing serves the needs of those who require additional language instruction prior to ITA training; diagnostic testing helps instructors optimize their teaching. Progress tests help trainers know if the participants are improving. And exit tests are final measures of achievement.
ITA Tests
ITA tests commonly assess English language skills, presentation skills, classroom management skills, public speaking skills, and knowledge of the American classroom culture. The ideal ITA test should be oral, interactive, and relevant. Further, the test should be valid, reliable, practical, and suitable for making recommendations. Validity means making the test relevant to the task. To improve reliability, there should be multiple ratings, trained raters, and consistent, standardized test administration and scoring. A practical test is basically the best test obtainable within the restrictions of the program budget and the time allotted for testing purposes.
There are four types of ITA tests commonly used: pre-recorded speaking tests, oral interviews, oral communicative performance tests, and teaching simulations.
Pre-recorded Speaking Tests
A pre-recorded audio or video test is usually administered to a group rather than an individual. Student responses are recorded for subsequent rating. An example of an audio test designed for group administration is the SPEAK (Speaking Proficiency English Assessment Kit), with audiotaped student responses. An example of an oral assessment test with video-recorded responses is the COVIS test (Council of International Educational Exchange Video Interview Service). This test features audiotaped questions, but videotapes the subject's responses for evaluation. Another variety of oral assessment test, used by the University of Michigan ITA Battery, presents the testee with videotaped prompts, the responses to which are audiotaped for later evaluation.
A pre-recorded test with audio responses, such as the SPEAK, has one major advantage. Because it is usually given to a group, not much time is required to administer the test. The SPEAK has multiple forms and includes rater-training materials and testing materials in readily available kit form.
There are also several disadvantages to this type of test. The test has low face validity since it is not very relevant to actual ITA tasks, such as teaching skills, public speaking skills, and other language skills. There is no interaction in the test, and it has poor flexibility with regard to test administration time, test format, proficiency level adjustment, or adjustment for individual needs such as nervousness or illness. Also, since subjective scoring is required, raters must have a high level of expertise, which requires advance training. Finally, the cost of equipment to administer the test can be substantial in that a language laboratory facility is generally required.
Oral Interviews
Oral interviews are most often formalized conversational one-on-one formats, with student responses usually recorded for later rating. Generally, the interviewer gives the student a series of questions to be answered. Examples are the FSI (Foreign Service Institute) Interview, the Ilyin Oral Interview, and the ELTA (English Language Testing Associates) Oral Interview.
The oral interview is fairly flexible with regard to the time needed for administration, although if large numbers are being tested, total administration time could be quite long. The test format is very flexible, allowing for both proficiency level adjustment and individual needs. The cost of equipment needed to administer the test is low, and test materials are readily available.
The oral interview, however, is not very relevant to ITA tasks, and although there is some interaction, it does not usually simulate realistic classroom interaction. The face validity is marginal, and raters need a high level of expertise to score the interview.
Oral Communicative Performance Tests
The oral communicative performance test is a set of predetermined tasks involving oral skills generally required of classroom teachers. Tasks may include pronouncing technical terms from one's academic field, reading aloud, making classroom announcements, explaining a technical reading, and engaging in role play involving common teaching-related experiences. Student responses are recorded for later rating. Examples are the University of Michigan ITA Test, the UCLA Oral Proficiency Test, the Michigan State University ITA Test, and the Georgia State University ITA Test. The Georgia State University test combines the results of the oral performance test with the SPEAK.
Oral communicative performance tests overcome some of the disadvantages of the SPEAK test and the oral interview. They have greater relevance to ITA tasks, involve more realistic interaction, and have good face validity.
The tests are flexible with regard to administration time, test format, and attention to student needs. Some of the disadvantages of the oral communicative performance test include a longer administration time, questionable availability of rater training and test materials, little flexibility in proficiency level adjustment, and a high level of expertise needed by scorers. The cost of the equipment will vary, depending on whether the test is videotaped or audiotaped.
Teaching Simulations
The final type of commonly used ITA test is the teaching simulation. Here, students make a formalized teaching presentation designed to emulate classroom teaching, with or without an audience (a simulated "class"). The simulation is usually videotaped for subsequent rating. One of the most well-known examples is the Iowa State University ITA test, the TEACH, which is used in conjunction with the SPEAK.
The teaching simulation also has strengths and weaknesses. It has high relevance to ITA tasks and high face validity. There is realistic interaction if an audience is present, and the individual administration time may be quite flexible. Total administration time would vary according to the number of ITAs tested. Usually, the simulation and the oral communicative performance test require the greatest amount of total test administration time.
Because of the nature of the teaching simulation, however, there is limited flexibility in format, proficiency level adjustment, or attention to individual needs. The cost of videotaping equipment must also be considered. Raters require a high level of expertise since scoring is also subjective.
Finally, there is somewhat limited availability of test and rater training materials.
The type of assessment chosen by an institution is most often determined on the basis of program design and goals, availability and training of staff, and adequate financial resources. Program directors should review their assessment annually and modify the process in response to changing needs and the adequacy of the results.
Models of ITA Assessment
All aspects of the assessment process contribute to the perceived effectiveness and credibility of the program; therefore, program directors must carefully integrate assessment policies into the overall program design. Screening, diagnosis, placement, progress, and exit evaluations must all mesh with the goals, resources, and structure of the program. The models of ITA assessment shown in Flow Charts I, II, and III (see following page) are meant to be illustrative of three types of successful systems in use today. However, other successful models are available as well.
The assessment model described in Flow Chart I assumes that there is an on-campus ITA program with a large number of prospective ITAs. There are frequent administrations of the testing instruments, and an on-campus Intensive English Program. The same type of test is used for screening, placement, and exit (final evaluation), but the content of each test varies.
The assessment model shown in Flow Chart II is suitable for a program that has a relatively small number of prospective ITAs. There is limited access to ESL training. Assessment is conducted using various means during the course of the training program. Several tests are used for screening and exit.
Flow Chart III models a system used with a large number of prospective ITAs. There is an on-campus ESL program, and a combination of tests is used for screening and placement.
Conclusion
ITA training is an important, complex issue. There are many successful program models to examine. It is useful to remember that these programs are probably successful, in part, because they were designed within the context of their particular institutions. Design factors can be identified, but the choices involved in program design, curriculum design, method of assessment, and implementation will vary greatly from institution to institution. An effective systems analysis or self-study will facilitate these decisions and improve the chances that the program will succeed.
"year": 1991,
"sha1": "3d0732bc99fd15cc6317e8f6ede5484353bcda74",
"oa_license": null,
"oa_url": "https://doi.org/10.1002/j.2334-4822.1991.tb00193.x",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "713df886f1ef15ec1e051979ef1d598ceb9cf5fa",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Engineering"
]
} |
The Nature of Employee–Organization Relationships at Polish Universities under Pandemic Conditions
(1) Background: The aim of this study is to describe manager–employee and employee–employee relations during the COVID-19 pandemic and their impact on measures of the likely use of elements of remote teaching by university employees in the future. (2) Methods: The study used a descriptive-correlation research design with a survey as the primary instrument for data gathering. A total of 732 personnel took part in the survey, selected by a convenience sampling technique. The researchers used an adapted and modified instrument to gather data. The instrument underwent a reliability test. This study used structural equation modeling to confirm hypotheses. (3) Results: It was shown that manager–employee relations at Polish universities during the COVID-19 pandemic were of low quality. However, employee–employee relations were of above-average quality, and have a significant positive impact on intentions to use elements of remote working in the future. (4) Conclusions: Based on the results of the study, some general recommendations are presented for change management and relationship-building.
Introduction
Nowadays it is considered that corporate social responsibility (CSR) is an innovative way to build the competitive advantage of a company in the long term. When considering the activity of companies in implementing the concept of social responsibility, a proactive strategy is noticeable [1]. Thus, companies of their own accord seek to develop relationships with groups interested in the functioning of the enterprise, that is, they form relationships with employees [2]. Establishing and then developing a relationship between an organization and its employees is extremely important from a strategic point of view. Indeed, it is nowadays believed that a firm's relationship with its employees is an essential asset needed for both competitive and cooperative market activities [3].
When the unexpected circumstances of the COVID-19 pandemic first arose, organizations found themselves in one of two situations. In the first case, the employer and employees had previous experience of remote working-the employer had knowledge of the tools used for that purpose, had oversight mechanisms in place, and was capable of managing work even without face-to-face contact with employees. Employees were already experienced: they knew how to plan the working day, were able to report on completed tasks and plan subsequent ones, and understood the point of view of the employer, who had been compelled to change the form of working and to restrict possibilities of contact.
In the second situation, employers and employees had not used remote working before March 2020. The employer had to place an unprecedented degree of trust in staff, and to understand that this was a change that had to be accepted and skillfully managed [4,5]. A significant role was played by both sides and by the attitudes they adopted. They had to adapt immediately to the new conditions, and clearly define the parameters of remote work and methods of reporting and monitoring it. Regardless of which situation applied, there came a point at which business meetings began to be held exclusively online, which affected interpersonal relations [6,7]. Online meetings place limitations on relations on the emotional plane, and provide limited room for professional and business privacy. Concern about the future, pressure for results, anxiety about market position, and fear of unexpected economic change lead to stressful situations, increase the sense of threat, and affect employee relations, above all those between managers and their subordinates.
Thus, building correct employee relations should play a key role at this time. The implementation of CSR in the area of relations with employees may contribute to a more effective resolution of conflicts that arise in crisis situations in the enterprise [1]. On the other hand, the time of crisis can be a good moment to introduce changes in the company, which will result in a permanent integration of CSR into its activities and influence a more strategic approach to this issue. Internal relations within the company, especially concerning employees, are the sphere where CSR should be particularly important during the crisis. It would appear important to find answers to questions about the nature of employees' relations with their immediate superior, and with colleagues, during such a period of isolation. Do those relationships-an essential part of work-have an impact on employees' attitude to the prospect of remote working in the future?
Answering these questions will allow us to contribute to the literature both on relationships (and, more broadly, the social responsibility of organizations) and on work done from home. First, we validate scales measuring perceptions of employee-organization and employee-employee relationships. In addition, we show how these relationships are shaped during the COVID-19 crisis. This information allows for the conscious and effective introduction of appropriate management actions in the organization. Additionally, it can inspire organizations to introduce additional best practices. Second, our results suggest that employee relationships significantly influence the use of components of remote work in the future. This finding suggests that, for shaping digital competencies and CSR activities in an organization, it is important to have an environment that enables the sharing of knowledge and of the results of implementing individual activities; focusing on further training of employees may not be sufficient.
The rest of this paper is organized as follows. Section 2 presents the literature review and states the research hypotheses. Section 3 describes the data and methodology. Section 4 presents the results on employee-organization and employee-to-employee relationships and their impact on the future of remote working. Section 5 discusses the results. Section 6 concludes with the research limitations and directions for further research.
Literature Review
Nowadays, CSR is considered an innovative way to build the competitive advantage of an enterprise in the long term. When considering the activity of firms in implementing the concept of social responsibility, a proactive strategy is noticeable. Thus, companies strive of their own accord to develop relations with groups interested in the functioning of the enterprise, i.e., they create relations with employees [8]. Moreover, the process approach assumes that people are the driving force of every business. It is they who are the most important. It is widely known that a satisfied employee is more productive, loyal and committed [5,7].
Process management indicates what a firm can do to ensure that its employees are satisfied and that their work is effective [9,10]. One of its elements is the monitoring of the process of relationship management, including both formal aspects, defined by policy and procedures on relations between employees and the organization, and informal aspects linked to everyday practice [11,12]. Building positive relationships with employees requires an efficient flow of information.
Manager-Employee Relations
A relationship is characterized by the way in which two people or groups of people perceive each other and behave toward each other. The organization-employee relationship, therefore, means the way in which the employer (a person or unit) and employees perceive each other and treat each other in the workplace. Both individual and collective relations are important. A relationship is a process in which the manager (organization) is aware of employees' multifaceted needs, and is thus able to pay attention to their financial situation, their emotional state, the working atmosphere, and working culture in broad terms [10,13]. This process creates the best conditions for holistic, comprehensive development on the part of employees. From the moment the contract of employment is signed, a relationship begins to develop between employee and employer, the latter represented by the employee's immediate superior. The manager-employee relationship is grounded above all in a social and psychological contract [4,10,13,14]. This contract comprises beliefs concerning the reciprocal obligations of the two parties [14]. If employees perceive that their organization has not duly fulfilled the terms of the contract, this can lower their trust, job satisfaction, desire to remain with the organization, sense of duty, and productivity [4,15]. It is therefore important to identify the causes of psychological contract violation and the moment at which it occurs. According to a study by [13], there are two basic conditions giving rise to a perception of violation: reneging, meaning that an obligation is acknowledged but not fulfilled; and incongruence, meaning that there are differing understandings of an obligation. The nature of the manager-employee relationship is determined by multiple factors and is dependent on the organization [16]. The management of these relationships is usually formalized in an employee relations policy or program.
The process of the shaping of relationships affects the culture of the organization, employees' satisfaction, and staff turnover rates [15]. According to Nesco Resource [17], "When employees have a strong, healthy relationship with their employers, the entire company benefits. Studies show that employees who have mutually respectful relationships with their employers are more likely to be happy, loyal, and productive in the long-run." The value added by manager-employee relations can enable the retention of loyal employees, reduce workplace conflicts, increase productivity, and support the mutual learning process [4,15]. Studies show that companies which have positive relations between employees and managers are generally more successful [18].
According to the literature [12,16], factors affecting the building of strong manager-employee relations include sincerity, trust, assurance of adequate preparation, setting of clear expectations, flexibility, work-life balance, productivity, and commitment. Another list of important issues in the building of manager-employee relations includes flexibility, adaptability, and the ability to work together to find how best to satisfy the needs of both parties [19]. One of the most appreciated values is equality and the expectation of equality in business relations. Thus, everyone in the organization should adhere to the same standards at every level, from interns up to the CEO. This consistency, by building trust and self-confidence, makes a key contribution to relations not only between employees and managers, but also between colleagues. Regular monitoring of employee satisfaction, understanding of the effectiveness of actions undertaken, and open and transparent communication help to build positive, high-quality relations.
Employee-Employee Relations
Employee relations can make or destroy the atmosphere at the workplace. They are a key factor behind an organization's performance, development and learning [15]. Because employees are every organization's driving force, it is important to be confident that both manager-employee relations and relations between colleagues are well maintained [12]. Employee-employee relations refer to how a person perceives his or her relationship with coworkers as a whole [20]. The degree to which members of working groups or units are satisfied with their activities while cooperating on tasks determines the quality of their relations within the team. Seers [20] proposed a ten-item scale for measuring the quality of cooperation in a team, while Raziq and Maulabakhsh [21] proposed their own nine-item scale. Appropriate relations between coworkers bring competitive advantage to the organization [22]. The quality of these relations is also very important, since according to [23] in Poland, only 37% of employees are happy, and a happy employee means greater profits for the firm. Moreover, a satisfied employee is 43% more productive and 86% more creative, takes less sick leave (by 36%), and generates lower staff turnover (by as much as 61%).
Remote Working
In the literature [24,25], three main components of remote working are identified: the use of information technology, a link with the organization, and delocalization of work. Remote working is defined as work done at home for the employer on a full-time or part-time basis, and communication with the organization and interested parties using telecommunications and other information technologies. Both employers and employees believe that remote working will gain importance in the near future [24]. However, this will require changes to be made to the organization of processes within the enterprise [5]. There are some employers who believe that remote working will not be possible for their employees in the longer term, due to, for example, specific features of their line of business. Some employers would like to retain remote working at least partially, as they acknowledge that staff do not need to be in the company office to perform their duties reliably [26]. It should be noted that almost 30% of employers believe that their employees' productivity is unchanged or higher when they work remotely, while 14% report that employees feel better when they have the possibility of remote working [27,28]. Other significant factors for the use of remote working in the future include the satisfaction of employees working in that way and their attitude to the practice [29].
Harmonious interpersonal relations are the basis of well-being in both private and professional life. Therefore, it is always worth investing in their improvement. For employers, such an investment can be particularly profitable. Thanks to trust, openness and acceptance in relations, they will gain committed employees willing to cooperate in the conditions of a crisis or incident. Unexpected situations strengthen and enhance employees' skills, which can then find a place in their future work.
Hypotheses
Manager-employee and employee-employee relations that have been shaped in normal conditions may undergo changes when remote working is introduced [24,30]. In both cases, the amount of time spent on such relationships is probably reduced, as face-to-face contact is replaced by electronic or virtual communication. Moreover, behavioral factors such as increased anxiety and reduced trust may lead to a lowering of the quality of relations [25].
Sias [31] shows that the quality of relations with colleagues and with one's immediate superior is positively correlated with job satisfaction and commitment to the organization. Later studies [11,15,32] confirm that manager-employee and employee-employee relations have an impact on both job satisfaction and business growth. This research supports the statement that good relations between an organization and its employees, and between employees, are a predictor for business growth, success and job satisfaction. According to statistics [15], the satisfaction of employees grows by almost 50% when they develop a close relationship at work. It has also been proved that having friends at work brings benefits not only to employees themselves, but also to the organization. Relations with managers and colleagues are of fundamental importance for the quality of work as perceived by the employee [33]. From the aforementioned research results, it may be concluded that the desire to use remote working in the future is partly determined by the work environment, which is shaped by-among other things-manager-employee and employee-employee relations.
Based on the above considerations, the following hypotheses were proposed (see Figure 1):
Hypothesis 1 (H1). The quality of manager-employee relations (RME) has an influence on the intent to use remote working in the future (IRWF).
Hypothesis 2 (H2). The quality of manager-employee relations (RME) is directly linked to the quality of employee-employee relations (REE).
Hypothesis 3 (H3). The quality of employee-employee relations (REE) has an influence on the intent to use remote working in the future (IRWF).
Description of Sample
The studied population consisted of academic teachers employed at universities in Poland. It was sought to make participation in the survey available to all members of this population. Access to subjects was obtained in two stages. At the first stage, university e-mail addresses were obtained from a database, and e-mail messages were sent to them with a request to distribute a prepared questionnaire to employees. The e-mails were sent three times, as a reminder (one university requested that no more messages be sent, as all employees had already been notified). Unfortunately, there were some universities from which no responses were received, and attempts to make individual contact with employees of such an institution were also not successful. Therefore, the sample obtained is not representative. The survey was voluntary: participation was limited to persons who expressed consent and were willing to complete the questionnaire. According to figures from the Polish Central Statistical Office (GUS), on 31 December 2020, a total of 93,088 academic teachers were employed at universities in Poland. The number participating in the survey was 732, which represents a response rate of only 0.79%. It is difficult to understand why academics, who recognize the importance of every respondent to a survey, are unwilling to participate on the other side and support an initiative of this kind by providing their opinions. Survey responses were collected from 10 May to 30 June 2020.
Instrument of Data Collection
The survey was conducted using the CAWI (Computer Assisted Web Interview) technique. A major advantage of CAWI is that the questionnaire can be completed in respondents' natural environment, at their desired time and speed. A person responding to an online questionnaire has control over the questionnaire tool, and has no contact with an interviewer; they thus feel unconstrained and more willing to admit to things that would be difficult to acknowledge in a face-to-face interview. The online questionnaire was presented using the publicly available Google Forms tool. The aim of the research was to learn about respondents' experiences with remote working under crisis conditions. Completion of the questionnaire took approximately 30 min. It consisted of 15 different parts. In the present study, only four of them were used, concerning relations with one's immediate superior, relations with colleagues, plans to use remote working tools in the future, and personal information (control variables).
The questionnaire was discussed among its creators, and was then reviewed by experts, who changed the wording of some of the questions and made numerous corrections to the form of the questionnaire. At the next stage, a pilot study was carried out among a group of 20 university employees, which enabled the identification and elimination of ambiguities and repetitions. After this validation process, the questionnaire was used in the full survey.
Measurement Scales
The scale used in the questionnaire to evaluate relations with managers was based on works on leadership theory [15,21,24,34]. The proposed scale measures the quality of the relations between a manager and subordinates which are essential for work. In its final form, following the pilot study, it consisted of seven items ( Table 1). The scale used to evaluate employee-employee relations (three items; Table 1) was constructed based on the theory of teamworking [15,20,24], and was significantly reduced in size as a result of the expert evaluation and pilot study. It measures how a person perceives their relations with colleagues as a whole [20]. The scale for intent to use remote working in the future (five items; Table 1) was constructed based on existing publications [9,24,26,29], with details added as a result of the expert evaluation. In each case, respondents reported their opinions using the seven-point Likert scale, selecting answers from 1 (strongly disagree) up to 7 (strongly agree).
Results
In the period of the pandemic, the use of remote working at universities became a necessity. It had previously been used only for international communication under academic and teaching projects, and for e-learning courses. Now universities were forced to switch to remote forms of teaching overnight. From 10 March 2020, remote working was introduced for academics and teachers at Polish universities and for more than three-quarters of administrative employees. Before the outbreak of the coronavirus pandemic, this fraction had been much lower; not all universities had suitable tools, and employees were not trained for such a process. The switch was a major challenge for management and for the employees themselves. It required a new approach to the implementation of particular processes, and above all a different approach to management, since-although academic teaching staff are accustomed to a high level of autonomy in their activities-only 52.5% of those responding to the survey had had even partial prior experience with remote working. Additionally, the process itself had never previously been implemented on such a scale at any Polish university.
Details of the Results
Of the respondents to the survey, 58.2% were women and 41.8% were men. In terms of age, 20% were under 40, 47.5% were aged 41-50, 22.1% were aged 51-60, and 11.4% were over 60. Full-time university employees accounted for 94.8% of the total, while 5.2% were subject to other forms of employment (self-employment, part-time working, freelance contract or contract for specified work). Of the respondents, 14.5% performed managerial functions (and thus had other members of the study population as subordinates). As regards the length of service, 21.4% had been teaching for less than 10 years, 34.2% for 11-20 years, 31.4% for 21-30 years, and 13% for more than 30 years.
First, the information collected from the selected parts of the questionnaire underwent factor analysis. For these items, the KMO index was 0.812 (>0.7), which meant that the analysis could be accepted. Bartlett's test, with Chi-Square = 8512.447 (p = 0.000), indicated that the factor model was suitable for the analyzed variables, as the variables were statistically significantly related. The explained variance was 69.81%. In the next step, Kaiser's criterion was used to identify three latent variables, namely the number of factors suggested by eigenvalues greater than one for a given factor solution. Next, the Varimax rotation method was applied; the obtained factor loadings are given in Table 1. All of the loadings are greater than 0.63 (see Table 1). Based on theoretical determinants, three measurement scales were obtained, concerning manager-employee relations (RME), employee-employee relations (REE), and intent to continue remote working in the future (IRWF); measures of reliability are given in Table 2. These measures confirm the reliability of the measurement scales used; in all cases, we have Cronbach's alpha > 0.7, AVE > 0.5 and CR > 0.7, and hence the values are greater than the accepted boundary values [35]. The descriptive statistics (Table 2) indicate that respondents' evaluations are lowest in the case of relations with managers, and here also the views expressed were subject to the greatest variation. The highest evaluations were those of the intent to use remote working in the future, and in this regard, respondents formed a relatively homogeneous group.
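To make these screening and reliability steps concrete, the sketch below reproduces them with open-source tools rather than the statistical packages used in the study; the DataFrame `responses`, the item labels, and the per-scale groupings are assumptions standing in for the questionnaire data, so the outputs will not match the values reported above.

```python
# Hedged sketch of the factor screening and reliability checks described above.
# Assumes a pandas DataFrame `responses` with one column per Likert item
# (e.g., seven RME items, three REE items, five IRWF items); names are illustrative.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo


def screen_and_extract(responses: pd.DataFrame, n_factors: int = 3):
    # Bartlett's test of sphericity and overall KMO (accept the analysis if KMO > 0.7)
    chi_square, p_value = calculate_bartlett_sphericity(responses)
    _, kmo_overall = calculate_kmo(responses)

    # Factor extraction with Varimax rotation; n_factors follows the Kaiser criterion
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
    fa.fit(responses)
    loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
    return kmo_overall, chi_square, p_value, loadings


def cronbach_alpha(scale_items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the scale total)
    k = scale_items.shape[1]
    item_var = scale_items.var(axis=0, ddof=1).sum()
    total_var = scale_items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)


def ave_and_cr(std_loadings: np.ndarray):
    # AVE = mean squared standardized loading; CR = composite reliability
    squared = std_loadings ** 2
    ave = squared.mean()
    cr = std_loadings.sum() ** 2 / (std_loadings.sum() ** 2 + (1 - squared).sum())
    return ave, cr
```

The thresholds used in the study (alpha > 0.7, AVE > 0.5, CR > 0.7) can then be checked per scale on the outputs of these helpers.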
For verification of the conceptual model and the proposed hypotheses, structural equation modeling was used. Here it is important to verify whether each construct differs sufficiently from the other constructs. This may be done using discriminant validity analysis according to the criterion of Fornell and Larcker [35]; that is, checking whether the share of variance between each construct and its measures is greater than the share of variance between the given construct and the other hidden variables. To do this, the square root of the AVE value for each construct (measurement scale) was compared with the correlations between constructs. As shown in Table 2, all square roots of AVE are greater than the correlations of the constructs, which confirms adequate discriminant validity. The formulated conceptual model was verified empirically using AMOS 18. The following acceptable values of fit statistics were obtained: χ² = 409.77, df = 87, p < 0.001, χ²/df = 4.71, CFI = 0.907, TLI = 0.906, RMSEA = 0.055. Results of estimation are given in Table 3.
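The Fornell-Larcker comparison itself is simple to express in code. The sketch below, a hypothetical illustration rather than the authors' procedure, builds the usual discriminant-validity table in which the square root of each construct's AVE sits on the diagonal and the inter-construct correlations fill the off-diagonal cells.

```python
# Sketch of the Fornell-Larcker discriminant-validity check; the construct
# scores and AVE values passed in are placeholders, not the study's data.
import numpy as np
import pandas as pd

def fornell_larcker(scores: pd.DataFrame, ave: dict) -> pd.DataFrame:
    table = scores.corr()
    # Discriminant validity holds when each diagonal entry (square root of AVE)
    # exceeds the correlations in its row and column.
    for construct in scores.columns:
        table.loc[construct, construct] = np.sqrt(ave[construct])
    return table

# Hypothetical usage:
# scores = pd.DataFrame({"RME": rme_scores, "REE": ree_scores, "IRWF": irwf_scores})
# print(fornell_larcker(scores, {"RME": 0.62, "REE": 0.58, "IRWF": 0.66}))
```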
Summary of Hypothesis Tests
The following hypotheses were tested. The path from RME to IRWF was not significant (p = 0.062). Therefore, the hypothesis that the quality of manager-employee relations has an influence on the intent to use remote working in the future was not supported by the data.
The correlation between RME and REE is statistically significant (p < 0.001) and positive (in the hypothesized direction). Thus, the data strongly supported the hypothesis that the quality of manager-employee relations is directly linked to the quality of employee-employee relations.
The path from REE to IRWF was significant (p < 0.0001), in the hypothesized direction, and strong (0.355). The data strongly supported the hypothesis that the quality of employee-employee relations has an influence on the intent to use remote working in the future.
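For readers more familiar with open-source tools than with AMOS, the hypothesized model can be written in lavaan-style syntax; the sketch below shows one way it might be specified with the Python semopy package. The item names are placeholders for the seven RME, three REE and five IRWF scale items, and the specification is an assumption reconstructed from the paths and correlation reported above, not the authors' AMOS setup.

```python
# Sketch of the structural model in semopy (lavaan-like syntax); item names
# rme1..rme7, ree1..ree3, irwf1..irwf5 are placeholders for the scale items.
import semopy

MODEL_DESC = """
# measurement model
RME  =~ rme1 + rme2 + rme3 + rme4 + rme5 + rme6 + rme7
REE  =~ ree1 + ree2 + ree3
IRWF =~ irwf1 + irwf2 + irwf3 + irwf4 + irwf5
# structural paths: RME -> IRWF and REE -> IRWF
IRWF ~ RME + REE
# RME and REE are allowed to covary
RME ~~ REE
"""

def fit_structural_model(data):
    model = semopy.Model(MODEL_DESC)
    model.fit(data)                         # maximum-likelihood estimation
    estimates = model.inspect()             # path coefficients and p-values
    fit_indices = semopy.calc_stats(model)  # chi-square, CFI, TLI, RMSEA, ...
    return estimates, fit_indices
```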
Discussion
Because of the lack of earlier research, the results of this study could not directly confirm previous suggestions that a switch to remote working can lead to a change in manager-employee and employee-employee relations [24,30]. In the present conditions, the manager-employee relationship and the employee-employee relationship are assessed differently. According to respondents' evaluations, manager-employee relations were poor, while relations between employees were better than average. The research confirmed that after the move to remote working there was a decrease in the amount of time devoted to manager-employee relations, and electronic-based communication was perceived very poorly by employees. Although they collected information, managers did not give feedback and did not evaluate the work of their employees [10,26]. In the opinion of employees, the worst feature of remote working was the lack of support in planning, organization, and motivation [4,5]. Moreover, managers did not obtain information about employees' problems and needs in remote working [10]. The described situation in the first phase of the pandemic, when people experienced increased anxiety about their own health and that of their families, led to a lowering of the quality of relations between employees and their immediate superiors [25]. The same behavioral factor contributed to an improvement of relations between colleagues, who not only sought to cope with anxiety about their health, but also became more flexible and supported each other by sharing knowledge about the new conditions in which they were required to work.
Our research shows that the quality of relations with colleagues is positively correlated with employees' intent to use remote working in the future and their commitment to the digital development of the organization. This is only partly in agreement with the findings reported by Sias [31], since our research did not indicate that the quality of relations with immediate superiors has an influence on the intent to use remote working in the future. In agreement with past studies [11,15,32], we were able to confirm that employee-employee relations have an impact on the further development of competencies and contribute to business growth, owing to employees' satisfaction with working together. The results confirm that competencies acquired in a friendly atmosphere, that is, where close relations with coworkers exist, are competencies that employees wish to continue using in the future [15]. Thus, relations with colleagues are of fundamental importance for the quality of work as perceived by the employee [33]. It should also be noted that having friends at work brings benefits not only to employees themselves, but also to the organization, since those employees intend to continue to use elements of remote working in the future, thus contributing to the digital development of the business [30].
The research has also confirmed that the working environment is shaped by the mutual influence of manager-employee and employee-employee relations [22,24]. Our study confirms that at the level of change management there was a lack of oversight and monitoring of employees' actions by the organization [4]; employees felt that they were left to deal with their own problems and concerns [11,12]. The lack of contact or support from immediate superiors led to the formation of strong bonds between employees. More detailed research in this area has shown that employees even created informal centers of knowledge around which they gathered. Notably, these groups might even consist of employees of different organizations facing similar problems [20,24]. Employees also underline how their acquired competencies can be beneficial for their future work, by increasing its quality and productivity [18,29].
This study established a causal relationship between coworker relationships and the future use of the remote working tools and skills that were learned. We show that relationships with the immediate supervisor and with co-workers are correlated in the same direction; that is, they describe related elements of the work environment. The study showed that the relationship with co-workers is an important factor to consider when developing competencies that employees intend to use in the future, or that will shape the future of the organization, and indicated that digitalization is a challenge that every organization faces. Through this research, we provide evidence on the direction of causality between relations at the level of immediate supervisor and employees and their impact on the perception of work in the future. The paper thus offers a new opportunity to understand the antecedents of employees' perceptions of the possibility of digitizing their activities in the future.
Conclusions
As society continues to grow and evolve, we are becoming more aware of the important impact that organizations, and we ourselves, have on our interactions and their consequences. On the assumption that companies must continue to evolve and become more socially responsible, it is important to understand which actors and mechanisms can accelerate this agenda. Our work shows that one such element is perceived relationships with co-workers. They determine what we are able to leverage in the future, and how.
The foregoing analysis has confirmed that the competencies in remote working acquired by employees and organizations during the crisis will be put to use in the future. The situation has forced both parties-employer and employee-to come to terms quickly with the new reality. However, universities need to work on change management, as relations between employees and their immediate superiors were found to be poor. It is interesting that relations between coworkers in the crisis situation were assessed as above average, also being reinforced by grassroots initiatives.
Limitations
Every research study has limitations, and the present study is not an exception. The observed relationships could be driven by unobservable firm or manager characteristics. This research lacks much unobservable information that could simultaneously affect the two relations being studied. In this analysis the working environment was considered in terms of manager-employee and employee-employee relations, but it is also affected by other factors, including remuneration and other benefits, work-life balance, productivity, and, in the case of remote working, the conditions of the home office. Thus, one of the principal limitations of this study is the incomplete representativeness of the considered elements of the working environment. Moreover, the research concerns different employers and very varied working conditions in the period studied.
Future Research
In future research, it is important to determine whether, following the difficult period of the first phase of the pandemic, attitudes to these relationships changed during subsequent periods. A frequently declared reason for the dissatisfaction of academic teachers with the switch to remote working was the excessive workload and resulting tiredness, which may have had a large impact on the low quality of relations with the immediate superior, or on the reduction of the high level of autonomy to which those teachers had been accustomed. Thus, in future studies, it will be valuable to determine how these relations are affected by the factors of independence, work-life balance, and tiredness. It would also be interesting to explore the extended working environment in future research. More importantly, future researchers should try to explore aspects of social responsibility and refer to studies depicting the different factors of the visibility of social responsibility in the organization [2]. Acknowledgments: Data collected as part of the research project "Remote Work at University", carried out by the Technical University of Lodz, the University of Lodz and the University of Aveiro, were used. | 2021-05-12T13:13:28.430Z | 2021-04-19T00:00:00.000 | {
"year": 2021,
"sha1": "6b005f06f09197541ee38dae1767e58d9f7cf4f5",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/info12040174",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "2ff0122072d2fce8c00d99dde6b1d68ca7868e2c",
"s2fieldsofstudy": [
"Education",
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
} |
59541041 | pes2o/s2orc | v3-fos-license | The Development of In-Service Mathematics Teachers' Integration of ICT in a Community of Practice: Teaching-in-Context Theory
The goal of the present research is to describe five lower-secondary in-service mathematics teachers' development of ICT (Information and Communication Technology) integration in their teaching as a result of mentoring pre-service teachers who practiced the integration of technology in their mathematics teaching. This is done through the combination of Schoenfeld's Teaching-in-Context theory and Goos' community of practice framework. To collect the research data, the researchers used semi-structured interviews with the in-service teachers at the beginning and end of the academic year. In addition, several observations of lessons taught by the in-service teachers with the assistance of pre-service teachers were recorded towards the end of the intervention. The research findings indicate that the community of practice context can serve as an escalator that supports the decision of in-service mathematics teachers to integrate ICT in their mathematics teaching. However, these decisions were conditioned by taking care of the students' formal procedural mathematical knowledge.
Introduction
In-service mathematics teachers' integration of ICT has been researched from various aspects. Here, we try to study it using the lens of community of practice and teaching in context. In previous research, we used other frameworks, such as the innovation diffusion model of Rogers [20] and the technology acceptance model of Davis [8]. The two theoretical frameworks that we use here are more related to the professional development of the mathematics teacher, as well as to the mathematics teacher's knowledge. As educators of mathematics pre-service teachers in a teachers' college and in the training schools, we consider it our role to encourage the integration of ICT in teaching among the mentoring mathematics in-service teachers. This role is founded on our conviction that ICT contributes to students' production of mathematical knowledge through supporting them in their mathematical investigations [14]. In the current paper, we describe, utilizing Schoenfeld's Teaching-in-Context theory and Goos' community of practice framework, the development of five mentoring mathematics teachers' beliefs and behaviors regarding the integration of ICT in their teaching in the context of a community of practice. For us, a community of practice is a context in which pre-service teachers, their school mentors and their college educators try to develop professionally together. This development is in-context, conditioned by the members' knowledge, beliefs and goals. Here, the development is in the integration of ICT in in-service teachers' practices.
ICT use in Teaching Math: Benefits and Obstacles Hindering this use
Different studies have described the benefits of integrating ICT in education. These benefits are categorized as benefits for students (such as contributing to their production of knowledge and their interaction) and benefits for teachers (such as promoting new pedagogical practices and enhancing teaching). Becta in [3] clarifies the contribution of ICT to students' production of knowledge, saying that ICT causes students not only to receive information, but also to provide information themselves. This could be considered an indicator that ICT supports constructivist pedagogy, where pupils use technology to explore and reach an understanding of mathematical concepts, which specifically enhances students' understanding of basic concepts [14]. Becta in [3] also points at the contribution of ICT to students' interaction, saying that ICT enhances and encourages the interaction between students, as well as between students and the technology itself. ICT also contributes to teachers' instruction. Kreijns et al. in [14] claim that ICT can enable, promote, and reinforce the use of new pedagogical practices that correspond with the educational demands of the twenty-first-century knowledge society.
In spite of these benefits, teachers are often more reluctant than willing to use information and communication technology (ICT) [14]. Keong, Horani and Daniel in [13] and Jones in [11] identified different barriers to ICT integration: (1) lack of confidence of teachers in using technology; (2) lack of time in the school schedule for projects involving ICT; (3) insufficient teacher training opportunities for ICT projects; (4) inadequate technical support and lack of resources at the school for these projects; (5) lack of teachers' knowledge about ways to integrate ICT to enhance the curriculum; (6) difficulty in integrating and using different ICT tools in a single lesson; (7) unavailability of resources at home for the students to access the necessary educational materials; and (8) the age of the teachers, where older teachers are reluctant to use new digital technologies. These barriers suggest taking care of teachers' professional development in ICT, where this professional development encourages and supports teachers in integrating technology in their teaching.
Teachers' Professional Development in ICT
Clark-Wilson et al. in [7] postulate that the process by which teachers develop their professional identity and associated practices over time is experienced as "professional development," which encompasses the full range of individual and collaborative activities in which a teacher engages inside and outside of school. These include participation in traditional courses, in-school initiatives, research projects, and professional networks.
The integration of ICT has been a key component of the agenda of teachers' professional development, where this agenda is influenced by three elements overlapping with each other to facilitate the use of ICT within schools [17]. These elements are institutions, resources, and teachers, where the main barriers preventing the implementation of technology in education are teachers' confidence, beliefs and attitudes towards the role of technology, and towards the ability to successfully implement it within schools [16,26]. Thus, it is important to examine teachers' beliefs towards the integration of ICT in their teaching as a first step towards leading them to integrate ICT in the classroom.
Teachers' Knowledge, Beliefs, Goals and Decisions
Schoenfeld's Teaching-in-Context theory [21] links teachers' decisions to their beliefs by arguing that teachers' knowledge, beliefs and goals influence their decision-making in the mathematics classroom. In a subsequent article, Schoenfeld in [22] claimed that the mathematics teacher enters the classroom with a set of goals in mind, and some plans for achieving them. Plans are chosen by the teacher based on his or her beliefs and values. Schoenfeld distinguished between two classroom situations: in the first situation, events are usual and the lesson goes according to plan; in the second situation, something unusual happens, which makes the lesson not proceed according to plan. In this case, the teacher sets a new goal and makes a new plan depending on his/her knowledge and beliefs.
Several studies agree with the assumptions of Schoenfeld's Teaching-in-Context theory. For example, Barkatsas and Malone in [2] found that teachers' beliefs affect their practices. Moreover, teachers' beliefs have a mutual relationship with their interactions with students [28]. Furthermore, teachers' knowledge and attitudes influence teachers' practices [19].
As to teachers' decisions, teachers' decision-making and lesson practice are interrelated [5,10]. Researchers have attempted to characterize the factors that affect teachers' decisions to use technology in their teaching, and how these decisions affect their practice. For example, the authors studied teachers' decisions to use mobile learning in the mathematics classroom, and found that different factors have an impact on teachers' decisions regarding the use of mobile learning in mathematics lessons. These factors include the teacher's history of using technologies in teaching; the teacher's perceptions of using technologies in teaching; the school community, including the principal and the coordinating teacher; rules regulating the use of technologies in teaching mathematics; and the division of labor regarding this use, such as who should prepare the learning activities needed for learning mathematics with the mobile phone.
Several researchers have used Schoenfeld's Teaching-in-Context theory to study teachers' practice and decision-making in the mathematics classroom (e.g., Törner et al. in [27]). We follow these researchers to study in-service teachers' integration of ICT in the mathematics classroom. Doing so, we combine the previous framework with Goos' framework of community of mathematics teaching [9].
Community of Mathematics Teaching
Goos in [9] utilizes Wenger's community of practice framework to describe the joint professional development of mathematics teachers and herself as a researcher. This framework describes a model of social learning whose assumptions differ from those made in the common theoretical frameworks of teachers' competencies and professional development [6]. This shift in perspective on competencies and professional development connects the concept of identity to the concept of practice. According to Wenger's framework, learning is developed through three modes of belonging: (a) engagement, or mutual participation in joint tasks; (b) imagination, which is the willingness to explore and try new things, then reflect on how these relate to other practices; and (c) alignment, which is the convergence of a common focus, cause, or interest. In addition, Wenger's framework assumes that the community of practice develops through three practices: mutual engagement of participants, negotiation of a joint enterprise, and development of a shared repertoire of resources for creating meaning [9,29].
Research Rationale and Goals
The present research attempts to analyze in-service teachers' integration of ICT in their teaching using two theoretical frameworks which have been utilized in mathematics education. Little research has utilized these frameworks to analyze the development of mathematics teachers' practices and affective aspects as a result of participating in a community of practice which encourages ICT integration in teaching. We will use the Teaching-in-Context framework to analyze in-service teachers' knowledge, beliefs, goals and decisions before and after the practices of the community. In addition, we will use the community of practice framework to analyze the community practices. Doing that, we try to understand how the community practices affected in-service teachers' knowledge, beliefs, goals and decisions regarding the integration of ICT in their mathematics teaching. The research results would help in conducting teachers' workshops, including teachers' communities of practice, in order to encourage them to adopt innovative practices such as ICT integration.
Methodology
In the present research, we report the development of the integration of ICT by five lower-secondary in-service mathematics teachers in a community of practice context in which they were engaged in integrating technology in mathematics teaching. These teachers accompanied our 18 pre-service teachers, who were in their third academic year majoring in teaching mathematics and computer science in intermediate schools, in the frame of practical training under the Professional Development Schools (PDS) model [1]. All of the in-service teachers have a Bachelor's degree in mathematics and a teaching certificate, with at least 10 years of seniority in teaching. The teachers had some experience in teaching mathematics using technology. Two of them had been part of an experiment led by a courseware development company for a whole year. Their feedback about this experiment was mostly negative, for they claimed that it had a destructive effect on their pupils' formal procedural knowledge in mathematics. It was a challenge for us to get these teachers to agree once more to get involved in a new experiment in this field.
Our pre-service teachers are trained specifically to use visual and dynamic tools to investigate with their students questions that encourage higher order cognitive skills, such as: 'Would the three perpendiculars in a triangle meet at the same point? If so, what could you say about the location of that point?' They would help their students phrase conjectures and discuss, using mathematical reasoning, the correctness of these conjectures. The in-service teachers are encouraged to use the visual and dynamic tools themselves in order to encourage their students' conjecturing and justification of mathematical relations.
To encourage the in-service teachers to integrate technology, the following methods were employed: watching and analyzing video clips of past years' pre-service teachers' performance when integrating technology in their mathematics teaching; preparing ICT-based mathematics lessons together with the pre-service teachers; watching the present pre-service teachers teach ICT-based mathematics lessons; teaching ICT-based mathematics lessons together with the pre-service teachers; teaching ICT-based mathematics lessons with the support of the pre-service teachers; and finally teaching ICT-based mathematics lessons on their own.
ICT Tools Utilized by the Pre-Service Teachers and later by their Mentoring Teachers
Our pre-service teachers were required, in their third year of study, to integrate ICT in their practices as mathematics teacher trainees in the training schools. In this integration, they were requested to use various ICT tools and technological pedagogical models which they had been introduced to and had discussed in the didactics courses. Examples of such tools and models include videos and presentations; digital books and digital tools; spreadsheets, applets and GeoGebra; applications for cellular phones; Wiki, Google Docs and Sites; and social networking sites such as Facebook. The mentoring teachers were encouraged by the pre-service teachers' educators to use the same ICT tools in their teaching of mathematics.
Data Collecting and Analyzing Tools
We used two data collection tools, observations and semi-structured interviews, to collect data about different issues related to the integration of ICT in the lessons of the participating in-service teachers. In more detail, we used semi-structured interviews to collect data about the participating in-service teachers' knowledge, experiences and beliefs regarding ICT integration in mathematics teaching at the beginning and end of the academic year. These interviews included questions such as: "What ICT tools do you know that could be beneficial in teaching mathematics?", "How do/would you use these tools in your teaching of mathematics?", "What experiences do you have of integrating ICT in teaching mathematics?", "What obstacles do you tackle when you try to use ICT in teaching mathematics?", and "What are your attitudes toward integrating ICT in teaching mathematics? Explain these attitudes." At the same time, we also used observations to assess the community of practice's outcomes by the end of the academic year. Our community of practice, which extended through the academic year, included the in-service teachers, the pre-service teachers and the two college educators (the first two authors of the paper). We recorded observations of five lessons, one for each in-service teacher, at the level of teaching an ICT-based mathematics lesson with the support of pre-service teachers.
To analyze the data, we used the deductive constant comparison method based on the categories derived from Schoenfeld's Teaching-in-Context theory in combination with Goos' community of practice framework. Using Schoenfeld's Teaching-in-Context theory, we studied the in-service teachers' knowledge, experiences, beliefs and goals and how they influenced their decision-making, at the beginning and end of the experiment. Using Goos' community of practice framework, we studied how the mutual engagement of the participants, the negotiation of a joint enterprise, and the development of a shared repertoire of resources for creating meaning influenced the participants' advancement of their knowledge, beliefs and goals.
Findings and Analysis
We present the in-service teachers' knowledge, experience, resources, beliefs, goals and decision-making related to ICT practice at the beginning of the academic year, then the community practices, and afterwards their knowledge, experience, resources and beliefs at the end of the academic year.
Teachers' Knowledge, Experience, Resources and Beliefs at the Beginning of the Academic Year
At the beginning of the initiative, the first two authors of the paper, as college educators of pre-service teachers in the training schools, faced some difficulties and obstacles in convincing the mentoring teachers to integrate technology in mathematics teaching. One reason for these difficulties was the moderate knowledge possessed by the participating in-service teachers of ICT and its proper use in teaching mathematics. In more detail, regarding the participating in-service teachers' technological knowledge, they reported that generally they were acquainted with some Office programs, like Word and PowerPoint, and with the Internet. This limited technological knowledge resulted in the teachers being afraid to use these technological tools. They were also confronting technological pedagogical content knowledge obstacles, such as the need for pedagogical support in integrating technology in a specific mathematical topic.
The difficulties and obstacles were also due to the limited experience of the participating teachers in integrating ICT in their mathematics teaching. For example, one teacher reported using ICT infrequently for one year, while another teacher reported using only presentations and online games.
Together with the moderate knowledge of ICT and the limited experience of the participating teachers in integrating ICT in their mathematics teaching, all the teachers were confronting logistic obstacles in the form of insufficient resources, such as insufficient technological infrastructure, crowded classrooms, and an insufficient number of mathematics lessons for teaching the mathematical topics in the middle school.
In spite of the participating teachers' limited experience in integrating ICT in their teaching and the obstacles they confronted, which discouraged them from integrating ICT in the classroom, the participating teachers had generally positive beliefs about ICT integration in mathematics teaching. These positive beliefs were expressed through the in-service teachers' descriptions of the advantages of ICT integration in mathematics teaching. The in-service teachers talked about advantages of three types. First, advantages related to the curriculum (connecting mathematics with real life phenomena, embodiment and visualization of mathematical concepts, manipulation of the mathematical objects). Second, advantages related to the learner (contribution to the learner's cognitive skills, contribution to the learner's technical skills, contribution to the learner's imagination skills, bridging the gap among students, bridging the gap between students and the teacher). Third, advantages related to the teacher (providing tools that assist the instruction, contribution to the material coverage, explaining the content). At the same time, one of the participating teachers had some negative beliefs about the integration of technology in mathematics teaching, specifically regarding its incapability to help students develop formal procedural mathematical knowledge. She described these negative beliefs by saying: "technology indeed assists in supporting the conceptual mathematical knowledge of the students, but it does not assist in supporting their formal procedural mathematical knowledge".
We argue that the positive beliefs of the mentoring teachers about the use of technology in the mathematics classroom were probably due to the general atmosphere regarding the importance of technology in education. This general atmosphere existed across schools, as well as in the school itself. The positive beliefs of the mentoring teachers were also due to their preparation as pre-service teachers, where this preparation usually includes at least two courses in ICT integration in teaching. At the same time, one teacher had negative beliefs about the use of technology in the mathematics classroom due to her previous experiences.
Goals and Decision-Making related to ICT Practice at the Beginning of the Academic Year
The positive beliefs mentioned above were not enough for the in-service teachers' goals to include ICT integration in their teaching. We think that the in-service teachers were reluctant to integrate technology in their teaching due to their limited technological knowledge and/or limited technological pedagogical content knowledge. In addition, this reluctance was, for one teacher, due to previous experiences with ICT integration that had resulted in a loss of formal procedural mathematical skills among her students. The in-service teacher who had negative beliefs about ICT use in teaching mathematics stated: "I had to compensate for the insufficient formal procedural knowledge in the following year". It could be said that for this in-service mathematics teacher, developing formal procedural mathematical skills was an important goal to achieve. All the in-service teachers who participated in the research decided not to integrate technology in their teaching because of the previously mentioned factors.
What is interesting in the previous description of the factors that affected the participating teachers' decisions related to the integration of technology in their mathematics teaching is that the positive beliefs of the participating teachers about the use of technology did not sufficiently affect their goal to integrate ICT in teaching mathematics; this goal was instead strongly affected by the negative beliefs related to previous experiences, as well as by other teacher factors related to moderate knowledge and insufficient resources, as mentioned above.
Community of Practices
The five participating teachers agreed to permit the pre-service teachers' integration of ICT in their teaching. Nevertheless, they were afraid to integrate ICT in their own teaching themselves. In the frame of the professional development school, we, as pre-service teacher educators, guaranteed the collaboration of our pre-service teachers with them and promised to accompany them in their integration of ICT in their teaching. This negotiation convinced them to accompany our pre-service teachers in their integration of ICT in mathematics teaching, which resulted in their decision to participate in the initiative. In the interview at the end of the academic year, the mentoring teachers emphasized the importance of the support given by the pre-service teachers' college educators to them and to the pre-service teachers. One of the in-service teachers said: "this support gave us confidence to engage in using ICT tools in our teaching of mathematics. We had expert people to whom we could turn when facing a problem or a difficulty". Overall, it could be said that all the participating teachers, through collaboration that included negotiation, were engaged in the initiative, i.e. using ICT in mathematics teaching.
The pre-service teachers were an integral part of the negotiation that characterized the learning process of the community of practice that included the in-service teachers. These pre-service teachers provided resources of three types for the mentoring teachers: technical support (for example, helping in drawing functions using GeoGebra), technological pedagogical content knowledge support (for example, helping in building technological pedagogical models (such as applets) using GeoGebra for teaching specific mathematical topics), and affective support (for example, comforting the in-service teachers by showing understanding of the difficulties which the mentoring teachers encountered). In addition, the mentoring teachers' experiences also included tackling, together with the pre-service teachers, logistic obstacles while integrating technology in their teaching, but they looked at these obstacles as a natural part of engaging with ICT in teaching. One of the in-service teachers said: "Sometimes, we had logistic difficulties, but the pre-service teachers were there to overcome these difficulties. The assistance from the pre-service teachers made us decide to continue integrating technology in our teaching". Another in-service teacher said: "It was difficult for me to teach mathematics with technology. Watching the pre-service teachers doing that encouraged me to try myself integrating technology in my lessons". A third in-service teacher said: "together with the pre-service teachers, we not only prepared mathematical lesson plans that integrate technology for a specific lesson, but we prepared lesson plans for whole units in the mathematics book, together with the applets needed for teaching each lesson in each unit. This would encourage us to integrate technology in our teaching not only this year, but in the coming years". This is a good example of developing a shared repertoire of resources.
In addition to the above, the college educators held meetings every two weeks with the pre-service teachers alone, with the in-service teachers alone, and with both groups together. In these meetings, the participants discussed issues related to integrating technology in the mathematics classroom in general and in specific topics in particular. This mutual engagement of the community members, together with the mutual engagement in lesson planning and teaching, especially through the collaboration of the pre-service and in-service teachers, facilitated the in-service teachers' integration of technology in their teaching. One important issue, which was repeatedly discussed and investigated, is the design of a suitable applet using GeoGebra to investigate and discover specific mathematical relations between given mathematical objects.
To assess our methods of intervention, including the community practices based on engagement and negotiation, we recorded observations of five lessons, one for each in-service teacher, at the level of teaching an ICT-based mathematics lesson with the support of pre-service teachers. These observations pointed at new actions in the in-service teachers' teaching and interactions with their students. These actions and interactions, which accumulated from our previous methods of intervention, could clarify their statements in the interview at the end of the academic year, and explain the change in their beliefs, goals and decisions regarding the integration of ICT in their teaching. The following are some actions and interactions that were observed in these lessons or were stated by the mentoring teachers in the interviews at the end of the academic year.
Teachers' Knowledge, Experience and Beliefs at the end of the Academic Year
The mentoring teachers reported that their experience of accompanying the pre-service teachers in integrating technology in teaching mathematics added to their knowledge regarding technology itself, as well as its integration, and encouraged them to engage in the integration of technology into their own teaching. One of the in-service teachers said: "The pre-service teachers' accompanying was crucial for us. They made our integration of technology into our own teaching easier and, in a way, enjoyable". Another in-service teacher said: "the pre-service teachers helped us improve our technical skills, which assisted our ICT use in teaching". They also reported that the pre-service teachers helped them get familiar with new ICT tools and internet sites, as well as how to use them beneficially. One of the in-service teachers said: "I utilized GeoGebra with the help of the pre-service teachers to let the students investigate and discover the effect of changing the "a" parameter in the linear function (f(x)=ax+b) on the slope of its straight line. I am sure that these options in GeoGebra could be applied to discovering other mathematical concepts and relations". It could be argued that the in-service teachers decided to use GeoGebra applets in their teaching as a tool for dynamic illustration, for their students' exploration of mathematical concepts and relations, and for carrying out interactive discussion with their students. This argument also has its merits in the observations of the in-service teachers' lessons. The in-service teachers concluded that, as a result, they improved their knowledge of how to integrate ICT in teaching mathematics.
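As a rough analogue of the applet described in this quote, the short Python sketch below plots f(x) = ax + b for several values of a, so that the effect of the parameter on the slope can be seen at a glance. It is only an illustration of the kind of exploration the teachers carried out in GeoGebra, not a tool used in the study.

```python
# Illustrative sketch: plot f(x) = ax + b for several values of a to show
# how the parameter a controls the slope (a stand-in for the GeoGebra applet).
import numpy as np
import matplotlib.pyplot as plt

def plot_slope_family(b: float = 1.0, a_values=(-2, -0.5, 0.5, 2)):
    x = np.linspace(-5, 5, 100)
    for a in a_values:
        plt.plot(x, a * x + b, label=f"f(x) = {a}x + {b}")
    plt.axhline(0, color="gray", linewidth=0.5)  # draw the x-axis
    plt.axvline(0, color="gray", linewidth=0.5)  # draw the y-axis
    plt.legend()
    plt.title("Effect of the parameter a on the slope of f(x) = ax + b")
    plt.show()

plot_slope_family()
```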
At the end of the experiment, the mentoring teachers had the same positive beliefs about integrating technology in the mathematics classroom, but now these beliefs seemed to be founded on their own experiences and not only on their previous studies and the general atmosphere regarding the importance of ICT in education, especially in mathematics education. Moreover, the mentoring teachers became more knowledgeable regarding what to do with ICT in the mathematics lessons.
Goals and Decision-Making related to ICT Integration at the end of the Academic Year
Regarding their goal to use ICT in their teaching, the mentoring teachers expressed their decision to integrate ICT in their future teaching of mathematics, but not all the time. For example, one mentoring teacher said that GeoGebra is best for students to investigate and discover mathematical relations, but pencil and paper are needed for the students to assimilate and improve their formal procedural technical mathematical skills. Another mentoring teacher said that using applets and GeoGebra visually and dynamically to facilitate mathematical discourse and investigation can serve students' investigation of mathematical concepts and relations, but justifying the relations needs to be done mainly without GeoGebra. It should be stressed that, in spite of the restrictions that the in-service teachers put on the use of ICT in the mathematics classroom, they talked about their decision to use ICT as a main tool that supports students' mathematical learning. For example, one in-service teacher said: "My attitudes towards ICT use in teaching changed positively as a result of this experiment". Another teacher stated: "It is difficult now to imagine any mathematics classroom without the use of technology".
Discussion
In the present research, we attempted to analyze the development of in-service mathematics teachers' integration of ICT as a result of being part of a community of practice that worked on that integration. Being part of a community of practice, the participating in-service mathematics teachers substantiated, with classroom evidence, their starting beliefs regarding the integration of technology in the mathematics classroom. In more detail, the pre-service teachers' initiatives and experiences in teaching mathematics with ICT constituted the first phase of negotiation between the researchers and the in-service teachers; i.e. the negotiation started as a result of the pre-service teachers' initiatives and experiences, which constituted resources for the in-service teachers that they could rely upon to engage in utilizing technology in mathematics teaching [12].
The second phase of negotiation in the community of practice regarding utilizing technology in the mathematics classroom came from the in-service teachers' own experiences of integrating ICT in their mathematics teaching, where this integration was enabled by the pre-service teachers' support. The negotiation, and thus the development of the in-service mathematics teachers' engagement and practices regarding the integration of ICT in their teaching, would probably not have happened without the community of practice, for it helped create a community of professionals that encouraged certain teaching behaviors and substantiated beliefs regarding these behaviors. The community of practice was optimal for the in-service teachers to substantiate their positive beliefs about the integration of ICT in their teaching. This is due to the fact that in this context, it was possible for the in-service teachers to experiment with teaching behaviors with the support of the pre-service teachers. This facilitated their decision whether to adopt these behaviors or not. The experimentation supported by the pre-service teachers negotiated and improved their initial technological pedagogical content knowledge. This second phase of negotiation encouraged the in-service teachers to further use ICT in their teaching of mathematics. It could be claimed that the community of practice of which the in-service teachers, the pre-service teachers and the researchers were part was effective because it included the three components of a community of practice, as described in [9]: engagement, negotiation and utilizing/developing a shared repertoire of resources. This shared repertoire of resources included different objects (such as video clips of past years' pre-service teachers' utilization of technology in their mathematics teaching), as well as different processes (such as the collaborative preparation of ICT-based mathematics lessons and teaching units by the in-service and the pre-service teachers).
The third phase of negotiation started when the in-service teachers began to teach using ICT on their own. The participating in-service teachers reported that teaching on their own encouraged them to set new goals and make new plans regarding their integration of ICT in their mathematics teaching. In fact, the pre-service teachers' practices, as well as the mentoring teachers' own practices, regarding using technology in teaching mathematics encouraged the mentoring teachers to set new goals and make new plans that included utilizing ICT tools in their teaching. The ICT scene could be considered the unusual scene that encouraged the mentoring teachers to set new goals and make new plans and decisions regarding the use of technology in their teaching of mathematics [21,22].
In addition to the arguments above, it could be claimed that the community of practice provided for the in-service teachers a context in which they developed a shared vision among all the participants in the community of practice, which is needed for the success of the practice. This shared vision was there from the beginning, but the practices of the community strengthened it.
This positive influence of the community of practice on the educational scene has been described by several researchers. For example, Price in [18] found that the community of practice had a positive influence on students' learning, where it enabled them to build on the contributions of others, develop a shared understanding of and appreciation for the mathematical resources used, and begin to see themselves as mathematicians. Steele in [25] reported that a community of practice, which was formed in a professional development course, had a positive influence on the induction of a first-year mathematics teacher into her school. Furthermore, Shúilleabháin in [24] found that, as a result of participating in a community of practice that worked through a lesson study context, the participating teachers' content knowledge, teaching approaches, pedagogical content knowledge and awareness of student thinking were enhanced, which resulted in increased confidence in their knowledge of the mathematics curriculum. In addition, Boote in [4] reported that the community of practice, in the frame of a professional development school, positively influenced mathematics pre-service teachers' emerging pedagogical content knowledge. In the present research, the findings indicate that the community of practice also has a positive influence on mentoring in-service mathematics teachers' practices, specifically on their decision to practice ICT integration in mathematics teaching.
Conclusion and Recommendations
The community of the in-service mentoring teachers and pre-service teachers, together with the college educators, utilized a repertoire of resources, such as the video clips of integrating ICT in mathematics teaching made by previous pre-service teachers. In addition, they were engaged with and negotiated together the ICT integration in mathematics teaching in the training school. This community of practice improved the technological pedagogical content knowledge of the in-service mathematics teachers and their experience in using ICT in mathematics teaching. At the same time, the community practices strengthened their positive beliefs about the integration of ICT in mathematics teaching. This knowledge, these experiences and these beliefs led the in-service teachers to set new goals and decisions related to the active integration of ICT in mathematics teaching [21]. These decisions were conditioned by taking care of the students' formal procedural mathematical knowledge.
Based on the research results, we recommend that teacher-training colleges utilize the community of practice framework in order to improve pre-service teachers', as well as mentoring in-service teachers', technological pedagogical content knowledge, together with their beliefs and, as a result, their decisions regarding implementing new pioneering pedagogies, such as innovative ICT use, in the mathematics classroom.
Furthermore, we recommend utilizing different frameworks to analyze the development of teachers' innovative practices. One of these frameworks is the commognitive framework suggested by Sfard and Prusak in [23]. Using this framework, we could say that the in-service teachers were part of a community of practice where their identity moved towards closing the gap between their actual identity and that designated by the researchers through experimenting with the pre-service teachers. It seems that the identity of the in-service teachers reached an equilibrium regarding their use of ICT in their teaching of mathematics. This equilibrium was between their actual identity before the experiment and that designated by the researchers. Closing the gap, or reaching an equilibrium, between the actual and the designated identity of teachers indicates that the in-service teachers underwent professional development; in other words, learning [23]. This explanation is but an initial one, and more research, using the commognitive framework, is needed. | 2019-02-03T14:03:02.241Z | 2019-01-17T00:00:00.000 | {
"year": 2019,
"sha1": "43dc7b75adb543a9b88687bb47358fc9abf2961f",
"oa_license": "CCBY",
"oa_url": "https://online-journals.org/index.php/i-jet/article/download/9134/5408",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "43dc7b75adb543a9b88687bb47358fc9abf2961f",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
213280376 | pes2o/s2orc | v3-fos-license | Are Malaysian University Students “MOOCs-Ready”?
Recently, the use of technology in teaching and learning sessions has had a considerable impact. The Massive Open Online Course (MOOC) is one of the platforms and mediums for online learning practiced by most universities and colleges. There have been many studies on the usefulness of MOOC in teaching and learning. However, there is a lack of literature on Malaysian students' readiness towards the use and application of MOOC. Due to this, this study intended to investigate Malaysian students' readiness towards the use of MOOC at the tertiary level of education. The respondents of this study were twenty university students from various public and private universities. The findings show that students are prepared and able to utilize MOOC in their learning. They are efficient in using the internet, computers and online technologies to communicate, work and perform tasks. This paper aims to assist current and future educators to manage a productive educational setting.
Introduction
With the rise of the technological and globalisation era, more universities are stepping up their education systems by adopting online courses as part of the teaching and learning process (Gunuc & Babacan, 2018). The technological environment has now been making its waves into the education field, replacing the traditional method. The existence of digital native students who prefer a technological environment has driven educators to make use of technology as part of their teaching and learning sessions. The Massive Open Online Course (MOOC) is an online platform where everyone can enroll because the courses offered are free. MOOC has also gained popularity as a tool for learning language (Yunus, 2018).
Due to this, Massive Open Online Courses (MOOCs) are well known by most universities and colleges nowadays. In line with the 4th Industrial Revolution and 21st Century Learning, the Ministry of Higher Education Malaysia has launched the Malaysia MOOC initiative with another 20 public universities (Yunus et al., 2019). Past studies have reported on the usefulness of MOOC and its relation to today's education. There has also been much research on the use of the Massive Open Online Course (MOOC) among top universities all over the world. However, the researchers believe that there is a lack of literature on the readiness for the use of MOOC among students at the tertiary level of education. In conjunction with this, the researchers intend to investigate and measure tertiary-level students' readiness towards the use of the Massive Open Online Course, also known as MOOC.
Literature Review
This study is supported by the transactional distance learning theory. The transactional distance learning theory was first proposed by Moore (1991) and has been viewed as a major contribution to the distance education field. This theory holds that distance is a pedagogical concept between the learners, their peers and the educators or instructors. The theory is believed to be closely related to the use of MOOC as an online platform and medium for teaching and learning. It also holds that the structure of the materials and the course itself plays an important role, as it relates to the learners' autonomy.
Today's digital native students are born with the ability to become tech-savvy. Today's generation of students is considered generation Z, as they are so used to the existence of technology in their environment, making them very tech-friendly. Tech-savvy is defined by the Cambridge Dictionary as knowing a lot about modern technology, especially computers. On the other hand, MOOC-savvy, in simpler words, is when a student or a person knows a lot about MOOC and the use of MOOC. Gunuc and Babacan (2018) in their study believed that the use of information and communication technologies (ICTs) such as mobile phones, tablets and laptops helps to increase students' engagement, makes the lessons more interesting and meaningful, makes learning more effective, and makes students more motivated. Yunus et al. (2019) found that the use of MOOC as part of the teaching and learning session somehow helped learners develop their communication skills, due to the characteristics of MOOC that enable learners to interact with other people and to be more autonomous in their learning. They also found that MOOC in a way acts as a booster for learners' self-esteem and critical thinking skills. Badusah et al. (2016) previously conducted a study on the design of learning materials and tasks of a Massive Open Online Course (MOOC) using trans-disciplinary learning, where they found that most learners preferred the use of live action videos and animations, as well as humor elements in learning materials. They also revealed that most learners preferred collaborative learning tasks rather than individual ones when it comes to an online learning platform involving a Massive Open Online Course (MOOC). The findings have shown that there are many factors to be considered in constructing a course in Massive Open Online Courses. These findings were later supported by Nordin et al. (2016) in their study on the factors for the development of learning content and tasks for MOOCs in an Asian context. The same findings were gathered, and a few factors, such as the type of learning materials provided, the social settings of learning tasks and the communication style in video lectures, were found to be equally important in affecting learners' engagement towards a particular course in MOOC.
Methodology
This is a mixed-method study. A questionnaire was used as the instrument. The questionnaire was adopted from the "MOOC Readiness Questionnaire" by Prof. Dr. Mohammad Amin Embi for the National Higher Education Strategic Plan. The questionnaire was divided into two sections, namely section A, which covers the demographics and background of the respondents, and section B, which covers the respondents' readiness towards Massive Open Online Courses (MOOCs). The questionnaire was distributed using random sampling. The samples of this study were students from public and private universities in Malaysia. The data were collected and analyzed with descriptive statistics using frequency and percentage counts.
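The descriptive analysis itself amounts to frequency and percentage counts per item. The short sketch below shows how such counts could be produced in Python; the responses used are hypothetical and serve only to illustrate the computation, not to reproduce the study's data.

```python
# Sketch of the frequency and percentage counts used in the descriptive
# analysis; the responses below are hypothetical, not the study's data.
import pandas as pd

responses = pd.Series(["Agree"] * 18 + ["Disagree"] * 2, name="Q2")

frequency = responses.value_counts()
percentage = (responses.value_counts(normalize=True) * 100).round(1)

summary = pd.DataFrame({"frequency": frequency, "percentage": percentage})
print(summary)  # e.g. Agree: 18 (90.0%), Disagree: 2 (10.0%)
```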
Demographic Profile
The participants of this study comprised 60% undergraduate students and 40% postgraduate students. The majority of participants (60%) were female and 40% were male. A total of twenty students took part in this research, drawn from five public institutions and one private institution. The proportions of participants by institution are IIUM (10%), UKM (35%), UM (10%), UNISEL (20%), UPSI (15%) and UITM (10%).
MOOCs Readiness
Q1: I have the basic skills for finding my way around the Internet (e.g., using search engines).
Based on Figure 1, it is clearly seen that 100% of the participants agreed to have the basic skills in using the Internet.
Q2: I think that I would be comfortable using a computer several times a week to participate in a course.
Based on Figure 2, we can see that 90% of the participants are comfortable using a computer several times a week to participate in an online course. Only 10% of the participants stated differently.
Q3: I think that I would be able to communicate effectively with others using online technologies (e.g., chat).
Based on Figure 3, we can see that all the participants agreed to have the ability to communicate effectively with others using online technologies.
Q4: I think that I would be able to express myself clearly through my writing (e.g., emotions, humor).
Based on Figure 4, it is clearly seen that all participants agreed that they can express themselves, such as their emotions, humor and expressions, clearly through their writing.
Q5: I think that I would be able to ask questions and make comments in clear writing.
Based on the pie chart above (Figure 5), 90% of the participants agreed that they would be able to ask questions and make comments in clear writing. Only 10% of them stated otherwise.
Discussion
Students nowadays, that is, Generation Z students, are advanced users of technology. They have known how to use mobile phones from a very young age. In fact, these students have never experienced a world without the Internet, so they can explore online applications and platforms without a formal introduction. They have also been using the Internet to access a great deal of information online, and they are capable of ingesting huge amounts of new data consistently (Roseberry, 2017). In addition, Gen Z students can spend several hours every day online. The implementation of MOOCs in Malaysia has become easier to carry out day by day, as the education field has constantly moved towards technological advances in classrooms, especially the use of online media to deliver information to students. In order to build an effective educational setting, novice and veteran educators alike must consider the needs of their students in this age. Successful educators regularly contribute new ways to comprehend, instruct, and support their students' growth. Thus, "keeping it new" and "creative" by applying a MOOC platform is an ongoing endeavor. The ongoing development and growth of technology has brought new patterns and applications to the education field (Rusli & Hashim, 2018). Instead of ignoring Generation Z students' learning characteristics, faculty members in higher institutions ought to consider the generational differences that may hinder or help teaching and learning to be dynamic and motivational (Mohr, 2017).
Moreover, these students are used to utilizing their gadgets to assist with their assignments and tasks, and they routinely use search engines to look for information. These students have their own mobile phones and computer-based devices such as laptops or tablets, and almost all of them have their own social media accounts, especially on Facebook. The students are also comfortable using online resources or platforms that are designed to improve their learning skills. They are happy to utilize technological devices because they have had sufficient exposure to ICT tools, which lets them navigate seamlessly through the resources given to them (Ganapathy & Seetharam, 2016). Thus, the utilization of MOOCs in higher institutions will not be an issue for these students, as they are already on par with the technological tools needed.
Apart from that, ever since the turn of the 21st century, Malaysian higher institutions have been changing in accordance with their issues and needs. They have redesigned teaching and learning approaches, installed WiFi connectivity and updated the curriculum with experiential learning and capstone courses. Generation Z students have developed many skills through their online activity, such as delegating and collaborating with their peers on group tasks and discussions. They are comfortable using online media such as video calls, chat rooms, comments, statuses and posts. For them, learning does not occur only in libraries and lecture halls but also through online sites and pages (Hashim et al., 2017). Implementing MOOCs would be an ideal way to embrace the goal of collaborative teaching and learning in Malaysia.
Meanwhile, in reality, many students do not like to write. Writing tasks in a conventional classroom tend to be boring and uninteresting, and the fun aspect of learning is absent. However, when technology such as a MOOC is used to instill a love of writing, students can practice through textual communication. The number of text messages, status updates and instant messages produced by students is massive and practically limitless, so the teachers' challenge is to channel that same interest into formal writing. MOOCs can prepare students to write shorter texts and promote writing collaborations. Students can make connections with other writers online, and MOOCs can provide writing projects with extensive writing opportunities. Even if students enroll in a course for only a short amount of time, they may still accomplish more writing than they usually would (Comer & White, 2016).
Besides that, there are many online applications that can help students express themselves clearly in writing, such as spell checkers, grammar checkers, online dictionaries and plagiarism checkers. These applications not only help students acquire basic knowledge but also enable them to channel their thoughts, inner emotions and humor creatively into their writing. Language skills at this age cannot be learned in isolation; for example, learning writing skills will always involve reading skills or speaking and listening skills (Blake, 2016). Using technology platforms such as MOOCs can enable teachers to secure positive learning outcomes for their students.
Conclusion
MOOC is an open, online-based platform that offers free courses for students. It allows universities, colleges and educators to design their preferred content to ease their teaching processes. In this paper, the researchers investigated the readiness of students at Malaysian higher institutions to use MOOCs. Based on the findings, it can be seen that these students are able and ready to utilize MOOCs in their learning: they are able to use the Internet effectively, are comfortable using computers, are able to communicate using online technologies, and are able to express themselves in writing. Future research could enrich the current knowledge by investigating the factors contributing to MOOC readiness in Malaysia. In sum, MOOCs can be a great solution for educators to boost and vary their teaching techniques. All public and private institutions should find a way to implement MOOCs in their curricula in order to achieve the desired learning goals. | 2019-11-28T12:21:54.844Z | 2019-11-26T00:00:00.000 | {
"year": 2019,
"sha1": "4b154c2ad53cd2b260dd5d6c0fe3db78de25a1b8",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=96638",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "618bf0eb745504e071aaf5c006b75058c7297f23",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
53484532 | pes2o/s2orc | v3-fos-license | Photoionization microscopy in terms of local frame transformation theory
Two-photon ionization of an alkali-metal atom in the presence of a uniform electric field is investigated using a standardized form of local frame transformation and generalized quantum defect theory. The relevant long-range quantum defect parameters in the combined Coulombic plus Stark potential are calculated with eigenchannel R-matrix theory applied in the downstream parabolic coordinate $\eta$. The present formulation permits us to express the corresponding microscopy observables in terms of the local frame transformation, and it gives a critical test of the accuracy of the Harmin-Fano theory, permitting a systematic investigation of the claims presented in Zhao {\it et al.} [Phys. Rev. A 86, 053413 (2012)].
I. INTRODUCTION
The photoabsorption spectrum of an alkali-metal atom in the presence of a uniform electric field constitutes a fundamental testbed for atomic physics. Through the past few decades, study of this class of systems has provided key insights into their structure and chemical properties, as well as the nonperturbative effect of an applied external field. The response of the lower energy eigenstates of any alkali-metal atom to a laboratory strength electric field is perturbative and can be described in terms of the static atomic polarizability. For states high in the Rydberg series or in the ionization continuum, however, even a modest field strength nonperturbatively modifies the nature of the energy eigenstates.
In fact this problem touches on fundamental issues concerning the description of nonseparable quantum mechanical systems. The Stark effect of alkali-metal atoms is one of the simpler prototypes of such systems, because the short-distance electron motion is nearly separable in spherical coordinates while the intermediate- and long-distance motion is almost exactly separable in parabolic coordinates. The evolution of a quantum electron wave function from small to large distances thus involves a transformation, termed a local frame transformation (LFT) because it is derived in a localized region of space. (The extent of this region is typically limited to within 10-20 a.u. between the electron and the nucleus.) When one encounters a problem of nonrelativistic quantum mechanics where the Schrödinger equation is nonseparable, one usually anticipates that the system will require a complicated numerical treatment. This is the first and most common approach even if the nonseparability is limited to only two coordinates, as is the case with the nonhydrogenic Stark effect, since the azimuthal angle φ is separable for this problem (aside from the comparatively weak spin-orbit coupling). Thus it was a major breakthrough when papers by Fano [1] and Harmin [2][3][4] showed in the early 1980s how the problem can be solved analytically and almost completely using ideas based on frame transformation theory and quantum defect theory. Since that body of work introduced the LFT method, it has been generalized to other systems that are similar in having an intermediate region of space where the wave equation is separable in both the small- and large-distance coordinate systems. Example applications include diverse systems such as negative ion photodetachment in either an external magnetic [5] or electric field [6][7][8][9], and confinement-induced resonances in ultracold atom-atom scattering [10][11][12][13] or dipole-dipole collisions [14].
The LFT theory has been demonstrated by now to have great effectiveness in reproducing experimental spectra and collision properties as well as accurate theoretical results derived using other methods including "brute force" computations [15]. The deviations between highly accurate R-matrix calculations and the LFT method were found in Ref. [15] to be around 0.1% for resonance positions in the 7 Li Stark effect. The LFT is evolving as a general tool that can solve this class of nonseparable quantum mechanical problems, but it must be kept in mind that it is an approximate theory. It is therefore desirable to quantify the approximations made, in order to understand its regimes of applicability and where it is likely to fail.
The goal of the present study is to provide a critical assessment of the accuracy of the LFT, concentrating in particular on observables related to photoionization microscopy. The experiments in this field [16][17][18][19] have focused on the theoretical proposal that the probability distribution of an ejected slow continuum electron can be measured on a position-sensitive detector at a large distance from the nucleus [20][21][22][23].
While the Harmin-Fano LFT theory has been shown in the 1980s and 1990s to describe the total photoabsorption Stark spectra in one-electron [2,3,15] and twoelectron [24][25][26] Rydberg states, examination of a differential observable such as the photodetachment [27] or photoionization [28] microscopy probability distribution should in principle yield a sharper test of the LFT. Indeed, a recent study by Zhao, Fabrikant, Du, and Bordas [29] identifies noticeable discrepancies between Harmin's LFT Stark effect theory and presumably more accurate coupled-channel calculations. Particularly in view of the extended applications of LFT theory to diverse physical contexts, such as the confinement-induced resonance systems noted above, a deeper understanding of the strengths and limitations of the LFT is desirable.
In this paper we employ R-matrix theory in a fully quantal implementation of the Harmin local frame transformation, instead of relying on semiclassical wave mechanics as he did in Refs. [2][3][4]. This allows us to disentangle errors associated with the WKB approximation from those deriving from the LFT approximation itself. For the most part this causes only small differences from the original WKB treatment consistent with Ref. [15], but it is occasionally significant, for instance for the resonant states located very close to the top of the Stark barrier. Another goal of this study is to standardize the local frame transformation theory to fully specify the asymptotic form of the wave function which is needed to describe other observables such as the spatial distribution function (differential cross section) that is measured in photoionization microscopy.
We also revisit the interconnection of the irregular solutions from spherical to parabolic coordinates through the matching of the spherical and parabolic Green's functions in the small distance range where the electric field is far weaker than the Coulomb interaction. This allows us to re-examine the way the irregular solutions are specified in the Fano-Harmin LFT, which is at the heart of the LFT method but one of the main focal points of criticism leveled by Zhao et al. [29].
Because Zhao et al. [29] raise serious criticisms of the LFT theory, it is important to further test their claims of error and their interpretation of the sources of error. Their contentions can be summarized as follows: (i) The Harmin-Fano LFT quite accurately describes the total photoionization cross section, but it has significant errors in its prediction of the differential cross section that would be measured in a photoionization microscopy experiment. This is deduced by comparing the results from the approximate LFT with a numerical calculation that those authors regard as essentially exact.
(ii) The errors are greatest when the atomic quantum defects are large, and almost negligible for an atom like hydrogen which has vanishing quantum defects. They then present evidence that they have identified the source of those errors in the LFT theory, namely the procedure first identified by Fano that predicts how the irregular spherical solution evolves at large distances into parabolic coordinate solutions. Their calculations are claimed to suggest that the local frame transformation of the solution regular at the origin from spherical to parabolic coordinates is correctly described by the LFT, but the irregular solution transformation is incorrect.
One of our major conclusions from our exploration of the Ref. [29] claimed problems with the Harmin-Fano LFT is that both claims are erroneous; their incorrect conclusions apparently resulted from their insufficient attention to detail in their numerical calculations. Specifically, our calculations for the photoionization microscopy of Na atoms ionized via a two photon process in π polarized laser fields do not exhibit the large and qualitative inaccuracies which were mentioned in Ref. [29]; for the same cases studied by Zhao et al., we obtain excellent agreement between the approximate LFT theory and our virtually exact numerical calculations. Nevertheless some minor discrepancies are noted which may indicate minor inaccuracies of the local frame transformation theory.
This paper is organized as follows: Section II focuses on the local frame transformation theory of the Stark effect and present a general discussion of the physical content of the theory, including a description of the relevant mappings of the regular and irregular solutions of the Coulomb and Stark-Coulomb Schrödinger equation. Section III reformulates the local frame transformation theory properly, including a description of the asymptotic electron wave function. In addition, this Section defines all of the relevant scattering observables. Section IV discusses a numerical implementation based on a two-surface implementation of the eigenchannel R-matrix theory. This toolkit permits us to perform accurate quantal calculations in terms of the local frame transformation theory, without relying on the semiclassical wave mechanics adopted in Harmin's implementation. Section V is devoted to discussion of our recent finding in comparison with the conclusions of Ref. [29]. Finally, Section VI summarizes and concludes our analysis.
II. LOCAL FRAME TRANSFORMATION THEORY OF THE STARK EFFECT
This section reviews the local frame transformation theory (LFT) for the non-hydrogenic Stark effect, utilizing the same nomenclature introduced by Harmin [2][3][4]. The crucial parts of the corresponding theory are highlighted developing its standardized formulation.
A. General considerations
In the case of alkali-metal atoms at small length scales the impact of the alkali-metal ion core on the motion of the valence electron outside the core can be described effectively by a phase-shifted radial wave function: where the Y ℓm (θ, φ) are the spherical harmonic functions of orbital angular momentum ℓ and projection m. r 0 indicates the effective radius of the core, δ ℓ denotes the phase that the electron acquires due to the alkali-metal ion core. These phases are associated with the quantum defect parameters, µ ℓ , according to the relation δ ℓ = πµ ℓ . The pair of {f, g} wave functions designate the regular and irregular Coulomb ones respectively whose Wronskian is W [f, g] = 2/π. We remark that this effective radius r 0 is placed close to the origin where the Coulomb field prevails over the external electric field. Therefore, the effect on the phases δ ℓ from the external field can be neglected. Note that atomic units are employed everywhere, otherwise is explicitly stated.
At distances r ≫ r 0 the outermost electron of the nonhydorgenic atom is in the presence of a homogeneous static electric field oriented in the z-direction. The separability of the center-of-mass and relative degrees of freedom permits us to describe all the relevant physics by the following Schrödinger equation in the relative frame of reference: where F indicates the strength of the electric field, r corresponds to the interparticle distance and ǫ is the total colliding energy. Note that Eq. (2) is invariant under rotations around the polarization axis, namely the corresponding azimuthal quantum number m is a good one. In contrast, the total orbital angular momentum is not conserved, which shows up as a coupling among different ℓ states. The latter challenge, however, can be circumvented by employing a coordinate transformation which results in a fully separable Schrödinger equation. Hence, in parabolic coordinates ξ = r + z, η = r − z and φ = tan −1 (x/y), Eq. (2) reads: where β is the effective charge and ǫ, F are the energy and the field strength in atomic units. We remark that Eq. (3) in the ξ degrees of freedom describes the bounded motion of the electron since as ξ → ∞ the term with the electric field steadily increases. This means that the Ξ wave function vanishes as ξ → ∞ for every energy ǫ at particular values of the effective charge β. Thus, Eq. (3) can be regarded as a generalized eigenvalue equation where for each quantized β ≡ β n1 the Ξ ǫF βm ≡ Ξ ǫF n1m wave function possesses n 1 nodes. In this case the wave functions Ξ ǫF n1m (ξ) possess the following properties: • Near the origin Ξ ǫF n1m behaves as: where N F ξ is an energy-field dependent amplitude and must be determined numerically in general.
• The wave function Ξ ǫF n1m obeys the following normalization condition: On the other hand Eq. (4) describes solely the motion of the electron in the η degree of freedom which is unbounded. As η → ∞ the term with the electric field steadily decreases which in combination with the coulomb potential forms a barrier that often has a local maximum. Hence, for specific values of energy, field strength and effective charge the corresponding wave function Υ ǫF βm ≡ Υ ǫF n1m propagates either above or below the barrier local maximum where the states n 1 define asymptotic channels for the scattering wave function in the η degrees of freedom. Note that for β n1 > 1, the Coulomb term in Eq. (4) becomes repulsive and therefore no barrier formation occurs. Since Eq. (4) is associated with the unbounded motion of the electron it possesses two solutions, namely the regular Υ ǫF n1m (η) and the irregular onesῩ ǫF n1m (η). This set of solutions has the following properties: • Close to the origin and before the barrier the irregular solutionsῩ ǫF n1m (η) lag by π/2 the regular ones, namely Υ ǫF n1m (η). Note that their normalization follows Harmin's definition [2] and is clarified below.
• Near the origin the regular solutions vanish according to the relation: η is an energy-and field-dependent amplitude and must be determined numerically in general.
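As a small practical aside, the change to parabolic coordinates introduced above (ξ = r + z, η = r − z, φ = tan⁻¹(x/y)) can be scripted directly; the helper below is a minimal sketch, and the use of arctan2 to resolve the quadrant of φ is our own implementation choice rather than something prescribed by the text.

```python
# Minimal sketch: Cartesian -> parabolic-cylindrical coordinates, following the
# definitions xi = r + z, eta = r - z, phi = tan^-1(x/y) quoted in the text.
import numpy as np

def cartesian_to_parabolic(x, y, z):
    r = np.sqrt(x**2 + y**2 + z**2)
    xi = r + z
    eta = r - z
    phi = np.arctan2(x, y)   # quadrant-safe reading of tan^-1(x/y); an implementation choice
    return xi, eta, phi

# Example: a point on the downstream (negative-z) side of the atom, in atomic units
print(cartesian_to_parabolic(10.0, 5.0, -20.0))
```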
Let us now specify the behavior of the pair solutions {Υ ǫF n1m ,Ῡ ǫF n1m } at distances after the barrier. Indeed, the regular and irregular functions can be written in the following WKB form: where k(η) = −m 2 /η 2 + (1 − β n1 )/η + ǫ/2 + F η/4 is the local momentum term with the Langer correction being included, η 0 is the position of the outermost classical turning point and the phase δ n1 is the absolute phase induced by the combined Coulomb and electric fields. The phase γ n1 corresponds to the relative phase between the regular and irregular functions, namely {Υ,Ῡ}. We recall that at short distances their relative phase is exactly π/2, though as they probe the barrier at larger distances their relative phase is altered and hence after the barrier the short range regular and irregular functions differ by 0 < γ n1 < π and not just π/2. We should remark that after the barrier the amplitudes of the pair {Υ,Ῡ} are equal to each other and their relative phase in general differs from π/2. On the other hand, at shorter distances before the barrier the amplitudes of the {Υ,Ῡ} basically are not equal to each other and their relative phase is exactly π/2. This ensures that the Wronskian of the corresponding solutions possesses the same value at all distances and provides us with insight into the interconnection between amplitudes and relative phases.
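To illustrate the WKB construction just described, the sketch below locates the outermost classical turning point η₀ and accumulates the phase ∫ k(η) dη beyond it. Reading the quoted local-momentum expression as k²(η), so that k is its square root, is our interpretation of the garbled source, and the parameter values only roughly mimic the energy and field strength quoted later in the text.

```python
# Sketch of the WKB phase accumulated beyond the outermost turning point eta_0.
# Assumption: the quoted expression is k^2(eta) (Langer-corrected), i.e.
# k(eta) = sqrt(-m^2/eta^2 + (1 - beta)/eta + eps/2 + F*eta/4).
# Parameter values below only roughly approximate eps = 135.8 cm^-1 and F = 640 V/cm in a.u.
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

def k_squared(eta, m, beta, eps, F):
    return -m**2 / eta**2 + (1.0 - beta) / eta + eps / 2.0 + F * eta / 4.0

def wkb_phase(eta_max, m=1, beta=0.5, eps=6.19e-4, F=1.24e-7):
    grid = np.linspace(1e-3, eta_max, 20000)
    ks = k_squared(grid, m, beta, eps, F)
    crossings = np.where(np.diff(np.sign(ks)) > 0)[0]      # negative -> positive crossings
    i = crossings[-1]                                       # bracket of the outermost turning point
    eta0 = brentq(k_squared, grid[i], grid[i + 1], args=(m, beta, eps, F))
    phase, _ = quad(lambda e: np.sqrt(k_squared(e, m, beta, eps, F)), eta0, eta_max, limit=200)
    return eta0, phase

print(wkb_phase(eta_max=2000.0))
```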
The key concept of Harmin's theoretical framework is to associate the relevant phases at short distances in the absence of an external field, i.e. δ ℓ (see Eq. (1)) to the scattering phases at large distances where the electric field contributions cannot be neglected. This can be achieved by mapping the corresponding regular and irregular solutions from spherical to parabolic-cylindrical coordinates as we discuss in the following.
B. Mapping of the regular functions from spherical to parabolic-cylindrical coordinates
The most intuitive aspect embedded in the present problem is that the Hamiltonian of the motion of the electron right outside the core possesses a spherical symmetry which in turn at greater distances due to the field becomes parabolic-cylindrically symmetric. Therefore, a proper coordinate transformation of the corresponding energy normalized wave functions from spherical to parabolic cylindrical coordinates will permit us to propagate to asymptotic distances the relevant scattering or photoionization events initiated near the core. Indeed at distances r ≪ F −1/2 the regular functions in spherical coordinates are related to the parabolic cylindrical ones according to the following relation: where f ǫℓm (r) are the regular solutions in spherical coordinates with ℓ being the orbital angular momentum quantum number. The small distance behavior is with N ǫℓ a normalization constant (see Eq. (13) in Ref. [2]). Therefore, from the behavior at small distances of the parabolic-cylindrical and spherical solutions the frame transformation U ǫF m n1ℓ has the following form: where n 1 = β n1 ν − 1/2 − m/2 and ν = 1/ √ −2ǫ. The local frame transformation U is plotted for four different angular momenta, namely ℓ = 1, 2, 3 and 6 where we set m = 1 at energy ǫ = 135.8231 cm −1 and field F = 640 V/cm. One sees that the local frame transformation U becomes significant in the interval n 1 ∈ (38, 79) which essentially corresponds to β n1 ∈ (0, 1). For β n1 < 0 or β n1 > 1 the local frame transformation vanishes rapidly. This behavior mainly arises from the normalization amplitudes N F ξ and N F η , which obey the following relations: Note that these expressions are approximately valid only for positive energies and they are exact for F = 0.
From the expressions in Eq. (9) it becomes evident that for negative eigenvalues β n1 the amplitude N F ξ vanishes exponentially while N F η remains practically finite. Similarly, for the case of β n1 > 1 the amplitude N F η vanishes exponentially, and these result in the behavior depicted in Fig.1. Another aspect of the local frame transformation U is its nodal pattern shown in Fig.1. For increasing ℓ the corresponding number of nodes increases as well. For m = 1, every U ǫF m n1ℓ possesses ℓ − 1 nodes.
C. Mapping of the irregular functions from spherical to parabolic-cylindrical coordinates
Having established the mapping between the regular solutions of the wave function in spherical and paraboliccylindrical coordinates, the following focuses on the relation between the irregular ones.
The irregular solution in the parabolic-cylindrical coordinates has the following form: Recall that In order to relate Eq. (10) to the irregular functions in spherical coordinates we employ Green's functions as was initially suggested in [1]. More specifically, the principal value Green's function for the pure Coulomb Hamiltonian G (C) P (r, r ′ ), is matched with a Green's function of the Coulomb plus Stark Hamiltonian G (C+F ) (r, r ′ ), which is expressed in parabolic-cylindrical coordinates.
Of course, in general the two Green's functions differ from each other since they correspond to different Schrödinger equations. However, at small distances the field term in the Stark Hamiltonian becomes negligible in comparison with the Coulomb term. Therefore, in this restricted region of the configuration space, i.e. r ≪ F −1/2 , the Stark Hamiltonian is virtually identical to the Coulomb Hamiltonian, whereby the corresponding Green's functions are equivalent to an excellent approximation. We refer to this region as the Coulomb zone.
For positive energies recall that the principal value Green's function is uniquely defined in the infinite configuration space, and it consists of a sum of products of regular and irregular functions. The employed regular and irregular functions are defined such that their relative phase is exactly π/2 asymptotically [30,31]. Therefore, according to the above mentioned arguments the principal value Green's function obeys the relation expressed in spherical coordinates: where the {f, g} solutions correspond to the regular and irregular functions as they are defined in Eq. (1) Note that the principal value Green's function of the Coulomb Hamiltonian in spherical and in parabolic-cylindrical coordinates are equal to each other, namely G (C),sc P ≡ G (C),pcc P (the abbreviations sc and pcc stand for spherical and parabolic-cylindrical coordinates, respectively).
On the other hand for negative energies, by analytically continuing the {f, g} Coulombic functions across the threshold yields the relation G (C),sc ≡ G (C),pcc . The G (C) is the so called smooth Green's function which is related to a Green's function bounded at r = 0 and at infinity according to the expression [32]: where β(ǫ) = π(ν − ℓ) with ν = 1/ √ −2ǫ is the phase accumulated from r = 0 up to r → ∞. Assume that ǫ n (i.e. ν → n ∈ ℵ * ) are the eigenergies specified by imposing the boundary condition at infinity where n denotes a counting index of the corresponding bound states. Then in the right hand side of Eq. (12) the second term at energies ǫ = ǫ n diverges while the first term is free of poles. The smooth Green's function is identified as the one where the two linearly independent solutions have their relative phase equal to π/2 at small distances. Furthermore, the singularities in Eq. (12) originate from imposing the boundary condition at infinity, though in the spirit of multichannel quantum defect theory we can drop this consideration and solely employ the G (C) which in spherical coordinates reads In view of the now established equality between the principal value (smooth) Green's functions at positive (negative) energies in spherical and parabolic cylindrical coordinates for the pure Coulomb Hamiltonian, the discussion can proceed to the Stark Hamiltonian. Hence as mentioned above in the Coulomb zone, i.e. r ≪ F −1/2 , the Stark Hamiltonian is approximately equal to the pure Coulomb one. This implies the existence of a Green's function, G (C+F ) , for the Stark Hamiltonian which is equal to the G (C),pcc P (G (C),pcc ), and which in turn is equal to Eq. (11) [Eq. (13)] at positive (negative) energies. More specifically, the G (C+F ) the Green's function expressed in parabolic-cylindrical coordinates is given by the expression: where the functions {ψ, χ} are the regular and irregular solutions of the Stark Hamiltonian, which at small distances (in the classically allowed region) have a relative phase of π/2. This originates from π/2 relative phase of the {Υ,Ῡ} as was mentioned is subsection A. The Wronskian W [Υ ǫF n1m ,Ῡ ǫF n1m ] = (2/π) sin γ n1 yields {ψ, χ} solutions have the same energy normalization as in the {f, g} coulomb functions.
We should point out that Eq. (14) is not the principal value Green's function of the Stark Hamiltonian. Indeed, it can be shown that principal value Green's function of the Stark Hamiltonian, namely G (C+F ) P and the Green's function G (C+F ) obey the following relation: where we observe that either for positive energies or for n 1 channels which lie above the saddle point of the Stark barrier the second term vanishes. This occurs due to the fact that γ n1 ≈ π/2 since the barrier does not alter the relative phases between the regular and irregular solutions. For the cases where the barrier effects are absent the G (C+F ) is the principal value Green's function of the Stark Hamiltonian as was pointed out by Fano [1]. However, in the case of non hydrogenic atoms in presence of external fields the barrier effects are significant especially at negative energies. Therefore the use of solely the principal value Green's function G (C+F ) P would not allow a straightforward implementation of scattering boundary conditions. This is why the second term in Eq. (15) has been included.
From the equality between Eqs. (11) [or (13)] and (14), hereafter with the additional use of Eq. (7), the mapping of the irregular solutions is given by the following expression: Additionally, Eq. (7) conventionally can be written as Note that in Eqs. (16) and (17) where k(r, θ) (k 0 (r)) indicates the local momentum with (without) the electric field F . In Eq. (18) it is observed that for field strength F = 1 kV/cm and r < 50 a.u. the phase modification due to existence of the electric field is less than 0.001 radians. This simply means that at short distances both sides of Eq. (16) should exhibit practically uniform phase over the angle θ.
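The order-of-magnitude statement above (a field-induced phase modification below about 0.001 radians for F = 1 kV/cm and r < 50 a.u.) can be checked with a short quadrature. Because Eq. (18) itself is not reproduced in this excerpt, the sketch below assumes a generic Coulomb-plus-uniform-field form for the local momenta; it is meant only as a consistency check, not as the paper's exact expression.

```python
# Consistency check of the statement above: for F = 1 kV/cm the accumulated WKB phase
# difference with and without the field over r < 50 a.u. stays near or below 1e-3 rad.
# The Coulomb-plus-uniform-field form of the local momenta is an assumption made here,
# since Eq. (18) is not reproduced in this excerpt. Atomic units; theta = 0 is the worst case.
import numpy as np
from scipy.integrate import quad

eps = 6.19e-4                  # illustrative electron energy (~135.8 cm^-1)
F = 1.0e3 / 5.142e9            # 1 kV/cm expressed in atomic units

def k_field(r, theta=0.0):
    return np.sqrt(2.0 * (eps + 1.0 / r - F * r * np.cos(theta)))

def k_free(r):
    return np.sqrt(2.0 * (eps + 1.0 / r))

delta_phase, _ = quad(lambda r: k_free(r) - k_field(r), 1e-2, 50.0)
print(f"accumulated phase difference up to 50 a.u.: {delta_phase:.2e} rad")
```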
Recapitulating Eqs. (16) and (17) constitute the mapping of the regular and irregular functions respectively from spherical to parabolic cylindrical coordinates.
III. SCATTERING OBSERVABLES IN TERMS OF THE LOCAL FRAME TRANSFORMATION
This section implements Harmin frame transformation theory to determine all the relevant scattering observables.
A. The asymptotic form of the frame transformed irregular solution and the reaction matrix The irregular solutions which we defined in Eq. (6) are not the usual ones of the scattering theory since in the asymptotic region, namely η → ∞, they do not lag by π/2 the regular functions, Eq. (5). Hence, this particular set of irregular solutions should not be used in order to obtain the scattering observables which are properly defined in the asymptotic region.
However, by linearly combining Eqs. (5) and (6) we define a new set of irregular solutions which are energynormalized, asymptotically lag by π/2 the regular ones, and read: where this equation together with Eq. (5) correspond to a set of real irregular and regular solutions according to the usual conventions of scattering theory. The derivation of the reaction matrix follows. Eqs. (19) and (10) are combined and then substituted into Eq. (16) such that the irregular solution in spherical coordinates is expressed in terms of theῩ ǫF, scat Hereafter, the short-range wave function ( Eq. (1)) expressed in spherical coordinates is transformed via the LFT U into the asymptotic wave function. Specifically, Then from Eq. (22) and after some algebraic manipulations the reaction matrix solutions are written in a compact matrix notation as where I is the identity matrix, the matrices cos δ, tan δ, and cot γ are diagonal ones. Note thatψ (χ) indicates a vector whose elements are the ψ ǫF n1m (r) (χ ǫF n1m (r)) functions. Similarly, the elements of the vector Ψ are provided by Eq. (1). Then from Eq. (24) the reaction matrix obeys the following relation: In fact the matrix product U tan δ U T can be viewed as a reaction matrix K which does not encapsulates the impact of the Stark barrier on the wave function. Moreover, as shown in Ref. [33] the recasting of the expression for the reaction matrix R in form that does not involve the inverse [U T ] −1 improves its numerically stability. In addition, it can be shown with simple algebraic manipulations that the reaction matrix is symmetric. Note that this reaction matrix R should not be confused with the Wigner-Eisenbud R-matrix. The corresponding physical S-matrix is defined from the R-matrix via a Cayley transformation, namely where clearly this S-matrix is equivalent to the corresponding result of Ref. [29]. Also, the S-matrix in Eq. (25) is unitary since the corresponding R-matrix is real and symmetric.
B. Dipole matrix and outgoing wave function with the atom-radiation field interaction
As was already discussed, the pair of parabolic regular and irregular solutions {ψ, χ} are the standing-wave solutions of the corresponding Schrödinger equation. However, by linearly combining them and using Eq. (24), the corresponding energy-normalized outgoing/incoming wave functions are expressed as: where the elements of the vectors X ± r are defined by the relation [X ± (r)] ǫF n1m = (−χ ǫF n1m (r) ± iψ ǫF n1m (r))/ √ 2. In the treatment of the photoionization of alkali-metal atoms, the dipole matrix elements are needed to compute the cross sections which characterize the excitation of the atoms by photon absorption. Therefore, initially we assume that at small distances the short-range dipole matrix elements possess the form d ℓ = Ψ ǫℓm |ε ·r |Ψ init . Note that the termε ·r is the dipole operator, theε denotes the polarization vector and |Ψ init indicates the initial state of the atom. Then the dipole matrix elements which describe the transition amplitudes from the initial to each n 1 -th of the reaction-matrix states is Now with the help of Eq. (27) we define the dipole matrix elements for transitions from the initial state to the incoming wave final state which has only outgoing waves in the n 1 − th channel. The resulting expression is Eq. (28) provides the necessary means to properly define the outgoing wave function with the atom-field radiation. As it was shown in Ref. [34] the outgoing wave function can be derived as a solution of an inhomogeneous Schrödinger equation that describes the atom being perturbed by the radiation field. Formally this implies that where Ψ out (r) describes the motion of the electron after its photoionization moving in the presence of an electric filed, H is the Stark Hamiltonian with ǫ being the energy of ionized electron. The Ψ out (r) can be expanded in outgoing wave functions involving the dipole matrix elements of Eq. (28). More specifically we have that
C. Wave function microscopy and differential cross sections
Recent experimental advances [16][17][18][19] have managed to detect the square module of the electronic wave function, which complements a number of corresponding theoretical proposals [20][21][22][23]. This has been achieved by using a position-sensitive detector to measure the flux of slow electrons that are ionized in the presence of an electric field.
The following defines the relevant observables associated with the photoionization-microscopy. The key quantity is the differential cross section which in turn is defined through the electron current density. As in Ref. [34], consider a detector placed beneath the atomic source with its plane being perpendicularly to the axis of the electric field. Then, with the help of Eq. (30) the electron current density in cylindrical coordinates has the following form: where z det indicates the position of the detector along the z-axis, c is the speed of light and ω denotes the frequency of the photon being absorbed by the electron. The integration of the azimuthal φ angle leads to the differential cross section per unit length in the ρ coordinate. Namely, we have that
IV. EIGENCHANNEL R-MATRIX CALCULATION
Harmin's Stark effect theory for nonhydrogenic atoms is mainly based on the semi-classical WKB approach. In order to eliminate the WKB approximation as a potential source of error, this section implements a fully quantal description of Harmin's theory based on a variational eigenchannel R-matrix calculation as was formulated in Ref. [35,36] and reviewed in [37]. As implemented here using a B-spline basis set, the technique also shares some similarities with the Lagrange-mesh R-matrix formulation developed by Baye and coworkers [38]. The present application to a 1D system with both an inner and an outer reaction surface accurately determines regular and irregular solutions of the Schrödinger equation in the η degrees of freedom. The present implementation can be used to derive two independent solutions of any onedimensional Schrödinger equation of the form where The present application of the non-iterative eigenchannel R-matrix theory adopts a reaction surface Σ with two disconnected parts, one at an inner radius η 1 and the other at an outer radius η 2 . The reaction volume Ω is the region η 1 < η < η 2 .
This one-dimensional R-matrix calculation is based on the previously derived variational principle [35,39] for the eigenvalues b of the R-matrix, Physically, these R-matrix eigenstates have the same outward normal logarithmic derivative everywhere on the reaction surface consisting here of these two points Σ 1 and Σ 2 . The desired eigenstates obey the following boundary condition: In the present application the ψ-wave functions are expanded as a linear combination of a nonorthogonal Bspline basis [40], i.e.
where P i denote the unknown expansion coefficients and B i (η) stands for the B-spline basis functions. The first term in the left hand side of Eq. (37) was regarded as the "closed-type basis set in [37] because every function B c (η) vanishes on the reaction surface, i.e. B c (η 1 ) = B c (η 2 ) = 0. The two basis functions B I (η) and B O (η) correspond to the "open-type basis functions of Ref. [37] in that they are the only B-spline functions that are nonzero on the reaction surface. Specifically, only B I (η) is nonzero on the inner surface η = η 1 (Σ 1 ) and only B O (η) is nonzero on the outer surface η = η 2 (Σ 2 ). Moreover the basis functions B I and B O have no region of overlap in the matrix elements discussed below.
Insertion of this trial function into the variational principle leads to the following generalized eigenvalue equation: In addition, the real, symmetric matrices Γ and Λ are given by the following expressions for this onedimensional problem: Λij = Bi(η1)Bj (η1) + Bi(η2)Bj (η2) = δi,I δI,j + δi,OδO,j, (40) where δ indicates the Kronecker symbol and the ′ are regarded as the derivatives with respect to the η. It is convenient to write this linear system of equations in a partitioned matrix notation, namely: Now the first of these three equations is employed to eliminate P C by writing it as P C = −Γ −1 CC Γ CI P I − Γ −1 CC Γ CO P O , which is equivalent to the "streamlined transformation" in Ref. [36]. This gives finally a 2×2 matrix Ω to diagonalize at each ǫ in order to find the two R-matrix eigenvalues b λ and corresponding eigenvectors P iλ : Here, e.g., the matrix element In any 1D problem like the present one, the use of a Bspline basis set leads to a banded structure for Γ CC which makes the construction of Γ −1 CC Γ CI and Γ −1 CC Γ CO highly efficient in terms of memory and computer processing time; this step is the slowest in this method of solving the differential equation, but still manageable even in complex problems where the dimension of Γ CC can grow as large as 10 4 to 10 5 .
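The "streamlined" elimination described above can be summarized in a few lines of linear algebra: the closed-closed block Γ_CC is folded into a 2×2 matrix over the two surface functions, which is then diagonalized. The sketch below uses a dense solver and a random symmetric placeholder for Γ (the text notes that Γ_CC is in fact banded), and it assumes the surface functions are the last two basis functions with B_I(η₁) = B_O(η₂) = 1.

```python
# Sketch of the two-surface "streamlined" step: eliminate the closed-type coefficients P_C
# and diagonalize the resulting 2x2 matrix Omega over the surface functions B_I, B_O.
# Gamma is a random symmetric placeholder (not actual B-spline integrals) and a dense
# solve replaces the banded solver mentioned in the text.
import numpy as np

rng = np.random.default_rng(0)
n_closed = 50                         # closed-type splines, vanishing on both surfaces
n = n_closed + 2                      # plus the open-type functions B_I and B_O

A = rng.standard_normal((n, n))
Gamma = 0.5 * (A + A.T)               # real, symmetric stand-in for the Gamma matrix

closed_idx = slice(0, n_closed)
open_idx = slice(n_closed, n)         # the two surface (open-type) indices

Gamma_CC = Gamma[closed_idx, closed_idx]
Gamma_CO = Gamma[closed_idx, open_idx]    # coupling of the closed block to B_I and B_O
Gamma_OO = Gamma[open_idx, open_idx]

# Omega = Gamma_OO - Gamma_OC Gamma_CC^{-1} Gamma_CO  (Schur complement of the closed block)
Omega = Gamma_OO - Gamma_CO.T @ np.linalg.solve(Gamma_CC, Gamma_CO)

# With B_I(eta_1) = B_O(eta_2) = 1 the Lambda matrix restricted to the surface functions
# is the 2x2 identity, so the generalized eigenproblem reduces to an ordinary one.
b_eigenvalues, surface_vectors = np.linalg.eigh(Omega)

# Closed-type coefficients of each eigenchannel: P_C = -Gamma_CC^{-1} Gamma_CO P_O
P_C = -np.linalg.solve(Gamma_CC, Gamma_CO @ surface_vectors)
print("R-matrix eigenvalues b_lambda:", b_eigenvalues)
```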
Again, the indices C refer to the part of the basis expansion that is confined fully within the reaction volume and vanishes on both reaction surfaces.
The diagonalization of Eq. (44) provides us with the b λ -eigenvalues and the corresponding eigenvectors, which define two linearly independent wave functions ψ λ , with λ = 1, 2. These obey the Schrödinger equation, Eq. 33 and have equal normal logarithmic derivatives at η 1 and η 2 . The final step is to construct two linearly independent solutions that coincide at small η with the regular and irregular field-free η-solutions f ǫβm (η) and g ǫβm (η) (Cf. Appendix A). These steps are rather straightforward and are not detailed further in this paper.
A. The frame transformed irregular solutions
To reiterate, Zhao et al. [29] claim that the Fano-Harmin frame transformation is inaccurate, based on a disagreement between their full numerical calculations of the differential cross section and the LFT calculation. They then claim to have investigated the origin of the discrepancy and pinpointed an error in the frame transformed irregular function. The present section carefully tests the main conclusion of Ref. [29] that Eq. (16) does not accurately yield the development of the irregular spherical Coulomb functions into a parabolic fielddependent solution (see Fig.5 in Ref. [29]). Fig.2 illustrates the irregular solutions in spherical coordinates where r = (r, θ = 5π/6, φ = 0) and the azimuthal quantum number is set to be m = 1. The energy is taken to be ǫ = 135.8231 cm −1 and the field strength is F = 640 V/cm. In addition we focus on the regime where r < 90 au ≪ F −1/2 . In all the panels the black solid line indicates the analytically known irregular Coulomb function, namely g (C) ǫℓm (r) r . Fig.2(a) and (b,c) examine the cases of angular momentum ℓ = 1 and 6, respectively. All the green dashed lines, the diamonds and dots correspond to the frame transformed irregular Coulomb functions in spherical coordinates, namely g (LF T ) ǫℓm (r) r , which are calculated by summing up from 0 to n 1 the irregular χ ǫF n1m (r) functions in the parabolic coordinates as Eq. (16) indicates.
The positive value of the energy ensures that all the n 1 -channels lie well above the local maximum in the η whereby the phase parameter γ n1 is very close to its semiclassical expected value π/2. Furthermore, since only short distances are relevant to this comparison, namely r < 90 au, this means that the summedῩ ǫF n1m (η) functions on the right-hand side of Eq. (16) in the n 1 -th irregular χ ǫF n1m (r) will be equal to analytically known Coulomb irregular functions in the parabolic coordinates. This is justified since at the interparticle distances that we are interested in, namely ≪ F −1/2 , the electric field is negligible in comparison to the Coulomb interaction. Then the corresponding Schrödinger equation becomes equal to the Schrödinger equation of the pure Coulomb field which is analytically solvable in spherical and parabolic coordinates as well. Thus, in the following we employ the above-mentioned considerations in the (black line) particularly in the interval of small interparticle distances r. This can be explained with the help of Fig.1, which demonstrates that the LFT U for ℓ = 1 possesses nonzero elements for n 1 > 60, and those elements are crucial for the growth of the irregular solution at small distances. Therefore, the summation in Eq. 16 for ℓ = 1 does not begin to achieve convergence until n 1 ≥ 100, where the corresponding ele- r . In contrast to the case where ℓ = 1, the convergence is observed to be very slow for ℓ = 6. The main reason for this is that for r < 20 au we are in the classically forbidden region where g (LF T ) ǫℓm r diverges as 1/r ℓ+1 . From Eq. (16) it is clear that the sum will diverge due to the divergent behavior of the irregular functions of the η direction, namely theῩ ǫF n1m (η). Hence, in order thē Υ ǫF n1m (η) to be divergent in the interval of 10 to 80 au it is important to take into account many n 1 states which correspond to β n1 > 1 since only then the term 1 − β/η becomes repulsive and producing the diverging behavior appropriate to a classical forbidden region. Fig.2(c) is a zoomed-out version of the functions shown in panel (b), for r < 20. Similarly, Fig.3 explores the cases of ℓ = 2 (see Fig.3(a)) and ℓ = 3 (see Fig.3(b)). In both panels the black solid lines indicate the field free Coulomb function in spherical coordinates Having analyzed the LFT calculations at positive energies, Fig. 4 illustrates the corresponding LFT calculations at negative energies, namely ǫ = −135.8231 cm −1 where the field strength is set to be F = 640 V/cm. Note that these parameters [41] are used for an analogous comparison in Fig.5 of Ref. [29]. In all panels the azimuthal quantum number is considered to be m = 1, the solid black lines denote the analytically known irregular Coulomb function [ ]. In Fig. 4(a-d) the ℓ = 1, 2, 3 and 6 cases are considered at r = (r, θ = 5π 6 , φ = 0), respectively. In addition, for all the panels of Fig. 4 in the LFT calculations the summation over the n 1 states is truncated at n tot 1 = 25 for the considered energy and field strength values. This simply means that in the summation of the framed-transformed irregular function contribute solely all the fractional charges β n1 that obey the relation β n1 < 1. These states essentially describe all the relevant physics since only for these states the "down field" part of the wave function can probe the core either above or below the Stark barrier. 
Therefore, the n 1 states for which β n1 > 1 physically are irrelevant since they yield a strongly repulsive barrier in the "down field" degree of freedom shielding completely the core. However, for these states the considered pair of regular and irregular functions in Sec. II C for the η-degree of freedom acquire imaginary parts due to the fact that the colliding energy is below the minimum of the corresponding Coulomb potential. Consequently, these states are omitted from the sum of the frame-transformed irregular function. The omission of states with β n1 > 1 mainly addresses the origin of the accuracy in the LFT calculations.
The impact of the omitted states is demonstrated in Fig. 4 where discrepancies are observed as the orbital angular momentum ℓ increases since more n 1 states are needed. Indeed, in panels (a), (b) and (c) of Fig. 4 a good agreement is observed between the framed-transformed irregular function and the Coulombic one (black solid line). On the other hand, in panel (d) of Fig. 4 small discrepancies, particularly for r > 20 are observed occurring due to poor convergence over the summation of the n 1 states. Though, these discrepancies are of minor importance since they correspond to negligible quantum defects yielding thus minor contributions in the photoabsorption cross section.
The bottom line of the computations shown in this subsection is that the frame-transformed irregular functions $g^{(LFT)}_{\epsilon\ell m}(\mathbf{r})$ do not display, at least for ℓ = 1 or 2, the inaccuracies that were claimed by Zhao et al. in Ref. [29]. For negative energies, our evidence suggests that the inclusion of $n_1$ states with $\beta_{n_1} > 1$ will enhance the accuracy of the frame-transformed irregular functions, as is already demonstrated by the LFT calculations at positive energies.
B. Photoionization microscopy
Next we compute the photoionization microscopy observable for Na atoms, namely the differential cross section in terms of the LFT theory. The system considered is a two step photoionization of ground-state Na in the presence of an electric field F of strength 3590 V/cm, which is again the same system and field strength treated in Ref. [29]. The two consecutive laser pulses are assumed to be π polarized along the field axis, which trigger in succession the following two transitions: (i) the excitation of the ground state to the intermediate state 2 P 3/2 , namely [Ne] 3s 2 S 1/2 → [Ne] 3p 2 P 3/2 and (ii) the ionization from the intermediate state 2 P 3/2 . In addition, due to spin-orbit coupling the intermediate state will be in a superposition of the states which are associated with different orbital azimuthal quantum numbers, i.e. m = 0 and 1. Hyperfine depolarization effects are neglected in the present calculations. Fig. 5 illustrates the differential cross section dσ(ρ,z det ) dρ for Na atoms, where the detector is placed at z det = −1 mm and its plane is perpendicular to the direction of the electric field. Since spin-orbit coupling causes the photoelectron to possess both azimuthal orbital quantum numbers m = 0, 1, the contributions from both quantum numbers are explored in the following. More specifically, this method uses a discretization of the Schrödinger equation on a grid of points in the radial coordinate r and an orbital angular momentum grid in ℓ. The main framework of the method is described in detail in Sec. 2.1 of Ref. [42] and below only three slight differences are highlighted. In order to represent a cw-laser, the source term was changed to S 0 ( r, t) = [1 + erf(t/t w )]zψ init (r) with ψ init either the 3p, m = 0 or 3p, m = 1 state. The time dependence, 1+erf(t/t w ) gives a smooth turn-on for the laser with time width of t w ; t w is chosen to be of the order a few picoseconds. The second difference is that the Schrödinger equation is solved until the transients from the laser turn on decayed to zero. The last difference was in how the differential cross section is extracted. The radial distribution in space slowly evolves with increasing distance from the atoms and the calculations become challenging as the region represented by the wave function increases. To achieve convergence in a smaller spatial region, the velocity distribution in the ρ-direction is directly obtained. The wave function in r, ℓ is numerically summed over the orbital angular momenta ℓ yielding ψ m (ρ, z) where m is the azimuthal angular momentum. Finally, using standard numerical techniques a Hankel transformation is performed on the wave function ψ m (ρ, z) which reads ψ m (k ρ , z) = dρρJ m (k ρ ρ)ψ m (ρ, z) which can be related to the differential cross section. The cross section is proportional to k ρ |ψ m (k ρ , z)| 2 in the limit that z → −∞. The k ρ is related to the ρ in Fig.5 through a scaling factor. The convergence of our results is tested with respect to number of angular momenta, number of radial grid points, time step, |z| max , t w and final time.
The bandwidth that the following calculations exhibit is equal to 0.17 cm$^{-1}$. In addition, in order to check the validity of our velocity-mapping calculation, we directly compute the differential cross section numerically through the electron flux defined in Eq. (31). Agreement at the percent level is observed, solidifying our investigation. One sees immediately in panels (a-d) of Fig. 5 that the LFT calculations are in good agreement with the full numerical ones, with only minor areas of disagreement. In particular, the interference patterns in all calculations are essentially identical. An important point is that panels (a) and (c) do not exhibit the serious claimed inaccuracies of the LFT approximation that were observed in Ref. [29]. In fact, the present LFT calculations are in excellent agreement with the corresponding LFT calculations of Zhao et al. Evidently, this suggests that the disagreement observed by Zhao et al. originates from their coupled-channel calculations and not from the LFT theory, in particular for the case of m = 0. Indeed, panels (b) and (d) of Fig. 5 are in excellent agreement with the corresponding results of both the LFT and coupled-channel calculations of Ref. [29].
VI. SUMMARY AND CONCLUSIONS
The present study reviews Harmin's Stark-effect theory and develops a standardized form of the correspond-ing LFT theory. In addition, the LFT Stark-effect theory is formulated in the traditional framework of scattering theory including its connections to the photoionization observables involving the dipole matrix elements, in particular the differential cross section. In order to quantitatively test the LFT, the present formulation does not use semi-classical WKB theory as was utilized by Harmin. Instead the one-dimensional differential equations are solved within an eigenchannel R-matrix framework. This study has thoroughly investigated the core idea of the LFT theory, which in a nutshell defines a mapping between the irregular solutions of two regions, namely spherical solutions in the field-free region close to the origin and the parabolic coordinate solutions relevant from the core region all the way out to asymptotic distances. For positive energies, our calculations demonstrate that indeed the mapping formula Eq. (16) predicts the correct Coulomb irregular solution in spherical coordinates (see Figs.2 and 3). On the other hand, at negative energies it is demonstrated (see Fig.4) that the summation over solely "down field" states β n1 < 1 imposes minor limitations in the accuracy of LFT calculation mainly for ℓ > 3. Our study also investigates the concept of wave function microscopy through calculations of photoionization differential cross sections for a Na atom in the presence of a uniform electric field. The photoionization process studied is a resonant two-photon process where the laser field is assumed to be π polarized. The excellent agreement between the LFT and the full velocity mapping calculation has been conclusively demonstrated, and the large discrepancies claimed by Ref. [29] in the case of m int = 0 → m f = 0 are not confirmed by our calculations.
These findings suggest that the LFT theory passes the stringent tests of wave function microscopy, and can be relied upon both to provide powerful physical insight and quantitatively accurate observables, even for a complicated observable such as the differential photoionization cross section in the atomic Stark effect.
${}_1F_1(a;b;x)$ diverges when $b = -1, -2, -3, \ldots$ Moreover, we observe that when λ acquires half-integer values, i.e. λ = λ_c, the numerator and denominator of $g_{\bar\epsilon\lambda}$ in Eq. (A4) both vanish. Therefore, employing l'Hôpital's rule on $g_{\bar\epsilon\lambda}$ in Eq. (A4) we obtain the following expression: Hence, Eqs. (A3) and (A6) correspond to the regular and irregular Coulomb functions for non-positive half-integers at negative energies, respectively. This particular set of solutions possesses a π/2 relative phase at short distances, and they are used as boundary conditions in the eigenchannel R-matrix calculations. A similar construction is possible with the help of Ref. [43] for positive energies, but it is straightforward and not presented here.
"year": 2015,
"sha1": "021ca4e8438361b9f8f746616b52155850559997",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevA.91.043424",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "021ca4e8438361b9f8f746616b52155850559997",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
7471044 | pes2o/s2orc | v3-fos-license | Spike-interval Triggered Averaging Reveals a Quasi-Periodic Spiking Alternative for Stochastic Resonance in Catfish Electroreceptors
Catfish detect and identify invisible prey by sensing their ultra-weak electric fields with electroreceptors. Any neuron that deals with small-amplitude input has to overcome sensitivity limitations arising from inherent threshold non-linearities in spike-generation mechanisms. Many sensory cells solve this issue with stochastic resonance, in which a moderate amount of intrinsic noise causes irregular spontaneous spiking activity with a probability that is modulated by the input signal. Here we show that catfish electroreceptors have adopted a fundamentally different strategy. Using a reverse correlation technique in which we take spike interval durations into account, we show that the electroreceptors generate a supra-threshold bias current that results in quasi-periodically produced spikes. In this regime stimuli modulate the interval between successive spikes rather than the instantaneous probability for a spike. This alternative for stochastic resonance combines threshold-free sensitivity for weak stimuli with similar sensitivity for excitations and inhibitions based on single interspike intervals.
Introduction
The generation of neural action potentials involves a fundamental threshold nonlinearity that often interferes with processing small-amplitude stimuli. Although in some cases thresholds could help to suppress unwanted noise, they often limit sensitivity in sensory systems by blocking sub-threshold modulations of activity. Comparable problems are observed in many fields of science and technology and great progress has been made in understanding how systems with an inherent threshold can be optimized to provide optimal differential sensitivity. The solution adopted by a wide range of systems consists of exploiting stochastic resonance, i.e., the addition of an optimized amount of noise that induces a moderate, highly irregular, spontaneous background activity. Stimulus-evoked modulations of this spontaneous activity then provide threshold-free detection [1,2,3]. Stochastic resonance theory explains that noise is essential for linearization and actually helps rather than hinders detection [4]. Most neurons in the central nervous system operate as predicted by this theory [4,5]. Here we show that electroreceptors of the passively electric Brown Bullhead catfish (Ictalurus nebulosus) have adopted a radically different strategy: the spike generation mechanism is set to produce high-rate quasi-regular spiking which not only prevents the underlying nonlinearity from hindering small-signal processing but also implies that stimuli do not modulate the probability of spike occurrence, but primarily the duration of interspike intervals.
Catfish electroreceptors consist of 10-20 sensory cells in the lumen of an ampul, converging onto one or two afferents with excitatory synapses [6]. Catfish live in murky waters and use their electroreceptors to detect electric fields generated by potential prey. These electric fields are extremely weak [7] and steeply decline in strength with increasing distance to the source. Electroreceptor performance therefore directly limits the distance over which prey can be detected, suggesting that they should be optimally adapted to sensing, encoding and processing signals relevant to this task. Whereas e.g. the electroreception ampullae of Lorenzini in sharks operate in accordance with stochastic resonance theory [8], the ampullary electroreceptors of Ictalurus nebulosus employ a different type of behavior. They exhibit a high spontaneous activity (about 50 spikes/s) that is far more regular than the typical random Poisson process one would expect for noise-driven spontaneous activity [9]. Furthermore, they show no signs of the sub-threshold modulations encountered in ampullae of Lorenzini [10].
The presence of highly regular spontaneous activity contradicts stochastic resonance as a solution to overcome threshold nonlinearities. At first sight, this might suggest that the spike-generation nonlinearities could hamper linear processing of realistically small signals, but this is contradicted by earlier experiments that characterized the electroreceptor's near-linear filter properties [11]. This raises the question of how electroreceptors achieve their highly effective detection of prey by means of weak electrical stimuli [12].
To solve this problem we tested the hypothesis that the electroreceptor's spontaneous activity is not noise-driven, but caused by a DC bias current [13] that repetitively and deterministically drives the afferent neuron to threshold. We introduce an analysis technique that allows us to distinguish such quasi-periodic spiking from the typical behavior expected from stochastic resonance theory. Our analysis is a crucially modified version of a common reverse correlation technique that uses spike triggered averaging to estimate filter properties. Spike triggered averaging (STA) [14,15] recovers the average stimulus profile that precedes a spike. Using Gaussian noise stimuli, STA reveals the system's transfer properties and it provides a good estimate of sensory filter properties preceding spike generation for neurons that operate in a stochastic resonance regime [16]. In such a regime, each spike conveys a similar message and the presence and timing of individual spikes encodes relevant information.
In the quasi-periodic spiking regime, on the other hand, spike presence is determined by the spike generator itself rather than by the input signals. Under these conditions, input signals act by modulating the duration of interspike intervals. To characterize signaling in this regime we calculate Spike Interval Triggered Averages (SITAs), a reverse correlation technique that triggers on pairs of spikes separated by specific intervals, rather than on single spikes. For neurons operating in the stochastic resonance regime, SITA curves for different interspike interval durations match the classic spike triggered average. For neurons operating in the quasi-periodic firing regime, we show that the SITA curves depend heavily on spike interval. For very long and short interspike intervals, the SITA curves become almost sign-inverted copies of each other, suggesting that the message that a spike conveys varies with the duration of the preceding interspike interval.
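As an illustration of this construction, the sketch below computes SITA curves by grouping spikes according to the quantile of their preceding interspike interval and averaging the stimulus segment before each spike. The array names, sampling step, 100 ms window and five-class split are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sita_curves(stimulus, spike_times, dt=0.0005, window=0.1, n_classes=5):
    """Spike-Interval Triggered Averages: classify each spike by the duration of its
    preceding interspike interval and average the stimulus segment preceding it."""
    win = int(window / dt)                        # samples in the pre-spike window
    isi = np.diff(spike_times)                    # interval preceding each spike (first spike dropped)
    spikes = spike_times[1:]
    # class edges from the cumulative ISI distribution -> roughly equally populated classes
    edges = np.quantile(isi, np.linspace(0.0, 1.0, n_classes + 1))
    labels = np.clip(np.searchsorted(edges, isi, side="right") - 1, 0, n_classes - 1)
    curves = np.zeros((n_classes, win))
    counts = np.zeros(n_classes)
    for t_spk, cls in zip(spikes, labels):
        i = int(round(t_spk / dt))
        if win <= i <= len(stimulus):
            curves[cls] += stimulus[i - win:i]
            counts[cls] += 1
    curves /= np.maximum(counts, 1)[:, None]
    sta = curves.mean(axis=0)                     # the conventional STA is the mean of the SITA curves
    return curves, sta
```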
Numerous studies, starting with de Ruyter van Steveninck and Bialek [17], have shown that different spike patterns may be correlated with different stimulus features (see also [18,19,20]). Oswald et al. [19,20], for example, revealed different feature-triggered averages in neurons operating in a spike-bursting regime. Neiman and Russell [21,22] studied the effect of stochastic oscillations on coding in paddlefish electroreceptors, and there is ample evidence for functional consequences of non-renewal spike train statistics on neural coding [23,24]. For catfish electroreceptors, using spike-interval triggered averages we find a pattern of results that clearly differs from these and other effects described previously. SITA analysis reveals major differences in spike generation from that of a Poisson spike generator or of a neuron operating in stochastic resonance mode. Using a simple leaky integrate-and-fire (LIF) model [25,26,27,28,29] in combination with linear filters, we show that this electroreceptor behavior can be explained by an interaction between a linear pre-filter and a dynamic spike generator operating in the quasi-periodic regime. This straightforward model reproduces the measured SITA curves accurately, while at the same time reproducing the near-perfect linear behavior for sinusoidal stimuli [11].

Results

Figure 1 illustrates how SITAs are constructed in a reverse correlation experiment. We recorded spikes from electroreceptors in response to Gaussian white noise stimulation (Fig. 1A) and constructed STAs of the stimulus shape preceding spikes (Fig. 1B) as a function of interval duration. To this end, recorded spikes were divided into five classes based on the duration of their preceding interspike interval (Fig. 1C). Since the total number of spikes was about 75,000 and we divided spikes into 5 equally sized classes, each class consisted of approximately 15,000 spikes. SITA analysis is a generalization of the standard STA (which is the average of all SITA curves), and reveals to what extent stimulus patterns correlate with interval durations, rather than spike timings. Differences between SITA curves for different spike-interval durations imply additional structure in the spike-triggered ensemble that is not picked up by conventional STA analysis.

Figure 1D shows group results of SITA analysis for 26 electroreceptors recorded in 20 catfish. It is immediately clear that curves for different interval durations are very different. In fact, curves for short and long spike-intervals are roughly of opposite polarity. Excitations due to positive preceding stimuli correspond to short intervals whereas inhibitions due to negative stimuli correspond to long intervals. The fact that these effects are of opposite sign means that they largely cancel out in the classic, overall, STA (black curve). Confidence intervals and distributions of peak latencies and peak amplitudes (Fig. 1E) confirm that the results are highly consistent across recordings. Since the shape and frequency content of the different SITA curves is highly similar (Fig. 2), the observed differences between SITA curves cannot be related to different stimulus frequencies. Instead, they reflect a general effect across all frequencies: model simulations will show that the observed effects can be explained by a single linear filter in combination with a spike generator operating in a quasi-periodic regime. Figure 2 also shows the nonlinear behaviour of catfish electroreceptors.
For a linear system the STA reflects the system's impulse response, and its Fourier transform should reflect the linear transfer properties as measured with sinusoids. The example in Figure 2 shows that for catfish electroreceptors these two measures may yield rather different estimates of the filter properties.
The overall STA amplitude is, on average, a factor of 0.26 (±0.05 SD) smaller than the SITA amplitude for the shortest intervals. Because SITAs for long and short intervals are not only polarity inverted but also slightly shifted in time (Fig. 1E, time-to-peak plot), the shape and timing of the overall STA may differ substantially from that of its separate SITA components. Peak latency for the overall STA is, for example, on average 5.8 ms shorter than that for the shortest interspike intervals (Fig. 1E, in red) and about 20 ms shorter than that for the longest interspike interval (Fig. 1E, in blue). Amplitude, timing and shape of the STA are therefore to a large extent determined by opposite SITA shapes for long and short interspike intervals.
Differences between the overall STA and the SITA curves are even more evident if we compare the additional power (sum of squared signals) in the SITA curves relative to the power in the overall STA. For a SITA based on five classes the total mean variance across the five classes, in a time window of 100 ms preceding spikes, is about a factor of 10 larger than the total power in 5 random subdivisions of the STA. For subdivisions into a larger number of classes the difference grows asymptotically to a slightly higher value (about a factor of 12 for 16 classes) because it allows for more accurate estimates of the variation with interval duration. The large additional power in SITAs relative to the STA clearly supports the notion that the overall STA cancels out most of the interval-related variance in the spike-triggered ensemble. A description in terms of a single impulse response that matches the overall STA therefore misses contributions from different SITA components.
To illustrate how SITA analysis distinguishes spike generation within the usual stochastic resonance regime from that within the quasi-periodic firing regime, we adopted a simple Leaky-Integrate-and-Fire (LIF) spike generation model [25,26,27,28,29] in combination with a linear pre-filter (Fig. 3A, see Methods for details). The LIF spike generator is a simplification of the Hodgkin-Huxley model [30] and incorporates only the mere basics of spike generation dynamics. In the absence of external stimulation the membrane potential exponentially recovers to a resting level. Whenever the membrane potential crosses the spiking threshold an action potential is generated and the membrane potential is reset to a fixed, low level. The model operates in stimulus-driven (stochastic resonance) mode for resting levels below the spiking threshold (Fig. 3C) and in quasi-periodic firing mode for resting levels above the spiking threshold (Fig. 3B). In the quasi-periodic setting the model produces results very similar to the electroreceptor measurements. In this regime, recovery of the membrane potential after a reset is sufficient for repetitive firing, which in turn causes the same clear reversals of SITAs that were also evident in the experimentally obtained SITAs. Modulatory effects of stimuli on quasi-periodic spike generation can thus explain SITA reversals: positive stimuli accelerate spike generation whereas negative stimuli (temporarily) postpone spike generation. SITAs of opposite polarity for long and short intervals therefore reflect one and the same linear pre-filter. In the stimulus-driven regime (Fig. 3C) the SITA-curve polarity inversions are absent, and individual traces largely resemble the classic STA. The higher noise levels in the simulated SITA curves of this regime are a consequence of the generally lower spike rate. We used the same linear filter properties, stimulus durations and stimulus dynamics for simulations in the two regimes, which naturally results in a larger number of spikes for quasi-periodic spiking than for purely stimulus-driven spiking.
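A minimal sketch of the two regimes is given below, assuming placeholder values for the time constant, resting, reset and threshold levels; the fitted parameters of the actual model are not reproduced here.

```python
import numpy as np

def lif_spikes(drive, dt=0.0005, tau=0.01, v_rest=0.0, v_reset=-100.0, theta=-20.0):
    """Leaky integrate-and-fire: V relaxes toward v_rest and integrates the drive.
    theta below v_rest gives quasi-periodic firing; theta above v_rest gives
    stimulus/noise-driven firing."""
    v = v_reset
    spike_times = []
    for i, inp in enumerate(drive):
        v += dt * (-(v - v_rest) / tau + inp)     # Euler step of the leaky integration
        if v >= theta:
            spike_times.append(i * dt)
            v = v_reset                            # full reset after every spike
    return np.array(spike_times)

rng = np.random.default_rng(0)
drive = rng.normal(0.0, 50.0, size=200_000)        # stand-in for the linear-filter output
quasi_periodic  = lif_spikes(drive, theta=-20.0)           # threshold below the resting level
stimulus_driven = lif_spikes(600.0 * drive, theta=40.0)    # threshold above rest; needs a large input gain
```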
While electroreceptors illustrate the surprising consequences of a quasi-periodic spike generator, we also verified whether SITA analysis correctly picks up the more standard stochastic resonance type of spike generation. To this end, we applied the technique to recordings from cat retinal ganglion cells, known to operate in the stimulus-driven regime [16,31]. The random pixel arrays used in these studies were broadband in both the spatial and temporal domain to provide accurate estimates of linear response properties [16,31]. The example curves in Figure S1 show great similarity to the model profiles in Figure 3C and lack the characteristic polarity inversions observed for the quasi-periodically spiking electroreceptors. Thus, in retinal ganglion cells each spike conveys the same type of information about the driving input, irrespective of spike-interval duration. In this case, the conventional STA provides a good estimate of the neuronal filter properties that precede spike generation [31,32].
The LIF model reproduces both types of spiking behavior, depending on the threshold level relative to the resting level. Extensive model simulations in which we varied linear filter properties, thresholds, stimulus amplitudes and noise amplitudes revealed that the behavior in Figure 3B can only be obtained for deterministic, repetitively firing neurons with spiking thresholds below the resting level. The LIF model in the stochastic resonance regime cannot reproduce this behavior. In the stochastic resonance regime, spikes occur when excitations drive the potential to threshold; inhibitions therefore remain invisible, unless followed by an excitation. On average, SITAs will thus show a short-latency excitation, with a longer-latency inhibition that is more pronounced for longer intervals (dark blue curve in 3C). Only the model in the quasi-periodic regime can reproduce the SITA results for electroreceptors. The interaction of linear pre-filtering with the dynamics of such a deterministic spike generator implies that stimuli primarily modulate the duration of interspike intervals, whereas the instantaneous probability for a spike is determined by spike generation dynamics rather than stimuli. Conventional STAs cancel out the variations with spike interval duration and are therefore blind to variations in the spike-triggered ensemble due to quasi-periodic spiking.
Our Spike Interval Triggered Average is fundamentally different from the Spike-Triggered Covariances (STCs) that are commonly used to recover multiple response components that might become superimposed or cancel out in the STA [14,33]. Both STAs and STCs are based on the timing of single spikes and do not take interval durations explicitly into account. In contrast, the variation we describe depends critically on these spike-intervals. As a control, we also calculated STCs for our data, but we did not observe relevant eigenvectors beside the first eigenvector (the STA). This is in line with the main effect we observe; a sign reversal, which is irrelevant in an analysis of variance. While the Volterra kernel approach put forward by Marmarelis and co-workers [34,35,36] is especially sensitive to nonlinear summation of response contributions from different stimulus components, it makes no reference to the effects of spike history and interval duration either. Moreover these analyses provide a black-box type approach for filtering plus spike generation together without any reference to underlying mechanisms. Here we show that our SITA analysis actually reveals how spike generation interacts with linear pre-filtering to yield different types of behaviour.
For fly H1-cells, which are sensory neurons involved in optic flow perception, it has been shown that additional information can be extracted from spike trains when more complex spike patterns are taken into account [17,37]. The question therefore arises whether additional information involving multiple spike-intervals could still be hidden in the SITA curves for electroreceptors. To examine the possibility of SITAs reflecting an even more complex combination of multiple interval effects, we extended our analysis to two consecutive interspike intervals. We subdivided each interval class into another 5 subclasses according to the duration of the secondary, preceding spike-interval. With this subdivision, each separate curve is based on the combination of two spike intervals and the question is whether these two contributions simply add up linearly, independent of interval combination, or whether they show interactions. The results (Figure 4) demonstrate that the obtained patterns are nearly perfectly explained by linear combination of SITAs for consecutive intervals. The thick lines show SITAs for different combinations of interval durations, as indicated by the insets: the top panel shows data for a short interval preceded by different interval classes (color coded). Horizontal lines in the insets represent the mean interval durations. Thin lines in the graph show predictions for linear combinations of SITAs, constructed as the average of two SITAs, with time shifts equal to the mean duration of the interval preceding the trigger spike. The match might improve even further if we used actual spike intervals rather than mean spike intervals for a class. The absence of interval-specific interactions demonstrates that extending the analysis to more than a single inter-spike interval yields no additional information. This is in line with the complete reset that follows the generation of a spike in our LIF model.
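The linear prediction described above can be written compactly; the sketch below assumes SITA curves sampled on a common pre-spike time axis and, as in the text, shifts the SITA of the earlier interval back by the mean duration of the interval that directly precedes the trigger spike.

```python
import numpy as np

def predict_two_interval_sita(sita_earlier, sita_last, mean_last_interval, dt):
    """Predict the SITA for spikes preceded by two specific intervals as the average of the
    two single-interval SITAs; the SITA of the earlier interval is shifted further into the
    past by the mean duration of the interval directly preceding the trigger spike.
    Arrays run from -window (index 0) up to the trigger spike (last index)."""
    shift = int(round(mean_last_interval / dt))
    shifted = np.zeros_like(sita_earlier)
    if shift < len(sita_earlier):
        shifted[:len(sita_earlier) - shift] = sita_earlier[shift:]   # zero-padded beyond the window
    return 0.5 * (shifted + sita_last)
```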
Based on the strong dynamic interactions between pre-filtering and spike generation one would expect clearly nonlinear behavior [38,39]. This is indeed what we found for reverse correlation experiments. Different settings for stimulus amplitude greatly affected the shape of the reverse correlation functions, with curves becoming increasingly asymmetric for increasing stimulus amplitudes (Figure 5). For low-amplitude stimuli the timing of different SITA components is quite similar, causing effective cancellation in the overall STA. For increasing stimulus amplitude the time to peak for the excitatory SITA (red curves in Figure 5) becomes smaller whereas the peak latency of the inhibitory component consistently increases. Consequently, the relative amplitude of the overall STA, for instance, grows by a factor of more than two. Also, oscillations that are clearly present at high stimulus amplitudes are virtually absent at low stimulus amplitudes. At the highest stimulus amplitude the inhibitory curve for long interspike intervals (dark blue) shows a strong excitatory component at short latencies, which is absent for low stimulus amplitudes. Model simulations, in which parameters were fitted to a single stimulus amplitude (middle row) and held constant for other amplitudes, correctly predicted this type of nonlinearity and showed similar shifts in peak latency, oscillatory behavior and variation in relative STA amplitude.

Figure 3 caption: (A) The model consists of a linear filter (F(t)), taking the stimulus (S(t)) plus added noise (N_1(t)) as its input, followed by a standard LIF spike generation mechanism. The LIF spike generator performs a leaky integration of the filter output plus a second noise source (N_2(t)). This second noise source corresponds to a high-frequency noise on the spike threshold. If the integrated signal crosses the threshold level (h), a spike is generated and the integrator is reset to a value of −100. (B) Model behavior for quasi-periodic spiking. (C) The behavior in a regime where spike generation is strongly driven by noise and external input signals. The linear filter stages were the same for both simulations, except for a gain factor. The top panels show examples of output signals from the linear filters. The second row displays the course of the 'membrane potential', including reset and post-spike recovery. The two regimes differ in the setting for the threshold (blue line) relative to the resting level (green line) and reset level (red line). For deterministic firing (B) the threshold is set well below the resting level, whereas for input-driven firing (C) it is set above the resting level. The bottom row panels represent the SITAs, with interval classes corresponding to the colors in Fig. 1.
Quasi-periodic spiking combined with linear pre-filtering also reconciles the inherent nonlinearities of spike generation with the nearly perfect linear behavior observed with sine wave stimuli [11]. The frequency transfer properties that we measured with sinusoids were highly similar to those reported by e.g. Bretschneider et al. [11]. Low-frequency slopes roughly corresponded to a half-order characteristic (mean slope 3.34±0.63 dB/octave, for a frequency range of 0.5-3 Hz), low-pass filtering was close to that of a third-order filter (−14.78±6.4 dB/octave, frequency range 20-40 Hz), and optimal frequencies were close to 10 Hz. Figure 6 shows that the model correctly reproduces the linearity for sine wave responses. Here, we fitted the model to SITA curves from a reverse correlation experiment (Figure 6A) and then simulated the responses to sine wave stimuli without adjusting any model parameters. The model (thin lines in Figure 6A) reproduces the recorded SITA curves quite well. Figure 6B shows recorded response amplitudes for sinusoidal stimuli that grow linearly with stimulus amplitudes for all temporal frequencies. The dashed horizontal lines show the corresponding mean spike rates, which are independent of stimulus amplitude and frequency. Model simulations (Fig. 6C) accurately reproduce this pattern of results. Indeed, within the quasi-regular spiking regime, the strong nonlinearity inherent in spike generation does not interfere with the nearly perfect linear behavior for sinusoids.
Discussion
The combination of SITA analysis and model simulations demonstrates that electroreceptor afferents generate action potentials well outside the range of stochastic resonance. Their quasi-periodic spike-generating mechanism leads to a very different transformation of stimuli into spike trains. Specific temporal stimulus patterns (shape and polarity) correspond to different inter-spike interval durations, whereas the timing of spikes primarily depends on spike generation dynamics. For neurons operating in stochastic resonance mode, each single spike is informative about the degree to which a stimulus matched the filter-properties of the neuron [16,18]. This (classic) assumption is only true for systems with Poisson type spike generation, for which separate SITA curves match the overall STA. The direct relation between inter-spike interval duration and stimulus statistics, as it is shown in the different SITA curves for catfish electroreceptors, generalizes this notion for quasi-periodic spike generation.
Our objective and approach in the current study is different from recent decoding studies that employ, for instance, the GLM framework with optimized post-spike feedback [31,32]. These studies have been highly successful in reproducing and predicting response properties of retinal ganglion cells, including the details of spike patterns and complex neuronal network effects. Our objective was different and two-fold: 1) to develop a simple analysis technique that allows us to estimate the impact of spike generation on Spike-Triggered Ensemble (STE) data, and 2) to explain the observed complex response behavior with a simple, though physiologically realistic, model. In the case of catfish electroreceptors this provides a valuable new insight in their functional architecture. In catfish electroreceptors the correlation between spikes and preceding stimuli is to a large extent determined by the interspike interval rather than just by the timing of spikes. It reveals the quasi-periodic nature of spike generation, which does not conform to a simple rate transformation. Applying SITA analysis thus reveals a major nonlinearity and greatly helps in elucidating the mechanisms underlying electroreceptor response properties.
It is often thought that strong interactions between linear input filters and spike-generation dynamics hamper the extraction of useful information from spike trains. However, rather than presenting a nuisance that hinders the decoding of spike trains, a quasi-periodic spike generator might actually offer several important advantages to the animal. Firstly, in contrast to the stochastic resonance mode it requires no additional noise to allow for threshold-free detection. Since intrinsic properties drive the afferent to cross the threshold, noise levels can be minimized. Secondly, because no excitations are required to reach threshold, quasi-periodic spiking allows for a detection mechanism that is equally efficient for excitations and inhibitions. Finally, it does not require estimating spike occurrence probabilities. In contrast to a standard rate code [40] it provides information at the shortest possible time delay of a single interspike interval. As such, it provides a continuous and instantaneous estimate of how much the input signal resembles the shape and polarity of a specific temporal stimulus pattern. It remains an open question, however, to what extent and how such information is used in generating representations at higher processing levels.

Figure 4 caption (interval classes as in Figure 1): Thick lines correspond to measured data, thin lines to predictions based on linear summation of separate and independent SITAs for the two consecutive intervals. In calculating the linear sum of the SITAs for the first of the two intervals we used a time shift equal to the mean interval duration for the second interval (black horizontal line in insets). Linear predictions and actual measurements are highly similar, indicating that adding a second interval to the analysis provides no information that was not already present in the single interval analysis. doi:10.1371/journal.pone.0032786.g004
In the vestibular system, Sadeghi et al. [29] have studied information transmission by regular and irregular afferents. Despite lower gains, regular afferents transmitted more information than irregular afferents. This may very well correspond to a different neural code, comparable to what we demonstrate for regularly firing electroreceptors. Information transmission in regular vestibular afferents was found to be highly sensitive to jittering the timing of spikes. At first thought, this may seem to contradict the importance of spike interval duration over mere spike timing, but jittering individual spikes of course also affects spike interval durations, especially in regularly firing units. An effect of jittering spike timings is therefore not incompatible with spike interval coding as suggested by SITA analysis.
The LIF model includes both increments and decrements of the current driving the afferent membrane potential to threshold. Since there are no indications of inhibitory synapses [41] in catfish electroreceptors, we must assume that the synapse is continuously active. Positive stimuli then increase and negative stimuli decrease the rate of neurotransmitter release. Model simulations, however, show that these modulations are relatively small compared to the currents that are responsible for recovery of the afferent membrane potential after a reset. Tonic neurotransmitter release is therefore unlikely to be the main driving force for spontaneous activity. Instead, spontaneous activity mainly results from intrinsic dynamics of the spike generator. This would explain why sensitivity and spontaneous activity are not directly related [12]. Our model suggests that intrinsic properties of the spike generator account for spontaneous activity while synaptic activity modulates the speed of recovery after a spike.

Figure 5 caption: Model predictions were based on simulations with model parameters that were obtained by fitting the model to data from a standard reverse correlation experiment (third row of data, amplitude of 1). Noise amplitudes were varied by a factor of two between successive rows. Model simulations and actual measurements show very similar effects. At small stimulus amplitudes, SITAs for long and short intervals have similar shapes and comparable latencies. At higher stimulus amplitudes, shapes and latencies for different interval classes change drastically. Typically, inhibitory deflections become delayed relative to excitatory deflections and they may generate a short-latency excitatory peak. doi:10.1371/journal.pone.0032786.g005
Our model analysis further reveals that for regularly firing neurons the spike triggered average may not provide an accurate estimate of receptor filter properties. SITA curves and their resulting overall STA are significantly affected by spike generation dynamics. Because the STA is the average of SITA components that are polarity inverted and slightly shifted in time, STA shape and latency do not reflect filter properties in a simple, straightforward manner. Our analysis also reveals that neuronal filter properties cannot be recovered by selecting only the long interspike intervals [42]. In the quasi-periodic spike generation regime this would result in extreme errors. Also in the noise driven regime (see Figure S1) the selection of a subset of interspike intervals may drastically alter estimates of filter properties. The extensive SITA oscillations for long interspike intervals in particular are not related to any filter properties (see model simulations). Separating STAs into contributions from long and short spike intervals reveals how, and to what extent, spike generation mechanisms affect the spike-triggered response ensemble. SITA analysis thus provides the additional information required to separate filter kernels from spike generation dynamics.
For catfish electroreceptors the spike history effects are huge. Similar effects probably play a significant role in spike-history dependent variations of average current trajectories preceding spikes in rat motoneurons [43]. Large effects, albeit of a very different nature, may also be observed for neurons operating in, for example, a spike-bursting regime [19,20]. For other systems, as illustrated here with cat retinal ganglion cells, the implications of spike generation may be less surprising. Data for cat retinal ganglion cells (Figure S1) do not show the pronounced reversals that we saw for electroreceptors, indicating that for these cells and conditions, spike generation is primarily stimulus(/noise)-driven. We simulated this type of behavior with a large input gain, and a threshold above the equilibrium potential. It should be noted though that for high input gains the exact threshold level becomes quite irrelevant because, relatively speaking, stimulus-induced fluctuations are much larger. In this regime the Filter-LIF model's behavior does not substantially differ from a Linear-Nonlinear Poisson (LNP) model. This is in line with the high predictive value of the latter type of model, or the GLM framework that includes history dependence [31,32] and network effects for e.g. ganglion cells [31]. SITA analysis may however still be very useful for identifying effects of spike generation. In combination with model simulations it provides insight into the mechanisms that cause variations in the STE. The SITA analysis therefore provides additional information that cannot easily be recovered by e.g. STC analysis [14,33,44] or nonlinear Volterra or Wiener kernel analyses [34,35,36,45,46,47].
The SITA technique may elucidate operating modes in a wide range of nonlinear dynamical systems. It provides a simple analysis method to distinguish quasi-periodic transitions from stimulus- and noise-driven transitions. Moreover, it reveals when and how recent events play an important role in the generation of a future event, a question central to many systems ranging from low-level sensors to high-level mechanisms underlying e.g. binocular rivalry [48,49].
Recordings
We recorded from ampullary electroreceptors in the skin of the Brown Bullhead Catfish (Ictalurus nebulosus), a passively electric fish that uses electroreceptors to sense electrical fields such as those generated by potential prey [7]. The electroreceptors consist of a group of sensory cells (10-20) in the lumen of an ampul, which make excitatory synapses onto one or two afferents [6]. Spikes from single afferents were recorded by placing the tip of a tungsten microelectrode near the opening of an ampul. The electroreceptors have a maintained discharge rate of approximately 50 spikes/s and respond nearly linearly, and with band-pass characteristics, to small-amplitude sinusoidal stimuli [11].
Anesthesia was induced by 4 mg/l Ethomidate, dissolved in water, and maintained by half of this concentration in the experimental setup. Ethomidate blocks central processing and conveniently immobilizes the animal, without blocking responses of the peripheral nervous system. Animals were artificially respirated with a water flow of about 100 ml/min. Experiments lasted up to 6 hours, after which the animals were transferred to a recovery chamber where artificial respiration was maintained during recovery. All experimental procedures and animal handling were in line with University regulations and approved by the University's animal experiment review committee (Approval ID 2008.I.06.043, Dierexperimentencommissie Academisch Biomedisch Centrum, Utrecht University). Recordings were obtained in 20 catfish, weighing 200-750 g. During the recordings, the fish were held in a perspex tray in which rubber clamps gently pushed their head up onto a nose-rest through which aerated water was supplied. An adjustable overflow was used to control the water level and return excess water to the aquarium. Experiments were performed at a water temperature of 20 degrees Celsius. To avoid spike-sorting problems we only selected electroreceptors with single afferents for our recordings.
An Apple G4 computer with a National Instruments PCI 1200 data-acquisition board controlled the experiments. Custom-written software (in C) was used to simultaneously generate stimuli, record spikes and stimuli, provide online data analysis, and store all information for further offline analysis. Spike times were obtained at a temporal resolution of 2,000 Hz.
Stimuli and Stimulus protocol
Stimuli consisted of low-frequency (0.1-100 Hz) fluctuations of electrical potential and were generated at a sample rate of 1,000 Hz. Computer-generated stimuli ranged in amplitude between −5 and 5 Volts at 12-bit resolution and were reduced to a suitable amplitude range at the site of the electroreceptor by means of attenuating resistor circuits adjustable from 0 to 80 dB in steps of 1 dB. A voltage-to-current converter was used to render stimulation currents independent of resistance. Stimulation currents passed through a 1 cm² area of skin surface, located at the dorsal head region. They were applied by means of a circular stimulation electrode, placed about 1.5 mm above the skin and surrounded by an insulating rubber ring that prevented leakage of stimulation current directly to ground (surrounding medium). The rubber ring prevents any direct contact of the stimulation electrode with the skin, mimicking natural stimulation as well as possible. To assure perfect correspondence of stimulus and response timings, computer-generated stimuli were re-recorded by feeding output signals back into the AD converter.
Once a stable recording of sufficient signal-to-noise ratio was obtained, we first adjusted the attenuator box to a level where full-range sinusoidal modulation produced a response amplitude roughly equal to the mean firing rate of the cell (about 50 spikes/s). Within this range response amplitudes vary linearly with stimulus amplitude. No attempt was made to calibrate the absolute strength of the stimulus for each electroreceptor and stimulus strengths are therefore reported in arbitrary units (a.u.). For each electroreceptor we measured frequency transfer properties with sine wave stimuli, reverse correlation functions with white noise, and, if time allowed, several additional experimental variations. Frequency transfer properties with sinusoidal stimulus modulations were measured in the range of 0.5 to 50 Hz, in 20 or 30 logarithmically spaced steps. Trials lasted 2 seconds and were separated by 0.2 s inter-stimulus periods without stimulation. The order of stimulus presentations was randomized within repetitions and we typically recorded stimuli for a total duration of at least 30 s. In some of the experiments, we extended the frequency range to 0.1-100 Hz and increased trial durations to 10 seconds. Comparison of data obtained for single receptors in different protocols showed no differences due to this increased frequency range. In a subset of the recordings, we also measured frequency transfer properties at a range of sinusoidal amplitudes.
In reverse correlation experiments the stimuli consisted of white noise, generated at 1,000 Hz and passed through a single first-order low-pass filter (in software) with a corner frequency of 50 Hz. Filtering increased the power in the appropriate frequency range and transformed white noise into Gaussian white noise. The 50 Hz high-frequency limit for the noise stimulus roughly corresponds to the high-frequency limit for electroreceptor responses. As a control, we measured reverse correlations at different cut-off frequencies for the low-pass stimulus filter. At cut-off frequencies below 50 Hz, we observed significant changes in reverse correlation functions. Data for 50 Hz were however identical to those obtained for 100 Hz. White noise responses were recorded in trials of 30 seconds. We typically used 10 different seeds and repeated trials 5 times. This resulted in a total recording time of 25 minutes, yielding roughly 75,000 spikes.
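For illustration, a noise stimulus of this kind could be generated as sketched below; the discrete first-order filter form and the unit-variance normalization are assumptions, since absolute amplitudes were set by the attenuators rather than in software.

```python
import numpy as np

def lowpass_noise(duration_s=30.0, fs=1000.0, corner_hz=50.0, seed=0):
    """White noise passed through a single first-order low-pass filter (50 Hz corner),
    as used for the reverse-correlation stimuli."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(int(duration_s * fs))
    alpha = 1.0 - np.exp(-2.0 * np.pi * corner_hz / fs)   # discrete first-order coefficient
    out = np.empty_like(white)
    y = 0.0
    for i, x in enumerate(white):
        y += alpha * (x - y)          # y[n] = y[n-1] + alpha * (x[n] - y[n-1])
        out[i] = y
    return out / out.std()            # unit variance; the absolute level was set by the attenuators
```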
Data analysis
Mean spike rates, response amplitudes and phases for sinusoidal stimulation were obtained by fitting a sinusoidal function to the Post Stimulus Time Histogram (PSTH) at the frequency of stimulation. For phase calculations we also fitted a sinusoid to the recorded stimulus and subtracted the resulting phase shift (if any) from the response phase shift. For analyzing the spike-triggered ensemble, spikes were grouped into 5 classes containing equal numbers of spikes, based on the cumulative inter-spike interval distribution (Fig. 1C). Spike Triggered Averages (STAs) were then calculated for each interval class separately, thus creating Spike Interval Triggered Averages (SITAs). Obviously, the mean value of these individual SITAs is the conventional STA. We divided spikes into 5 equally sized classes, each consisting of about 15,000 spikes. Our choice of five classes is rather arbitrary. As a control we also calculated SITAs for larger numbers of classes. Increasing the number of classes results in a higher resolution for the variation with interval duration, but the main effects remain similar. Initial data analysis, including the construction of PSTHs and SITAs, was done in C. Further analysis and comparison of experimental and modeling data was done in MATLAB (The MathWorks, Natick, MA).
Modeling dynamic interactions
To show how the observed SITAs may arise from dynamic interactions between linear filtering and a nonlinear dynamic spike generator, we combined these effects in a simple, quantitative model (Fig. 3A). The model consists of a linear filter in combination with a spike generator of the leaky-integrate-and-fire (LIF) type, which is a common simplification of the Hodgkin-Huxley model for excitable membranes [26,27,28]. In contrast to a Poisson spike generator, a LIF model neuron includes the essential dynamics of spike generation that we wish to incorporate. The dynamics of the membrane potential V(t) are expressed in Equations 1 and 2 and include a membrane time constant τ and a spike generation threshold h.
The N_2(t) term describes a small uniform noise with a fixed RMS amplitude (±10). This noise source was introduced to obtain realistically smooth firing-rate functions. The main drive term I(t) is the filter output, defined formally as the convolution of the sensor's filter impulse response F(t) with a mixture of the stimulus S(t), with gain g_S, and a front-end Gaussian noise term N_1(t), with gain g_N (Equation 3).
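Equations 1–3 themselves are not reproduced in this excerpt; a plausible form consistent with the description above (leaky relaxation toward a resting level, reset to a low level at threshold h, and a drive given by the filter convolved with scaled stimulus plus noise) would be:

```latex
% Hedged reconstruction, not the paper's exact Eqs. (1)-(3):
\tau \frac{dV(t)}{dt} = -\bigl(V(t) - V_{\mathrm{rest}}\bigr) + I(t) + N_2(t),
\qquad V(t) \ge h \;\Rightarrow\; \text{spike and } V(t) \leftarrow V_{\mathrm{reset}},
\qquad I(t) = F(t) \ast \bigl[\, g_S\, S(t) + g_N\, N_1(t) \,\bigr].
```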
This noise term (N_1) proved essential for reproducing proper interval distributions for both spontaneous and input-driven activity, as well as for scaling SITAs independent of the mean firing rate and specific SITA shape. We modeled the system's band-pass filter properties with a series of first-order high-pass and low-pass filters, representing both filtering in receptor cells and in the synapse onto the afferent axon. The high-frequency fall-off is modeled with three first-order low-pass filters with the same corner frequency. The high-pass part, formally just a single 'fractional-order' stage [11,50], can in our case be approximated by five in-parallel first-order stages, each described by a differential equation for dY_i(t)/dt. Finally, the model includes a time delay in the order of several milliseconds that allows for frequency-independent delays. These include any delays between spike initiation and registration, e.g. due to the use of a window discriminator and/or threshold detection in the software.
Model simulations were run at 2,000 Hz, similar to the resolution of experimentally recorded responses. For numerical simulations, we used first order Euler integration with a time step of 0.5 ms, which proved sufficiently accurate. Model simulations were run using the same software, the same stimuli and the same procedures as for the physiological experiments. Model data were saved in the same format as the recorded data and analyzed using the same analysis routines.
We used the nonlinear fit procedure STEPIT [51] to fit the model to the data. The LIF spike generator had two free parameters: the time constant τ and the threshold level h. The formal parameters in F(t) were all tied to just 3 free parameters: the low-pass timescale, and two parameters that effectively determine the high-pass timescale and spectral slope (see Equations 6 and 7). The fit error was calculated as the sum of squared differences between experimental and model SITAs for 5 different interval classes, similar to the curves plotted in Figure 1B. In addition, we added a small error term based on the difference in mean spike rates between model and experimental data. This assured that both the mean spike rate and SITAs were fitted correctly. Initial parameter values for the model fit were first estimated by trial-and-error. Correct optimization was checked by restarting the fit procedure with different starting values. We did not analyze the reliability or confidence intervals for estimated parameters, because we were interested in the model's dynamic behavior rather than parameter quantification.

Figure S1. SITAs for cat retinal ganglion cells. Responses of ganglion cells were recorded in the optic tract of anesthetized and paralyzed cats. Stimuli consisted of binary, dark-light pixel arrays. The array measured 16×16 pixels and fully covered the cell's receptive field. Each pixel was modulated between light and dark levels in a unique random order. Stimuli were updated every second frame on a 100 Hz CRT display in front of the cat. Experimental and surgical procedures have been described in detail in previous publications [52,53] and were in line with national and international guidelines. Reverse correlations were constructed for each pixel separately. Dark values were represented by a value of zero, light pixels by a value of 1, and correlograms describe the mean value at different stimulus-spike intervals, aligned with all spikes at time 0. The examples given correspond to a single pixel in the center of the receptive field of 8 OFF-center cells and 2 ON-center cells. Apart from an inversion, due to the excitation by light-off for an OFF-center cell, the pattern of results strongly resembles that seen in Fig. 3C. Notice that no attempt was made to fit model parameters in Fig. 3C to actual data. In Fig. 3C all parameter values were chosen equal to those in 3B, except for the spike threshold value and stimulus amplitude. It can be seen that SITAs for cat retinal ganglion cells show behavior similar to what is predicted by the stimulus-driven regime: a lack of inversions at short stimulus-spike intervals, combined with large oscillations for long inter-spike intervals. We refer to this regime as the stimulus-driven regime, because spike timings are largely determined by the filter output and to a lesser extent by the dynamics of spike generation. The present findings mainly concern the alternative, quasi-periodic regime, where spike generation is the dominant factor. (TIF) | 2018-05-08T18:30:25.534Z | 2012-06-12T00:00:00.000 | {
"year": 2012,
"sha1": "46d33572f9bebbcff5d94a8c56bffab950cfe392",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1371/annotation/1997d0fd-63cf-4593-877b-604c3e7b524f",
"oa_status": "GOLD",
"pdf_src": "Grobid",
"pdf_hash": "f338a0de244e3d6d7028e123cc8485eae3a9a428",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
28928629 | pes2o/s2orc | v3-fos-license | Influence of muscle fiber type composition on early fat accumulation under high-fat diet challenge
Objective To investigate whether differences in muscle fiber types affect early-stage fat accumulation under high-fat diet challenge in mice. Methods Twelve healthy male C57BL/6 mice underwent short-term (6-week) diet treatment for the evaluation of early pattern changes in muscular fat. The mice were randomly divided into two groups: high-fat diet (n = 8) and normal control diet (n = 4). Extra- and intra-myocellular lipid (EMCL and IMCL) in lumbar muscles (type I fiber predominant) and the tibialis anterior (TA) muscle (type II fiber predominant) were determined using magnetic resonance spectroscopy (MRS). Correlation of EMCL, IMCL and their ratio between TA and lumbar muscles was evaluated. Results EMCL increased greatly in both muscle types after high-fat diet. IMCL in TA and lumbar muscles increased to a much lower extent, with a slightly greater increase in TA muscles. EMCLs in the 2 muscles were positively correlated (r = 0.84, p = 0.01), but IMCLs showed a negative relationship (r = -0.84, p = 0.01). In lumbar muscles, high-fat diet significantly decreased type I fiber while it increased type II fiber (all p≤0.001). In TA muscle, there was no significant fiber type shifting (p>0.05). Conclusions Under short-term high-fat diet challenge, lipid tends to initially accumulate extra-cellularly. In addition, compared to type II dominant muscle, type I dominant muscle was less susceptible to IMCL accumulation but more susceptible to fiber type shifting. These phenomena might reflect compensatory responses of skeletal muscle to dietary lipid overload in order to regulate metabolic homeostasis.
Introduction
Obesity is characterized by fat accumulation in many sites such as the liver, heart and skeletal muscles [1]. It is well studied that regional fat accumulation within skeletal muscle correlates with insulin resistance independently of visceral fat accumulation and total body fat mass in humans [2,3]. Therefore, exploring patterns of regional fat accumulation may help better stratify obesity into sub-types and predict their corresponding metabolic consequences. The buildup of skeletal muscle adiposity is known to be linked to excess dietary lipids [6], but little is understood about the impact and complications of fiber type composition, which is characterized by myosin heavy chain isoforms [7]. Type I fibers twitch slowly and are predominantly red tonic muscles. Type II fibers are predominantly white muscles and optimized for fast movements. The two different fiber types have different oxidative abilities. As fat plays an important role in providing energy through oxidation, the generalization of two fiber types as a single word "muscle" may hinder the understanding of regional lipid accumulation patterns in response to diet. Moreover, these muscle lipids can be further classified into two major pools: (i) EMCL, which resides in adipocytes among the muscle fibers and (ii) IMCL, the intra-myocyte lipid content [4]. It has been suggested that the two lipid pools may have different metabolic roles [5]. Therefore, by separating EMCL and IMCL, a better understanding of "muscle lipids" and their accumulation patterns could be achieved.
MRS can differentiate IMCL and EMCL by their characteristic chemical shifts. Lipids inside the cytosol (IMCL) form small droplets and resonate at 1.3 ppm, independent of the relative orientation to the external magnetic field B0. Lipids in extracellular adipocytes (EMCL) experience slightly different local magnetism, which shifts their resonance. Their chemical shift is orientation dependent, and is maximal when the muscle fiber is placed parallel to the direction of B0; the protons on the methylene groups resonate at 1.5 ppm for EMCL (+0.2 ppm from IMCL). With adequate resolution, this separation allows quantification of the two different lipid pools [4].
To better understand the role of fiber types in fat accumulation, we examined lipid deposition in skeletal muscles under short-term high-fat diet (HFD) in C57BL/6 mice, a well-accepted model for many aspects of human obesity and metabolic syndrome. Because of the size limitation in skeletal muscles of mice, lumbar muscles and tibialis anterior (TA) were chosen to generate enough MRS signal. It is also technically challenging to find skeletal muscle with a pure fiber type. Therefore, we focused on the effects of fiber composition at the early stage of HFD challenge. Lumbar muscles refer to muscles around the lower spine, which consist of type I fiber predominant muscles including the multifidus, interspinales and rotatores muscles [8]. TA muscles are located on the lateral side of the tibia in the leg, and >95% of muscle fibers of TA are type II [9]. In this work, we demonstrate the first MRS application to IMCL/EMCL quantification in mouse spinal lumbar muscles, and present novel comparisons to TA muscles. We hypothesized that muscle fiber composition may affect lipid accumulation patterns in response to diet.
Material and methods
Healthy male C57BL/6 mice (n = 12) entered the study at 4 weeks old. Mice were randomly divided into two groups: one (n = 8) was treated with a 60 kcal% HFD for 6 weeks, while the rest (n = 4) remained on a normal control diet (NCD) for the same amount of time. The purpose of the NCD group was to demonstrate the effectiveness of the HFD and to serve as a control clarifying that lipid accumulation was induced by diet. In this work, we did not intend to explore the original lipid distribution pattern in the NCD group. At the end of dietary treatment, MRS was performed in mouse liver, TA and lumbar muscles in accordance with guidelines approved by the Institutional Animal Care and Use Committee of Boston University. The mice were then sacrificed, and the weights of their whole body, liver, perigonadal fat, TA and soleus muscles were recorded.
MRS experiment
MRS experiments were performed with a Bruker 11.7 T Avance spectrometer (Billerica, MA). The mice were anesthetized with 0.5-2% isoflurane and carefully stabilized to achieve parallel alignment of the investigated muscles to the B0 field. Scout images in the top row (Fig 1A, Fig 1B) were acquired using a RARE sequence with: repetition time (TR) = 2500 ms, echo time (TE) = 6.5 ms, rare factor = 8, slice thickness = 0.5 mm. In TA and lumbar muscles: matrix = 192×192, in-plane resolution = 0.156×0.156 mm²; in the liver: matrix = 128×128, in-plane resolution = 0.234×0.234 mm². In the top row of Fig 1, the axial and sagittal views of the leg (Fig 1A) and the abdominal region (Fig 1B) are used for the geometric planning of the MRS acquisition. The MRS acquisition voxel, indicated by the purple boxes in the top row (Fig 1A, Fig 1B), was carefully placed in the target regions to avoid signal contamination from large fat depots or major vessels. Local shimming was performed before data acquisition. Spectroscopy data were acquired using the PRESS sequence: TR = 2500 ms; TE = 8.671 ms; bandwidth = 5 kHz; sample number = 2048. The voxel sizes were 0.8×0.8×3.5 mm³, 1.2×1.2×3 mm³, and 2.5×2.5×2.5 mm³ for TA, lumbar muscles and liver, respectively. For muscles, the water signal was suppressed using VAPOR and 1024 averages were used; for the liver, 128 averages were used without water suppression.

Figure 1 caption: Representative spectra of TA (a) and lumbar (b) muscles are shown. In both 1a and 1b, scout images (top row) indicate the location of voxels (purple boxes) in both the sagittal and axial view of leg or spinal muscles. Representative spectra (2nd row, raw data) are presented for mice in both the normal-control-diet (NCD, left) and high-fat-diet (HFD, right) groups; raw spectra were analyzed in j-MRUI software to obtain individual fitted components (3rd row, fitted data). The differences between raw data and fitted data are shown in the 4th row (residue). Cr indicates the creatine peak (3.02 ppm), used as reference to measure intramyocellular lipid (IMCL, 1.3 ppm) and extramyocellular lipid (EMCL, 1.5 ppm). (c) Comparison of fat accumulation with and without HFD; * indicates p<0.05. The amounts of IMCL/EMCL were recorded as a ratio to the Cr peak, hence they carry no unit (y-axis). Representative histology (d) verifies the increases of EMCLs in both TA and lumbar muscles in the HFD group. The pale spaces (indicated by "*") between red-stained cells mark the presence of EMCLs, whereas IMCL is not sensitive to the histological staining.
Spectroscopic data were processed using j-MRUI [10]. In muscles, the spectra in the 2nd-row raw data (Fig 1A, Fig 1B) were filtered using the HLSVD algorithm to remove residual water signal. The individual spectral components in the 3rd-row fitted data (Fig 1A, Fig 1B) were computed with the AMARES method [11]. EMCL/IMCL concentrations were expressed as methylene proton signal relative to the creatine peak (3.02 ppm) [12] for cross-sample comparison. Therefore, the amounts of EMCL/IMCL are reported as unitless. In the liver, ICL was expressed as -CH2 protons relative to the water proton peak at 4.7 ppm. The ratio of extra/intra-myocellular fat (EIR) was calculated as EMCL divided by the corresponding IMCL in the same voxel.
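A trivial sketch of this quantification step is given below, with made-up peak amplitudes and an illustrative function name; the actual peak fitting was performed with AMARES in j-MRUI.

```python
def muscle_lipid_ratios(amp_imcl, amp_emcl, amp_creatine):
    """Express the IMCL (1.3 ppm) and EMCL (1.5 ppm) methylene amplitudes relative to the
    creatine peak (3.02 ppm) and compute the extra/intra-myocellular ratio (EIR)."""
    imcl = amp_imcl / amp_creatine    # unitless, relative to Cr
    emcl = amp_emcl / amp_creatine
    eir = emcl / imcl                 # identical to amp_emcl / amp_imcl
    return imcl, emcl, eir

# Made-up fitted peak amplitudes, for illustration only:
imcl, emcl, eir = muscle_lipid_ratios(amp_imcl=0.9, amp_emcl=2.3, amp_creatine=0.5)
```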
Histology
Samples of TA/lumbar muscles from each mouse were fixed in 10% formaldehyde for 3 hours and stored in 70% ethanol at 4˚C until further processing. Tissues were then processed and embedded in paraffin. Five-micrometer-thick tissue sections were stained with hematoxylin/eosin. Immunohistochemistry was performed as previously described using an Anti-Fast Myosin Skeletal Heavy chain antibody (Abcam, ab91506) and an Anti-Slow Skeletal Myosin Heavy chain antibody (Abcam, ab11083) [13]. Fibers positive for each type in the entire section were counted and quantified with ImageJ [14]. Triglyceride in muscle was biochemically determined using kit assays (Sigma).
Statistical analysis
Data are presented as mean±SD (standard deviation). Student's t-test was performed to compare the HFD and NCD groups. Pearson correlation was used to determine relationships between parameters. The criterion for statistical significance was p<0.05.
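These comparisons map directly onto standard routines; the sketch below uses made-up values and placeholder array names purely for illustration.

```python
import numpy as np
from scipy import stats

# Made-up example values: one array per diet group for a given fat depot.
emcl_ta_hfd = np.array([5.0, 4.2, 6.1, 4.7, 5.6, 3.8, 4.5, 5.2])
emcl_ta_ncd = np.array([0.4, 0.5, 0.3, 0.4])
t_stat, p_val = stats.ttest_ind(emcl_ta_hfd, emcl_ta_ncd)    # Student's t-test, HFD vs NCD

# Pearson correlation between two depots measured in the same HFD mice (made-up values).
emcl_lumbar_hfd = np.array([26.0, 22.5, 30.1, 24.0, 27.3, 20.8, 23.1, 26.4])
r, p_corr = stats.pearsonr(emcl_ta_hfd, emcl_lumbar_hfd)
significant = p_val < 0.05                                    # significance criterion used in the study
```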
Results
All 12 mice were successfully examined by MRS. However, one data point from the EMCL in lumbar muscles was excluded due to outlier analysis. Representative spectra of TA/lumbar muscles were shown in Fig 1A and 1B. In all spectra, the IMCL peaks were determined by the methylene proton peak at 1.3ppm, and EMCL at 1.5ppm. In the liver, the ICL peak was determined by a methylene proton peak at 1.3ppm.
HFD significantly increased body and liver weight, as well as perigonadal fat (all p<0.001). The TA muscle of HFD mice (40.7±2.6 g) was lighter than that of NCD mice (55.3±2.4 g, p<0.01), whereas soleus muscle weight was preserved (p = 0.36). Characteristics of the mice and the average amounts of the fat depots are summarized in Table 1. Among all three intracellular fat depots, only ICL (liver) showed a significant increase with the HFD compared to the NCD (p = 0.01). The amount of IMCL in TA increased ~2-fold but was not statistically different (p>0.05) between the two groups (Fig 1C). In lumbar muscles (Fig 1C), the IMCLs were similar with (2.0±0.4) or without (1.7±0.3) HFD. The EMCLs were significantly elevated with HFD in both types of muscles. In TA, the average EMCL in the NCD group was 0.4±0.1, and in the HFD group it increased ~12-fold to 5.0±1.0 with p<0.01 (Fig 1C, Fig 1D). In lumbar muscles, the EMCL increased even more, ~66-fold, with p<0.001 (Fig 1C, Fig 1D). The total fat (IMCL+EMCL) showed significant increases (all p<0.01) comparing the NCD and HFD groups in both muscle types. As for EIR, there was no significant difference between NCD and HFD for TA muscles, whereas in lumbar muscles EIR significantly increased after HFD (p<0.01). Representative histological data confirmed that EMCL accumulated to a greater extent in lumbar muscles than in TA muscles (Fig 1D). MRS data also made it possible to assess relationships between different fat depots. Anatomically adjacent IMCL and EMCL were not correlated with each other (all p>0.05). However, the results for distant fat depots were intriguing, especially in the HFD group. EMCLs in both muscles were highly correlated with r = 0.84 and p = 0.01 (Fig 2A), but we found an unexpected negative correlation with r = -0.84 and p = 0.01 (Fig 2B) of IMCLs between TA and lumbar muscles. Another interesting finding is that EIR of lumbar muscles showed a stronger correlation to distant intracellular fat (TA IMCL: r = 0.96, p<0.001; liver ICL: r = 0.77, p = 0.03) (Fig 2C, Fig 2D), as compared to that of EIR in TA muscle (lumbar IMCL: r = 0.75, p = 0.04; liver: p = 0.61).
Skeletal muscle may shift fiber composition in response to physiological changes such as HFD and exercise training [15][16][17]. We discovered that fiber shifting may relate to original fiber composition. In lumbar muscles, HFD significantly decreased type I fiber while it increased type II fiber (Fig 3A, Fig 3B). In TA muscle, there was no significant fiber type shifting (all p>0.05). Interestingly, the increased type II fiber percentage significantly correlated to IMCL in lumbar muscle of HFD mice with r = 0.80 and p = 0.03 (Fig 3C).
Discussion
Recent studies have shown that lipid distribution plays an important role in health consequences [18][19][20]. In this study, we focused on the impact of fiber composition on the early development of muscular adiposity. In the same muscle, EMCL and IMCL are anatomically close to each other, yet these distinct pools showed different initial responses to HFD. Similar to previous studies with mice [21,22], our short-term HFD treatment increased EMCL in both TA and lumbar muscles, yet surprisingly had limited effects on IMCL, which suggests that fat tends to accumulate first in extramyocellular adipocytes. The strong correlation between EMCLs in distant muscles suggests that extracellular adipocytes share similar properties and work as lipid storage buffers, independent of muscle fiber type and location, in the early phase of metabolic disorder. It is interesting that IMCL is initially unaffected by HFD. Short-term HFD increases the gene expression of proteins involved in the β-oxidation pathway [23]. Hence, it is possible that skeletal muscles may resist lipid stress through lipid consumption, yet this ability may vary depending on fiber composition. We found that IMCL in lumbar muscles remained at a similar level after the short HFD while TA showed a ~2-fold elevation. Compared to the TA muscle, the majority of the lumbar muscles, including the multifidus, interspinales, and rotatores muscles, are dominated by type I fibers [8], which contain large quantities of mitochondria and utilize fatty acids through β-oxidation as a major energy source. The higher β-oxidation capacity of lumbar muscles may contribute to their tolerance for IMCL accumulation in the early phase of obesity. However, it is also evident that lumbar muscles start shifting toward less oxidative fibers after HFD, so their tolerance to lipid flux may be mitigated later. In fact, we observed that their level of fiber shifting was significantly related to IMCL. Further work is needed to clarify whether fiber shifting is the result or the precursor of impaired β-oxidation. Ideally, skeletal muscles in anatomic proximity, for example soleus (type I preference) versus TA (type II preference), should also be investigated to mitigate location influences on fat accumulation. However, highly obese mice are too large to fit into the probe, and hence the size of the mice limited our choice of muscles. Another limitation of our study is that MRS was acquired in only one voxel for each targeted muscle per mouse. Hence, a selection bias may exist in mice, and future studies in large animals or humans will be valuable.
In conclusion, our study demonstrated that fiber composition may affect skeletal muscle responses to excess dietary lipid at an early stage. At this stage, oxidative type I-dominant muscles showed more resistance to lipid stress, yet were more susceptible to fiber shifting. This suggests that muscle may compensate for systemic metabolic overload through regulation of fat depots and fiber type shifting.
Author Contributions
Conceptualization: Ning Hua, Hirokazu Takahashi, James A. Hamilton. Comparison of fiber type percentage in lumbar muscles between normal-control-diet (NCD) and high-fat-diet (HFD) mice. Type I fiber significantly decreased (p<0.001) and type II fiber significantly increased (p<0.001) in HFD mice (C) Correlation between intramyocellular lipid (IMCL) and percentage of type II fiber in lumbar muscle of HFD mice. IMCL positively correlates to percentage of type II fiber in HFD mice (r = 0.80, p = 0.03). https://doi.org/10.1371/journal.pone.0182430.g003 Fiber composition effects on EMCL/IMCL in muscles:Quantification by MRI and MRS | 2018-04-03T01:17:22.186Z | 2017-08-01T00:00:00.000 | {
"year": 2017,
"sha1": "ab9c9b92443ba6f8774c5c239c9ca53c86330427",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0182430&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ab9c9b92443ba6f8774c5c239c9ca53c86330427",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
259743128 | pes2o/s2orc | v3-fos-license | DESIGN BUILD COMPETITION STRATEGY TO IMPROVE PROJECT EXECUTION IN PT KILANG PERTAMINA INTERNATIONAL
ABSTRACT
INTRODUCTION
The refining and petrochemical operations of PT Pertamina (Persero) are managed by its subholding, PT KPI, which is responsible for refining crude oil into petroleum and petrochemical products at its operating refineries as well as for new and ongoing projects (Prabatha & Handoyo, 2022). PT KPI has prepared several priority projects, namely the NGRR, RDMP and other development projects, which answer the strategic challenge of sustaining the refining business going forward and making Pertamina's refineries world-class, competitive refineries with top-quartile performance (Kemenko Bidang Perekonomian, 2022).
Based on PT KPI's previous experience, change orders and cost overruns may occur due to an unclear/undefined scope of work, inadequate FEED documents, updated licensor documents, etc. In terms of project delay, delays occur due to additional time for EPC bidding, project overrun during EPC, etc. Within the construction industry there are several project delivery systems that owners may choose to complete their project (DBIA, Design Build Done Right, April 2015): traditional design-bid-build, design-build, construction management at risk, job order contracting, and multiple prime contractors. Considering the nature of the oil and gas industry and benchmarking against other similar companies, these delivery systems were adjusted, and the following options can be chosen: Design Bid Build (DBB), Dual FEED Competition (DFC), Design Build Competition (DBC), and Lump Sum Turnkey (LSTK). DBB is a conventional or traditional technique that entails engaging an architect or engineer to create the project's plans and specifications, then advertising the project for bids and choosing a contractor to carry out the project in accordance with the plans and specifications (Ghadamsi & Braimah, 2016; Gransberg & Maraqa, 2022). Design, bidding, contract award, construction, and closeout are the traditional five steps of the DBB process (Papajohn et al., 2020; Shrestha et al., 2020; Touran et al., 2009). During the design phase, the owner engages an architect or engineer to develop the project's designs, specifications, and construction documents (Porwal & Hewage, 2013). The design team and the owner collaborate intensively at this stage to create a thorough plan that satisfies the owner's requirements and financial constraints (Zhao et al., 2015). The design group produces a collection of plans and specifications that serve as the basis for the project. During the contract award phase, the owner evaluates the offers, chooses the contractor to complete the work, and signs a contract with the chosen contractor. The project's scope, budget, and timeframe are negotiated between the owner and the contractor at this phase.
DFC combines front-end engineering design (FEED) work with detailed engineering, procurement, construction, and installation (EPCI) work (de Ligny et al., 2021). This FPCI work covers all engineering design work (FEED and detailed engineering), procurement, construction, and installation (EPCI). In 2015, SKK Migas issued Work Guideline (PTK) 007 revision 3 as a guideline for managing the supply chain of work in upstream oil and gas business activities, and several matters were newly regulated in this PTK. One of the topics concerns integrated FPCI construction work. Previously, the PTK stipulated that the contractor for preliminary design development services could not take part in bidding for the EPCI work package. In the third revision of the PTK in 2015, however, this clause was revised into the form of an FPCI work contract, regulated with several criteria. This new approach was first applied mainly to LNG mega projects, which demand optimization of the design process and of process technology selection to deliver the most cost-effective project; it is expected to shorten the project schedule and to be applicable to other EPCI projects.
DBC is one of the project delivery strategies alongside traditional design-bid-build, multiple prime contractors, and construction management at risk (Hall et al., 2020; Hansen et al., 2021). In DBC, two contractors enter a competition to provide a design solution with a scope covering both design and construction (Kim, 2020). A stage gate is required at the end of the design phase to evaluate the commercial proposals and award the construction phase to one contractor (Sommer et al., 2015). Unlike traditional design-bid-build, where a bidding process is held in every project phase to award a contractor and the owner oversees two distinct contracts (one for design and one for construction), in DBC the bidding process occurs only at the early stage of the project to award two contractors. In DBC, the roles and responsibilities of designer and constructor sit under one roof: one contractor drives one unified flow of work from initial concept through completion. A Lump Sum Turnkey (LSTK) contract is a combination of a lump sum (LS) contract and a turnkey (TK) contract. It is a contract with a single lump sum price for all of the works, and the contractor is responsible for completing the project within the agreed fixed cost set forth in the contract. If the contractor completes the project under the fixed total cost, the contractor makes additional profit from the project. The lump sum contract is normally used in the construction industry to reduce contract administration costs. It is the most recognised contract form for simple and small projects, and is generally appropriate where the project scope and the responsibilities of both parties are already well defined and changes are unlikely (Harrison & Lock, 2017). Therefore, the owner must have sufficiently detailed and complete drawings, specifications, and construction documents at the time of the bid to allow the bidders to properly estimate the cost of labour and materials. The lump sum contract can include incentives or benefits for early completion, or can have penalties, called liquidated damages, for late completion.
Choosing the right project delivery system is a critical part of the strategic acquisition plan every owner must develop when embarking on a new project. By deliberately and pro-actively planning from the start of the project, owners may influence the desired results from everyone involved.
The business issue this study evaluates is how to propose an alternative execution strategy that improves project quality and meets company goals. With this proposed alternative, it is expected that cost overruns and delays can be reduced significantly by implementing a single-source-responsibility, single-contract project execution strategy, namely Design Build Competition, in which the project owner avoids acting as the middleperson who potentially becomes engaged in disputes between the designer and the contractor(s). The research aims to contribute further knowledge on DBC implementation in different sectors, especially in PT Pertamina.
METHOD
The research design used for this study was a cross-sectional survey in which data were collected from primary and secondary sources with a mixed method. The collected data were then analysed using the Analytic Hierarchy Process (AHP), a structured technique for organizing and analyzing complex decisions based on mathematics and psychology. AHP was developed by Thomas L. Saaty in the 1970s; Saaty partnered with Ernest Forman to develop the Expert Choice software in 1983, and AHP has been extensively studied and refined since then. It represents an accurate approach to quantifying the weights of decision criteria. Individual experts' experience is used to estimate the relative magnitudes of factors through pairwise comparisons: each respondent compares the relative importance of each pair of items using a specially designed questionnaire.
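To make the AHP arithmetic concrete, the sketch below derives priority weights from a pairwise comparison matrix via the principal eigenvector and computes the consistency ratio (CR < 0.1 is the usual acceptance threshold). The matrix values are hypothetical examples on Saaty's 1-9 scale, not the study's expert judgments, and the function name is our own.

```python
# Illustrative AHP calculation: priority weights and consistency ratio
# from a (hypothetical) pairwise comparison matrix.
import numpy as np

def ahp_weights(pairwise):
    """Return priority weights (principal eigenvector) and the consistency ratio."""
    pairwise = np.asarray(pairwise, dtype=float)
    n = pairwise.shape[0]
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)                      # principal eigenvalue index
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                         # normalize to sum to 1
    ci = (eigvals[k].real - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)    # Saaty's random index
    return weights, ci / ri

# Example: 4 criteria compared pairwise (hypothetical judgments)
matrix = [[1,   3,   5,   2],
          [1/3, 1,   3,   1/2],
          [1/5, 1/3, 1,   1/4],
          [1/2, 2,   4,   1]]
w, cr = ahp_weights(matrix)
print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```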
In this study, primary data were collected through focus group discussions (FGD) and interviews with experts who have more than 10 years of project working experience, in order to obtain a broad and deep view of the problems and solutions. To generate alternative solutions to the problem and the factors affecting it, the authors applied Value-Focused Thinking (VFT), as follows:
Figure 1. Value-Focused Thinking
The secondary data collected and utilized by the authors come from articles, journals, company regulations, and other online sources. When conducting secondary data collection, it is essential to properly cite and attribute the original sources of the data to ensure ethical research practices. Additionally, researchers should consider the relevance and limitations of the data in relation to their research objectives and critically evaluate its suitability for their study.
As a first step, the authors defined the structure of the decision hierarchy based on discussions with experienced experts who are directly involved in project execution at PT KPI.
Figure 2. Structure Decision Hierarchy
Following the definition of the hierarchy, the next step is to collect data from subject respondents regarding the relative weighting of each criterion in relation to the others and each alternative in relation to the others based on attributes. Each subject-matter expert is surveyed as part of the data collection process. The author met with them and went through each alternative's résumé before demonstrating how to fill out the survey data.
To analyse the data, the authors used Expert Choice Comparion version 6.11.003.45797 for better consistency and accuracy and to minimize calculation errors. In Expert Choice Comparion, the first step in developing and running the AHP model simulation is to create a new model. Once the interface shows that the new model has been created, the next step is to input the decision alternatives (DBB, DFC, DBC, LSTK) into the model based on the structure of the decision hierarchy.
Figure 7. Comparion Decision Alternative Key In
The next step is to add the criteria in the Objectives column provided (minimize changes to project scope, minimize change orders, minimize project duration, minimize project risk).
Figure 8. Comparion objective key in
The next step is to register the experts so that they can start filling in the questionnaire and the authors can collect the data. Expert registration is done in the following step: emails are sent to the experts so that they can fill in the questionnaire.
Figure 10. Comparion collective input interface
The next step is comparative judgment, a strategy used in decision-making to rank or compare several options or criteria according to their relative value or effectiveness. Decision-makers use pairwise comparisons to assess the relative merits or preferences of the alternatives or criteria, typically on scales or numerical values that express the relative differences between them. In Expert Choice Comparion, comparative judgment was performed by the experts on the survey page provided via the link sent to them.
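Before synthesis, the pairwise judgments of several experts are typically combined; one common convention in AHP practice is the element-wise geometric mean (aggregation of individual judgments). The sketch below illustrates that convention in Python; it is not a description of Expert Choice Comparion's internal algorithm, and the expert matrices are hypothetical.

```python
# Illustrative aggregation of several experts' pairwise judgments into one
# group matrix using the element-wise geometric mean (a common AHP practice).
import numpy as np

def aggregate_judgments(expert_matrices):
    """Element-wise geometric mean of a list of pairwise comparison matrices."""
    stack = np.array(expert_matrices, dtype=float)
    return np.exp(np.log(stack).mean(axis=0))

expert_a = [[1, 3], [1/3, 1]]      # hypothetical judgments from expert A
expert_b = [[1, 5], [1/5, 1]]      # hypothetical judgments from expert B
group = aggregate_judgments([expert_a, expert_b])
print(np.round(group, 3))          # aggregated matrix, still reciprocal
```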
RESULT AND DISCUSSION
This part shows the results of the experts' responses to the questionnaire. Expert Choice Comparion calculates the data collected from the experts, and the results can be seen in the "Synthesize" menu below. From the two figures above, the majority of experts agree that changes to the project scope play the most important role in selecting the best project execution strategy, followed by change orders, project cost, project risk, and project duration. Changes in project scope can have a significant impact on selecting the best project execution strategy because the scope defines the objectives, deliverables, and boundaries of the project. When the scope changes, it alters the project's requirements, timeline, resources, and overall goals. By considering the impact of scope changes, the project owner can evaluate different execution strategies and select the one most suitable for the revised project requirements, ensuring efficient utilization of resources, timely delivery, risk mitigation, stakeholder satisfaction, and effective cost management.
CONCLUSION
Based on the data and analysis above, it can be concluded that the main and critical factors that actively contribute to problems in the project execution strategy at PT KPI are (1) changes to the project's scope, (2) inadequate planning, which can result in change orders and delays, including poor scope definition, inaccurate cost estimation, and unrealistic project schedules, and (3) inefficient | 2023-07-12T06:22:15.185Z | 2023-06-28T00:00:00.000 | {
"year": 2023,
"sha1": "e9d3382e0ca3d49196f5c5dfdb3fde605e1655c8",
"oa_license": "CCBYSA",
"oa_url": "https://ijoms.internationaljournallabs.com/index.php/ijoms/article/download/540/788",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "d20a3a18f4e11f946926ae3bd5940143adec8db1",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
220304537 | pes2o/s2orc | v3-fos-license | Identification of Differentially Expressed lncRNAs in a CpG ODN-Activated Macrophage
A macrophage is an important component of innate immunity which can be activated by infection. A series of inflammatory cytokines are produced and released to eliminate pathogens. CpG DNA is an immune stimulator recognized by TLR9, subsequently inducing inflammatory responses in macrophages. Long noncoding RNA (lncRNA) is a novel class of noncoding RNA, whose length is more than 200 nt, but without protein-coding capacity. lncRNAs are involved in many physiological and pathological processes, including inflammatory responses. In our study, a lncRNA microarray assay was performed to identify differentially expressed lncRNAs and mRNAs in RAW264.7 cells at different time points following CpG ODN stimulation. The results revealed that expression levels of 734 lncRNAs and 734 mRNAs were altered at all time points. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) biological pathway analyses were performed to predict the functions of dysregulated genes. Coexpression networks of lncRNA-mRNA were constructed based on the correlation analysis between differentially expressed lncRNAs and 10 selected upregulated mRNAs, which have been reported to be involved in CpG DNA-induced inflammatory responses. In addition, we selected 8 dysregulated lncRNAs for further validation by quantitative real-time PCR. The present study provided a systematic perspective on the potential functions of lncRNAs in CpG ODN-induced macrophage activation.
Introduction
Innate immunity is the first line of host defense against the pathogen threats [1]. A macrophage is an important component of innate immunity and plays a crucial role in the inflammatory responses [2]. Recognition of microbial molecules including lipopeptides, lipopolysaccharides, and DNA by pattern recognition receptors of macrophages such as Toll-like receptors (TLRs) will trigger the intracellular signaling cascades. The process promotes the production of inflammatory cytokines, reactive oxygen, nitrogen species, and antimicrobial peptides for anti-infection. The phagocytosis of macrophages is also enhanced to remove the pathogen [2,3]. However, excessive inflammation in macrophages can cause host damage and even lead to chronic inflammatory diseases, such as obesity, cardiovascular disease, inflammatory bowel disease, and cancer [3]. Thus, it is required for us to better understand the regulatory mechanisms that limit the excessive inflammatory mediators in macrophages.
A great majority of the human and mouse genome is transcribed as noncoding RNAs (ncRNAs), among which microRNAs (miRNAs) are a well-known posttranscriptional regulator of gene expression [4,5]. Recently, another novel class of ncRNA, long noncoding RNAs (lncRNAs), has also been identified as important regulators of gene expression [6]. lncRNAs are arbitrarily defined as having 200 or more nucleotides to discriminate them from small noncoding RNAs (sncRNAs). On the basis of their genomic localization relative to protein-coding genes, lncRNAs are further categorized as intergenic, overlapping, intronic, and exonic. lncRNAs can control gene expression at the levels of epigenetic control, transcription, RNA processing, and translation [6,7]. lncRNAs play a significant role in many physiological and pathological processes [6,7]. In recent years, a series of lncRNAs have been identified in inflammatory responses, and their functions are also being clarified [8][9][10]. For instance, lincRNA-Tnfaip3 can act as a coregulator of NF-κB to modulate inflammatory gene transcription in mouse macrophages [11]; lincRNA-Cox2 can promote late inflammatory gene transcription in macrophages through modulating SWI/SNF-mediated chromatin remodeling [12].
Bacterial genomic DNA contains unmethylated cytosine followed by guanine (CpG ODN), which can be recognized by TLR9 in macrophages [2,3]. Studies have shown that both bacterial CpG DNA and synthetic CpG ODN can induce macrophage activation, with the release of proinflammatory cytokines, such as IL-6, IL-12, and TNF-α, the production of NO and iNOS, and macrophage polarization [13,14]. The CpG ODN-induced macrophage activation is involved with a series of inflammatory molecules and pathways. However, the changes of the lncRNA expression pattern and their roles in CpG ODN-activated macrophages have not been clarified.
In the present study, we employed a lncRNA microarray-based profiling assay to detect changes of lncRNAs at different stages of CpG ODN-induced macrophage activation. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses were performed based on the functions of the mRNAs. Coexpression networks and potential targeting relationships were constructed according to the microarray results and bioinformatics predictions. Our findings revealed the involvement of lncRNAs in the process of CpG ODN-induced macrophage activation.
2.2. RNA Extraction and q-PCR. Total RNA from the macrophage cells was extracted using TRIzol reagent (Thermo Fisher, Waltham, MA, USA), and cDNA was synthesized from 1 μg of total RNA using a reverse transcription kit according to the manufacturer's instructions (Promega, Madison, WI, USA). Quantitative real-time PCR analyses of TNF-α, IL-6, and IL-1β mRNA expression or lncRNA expression were performed in a Roche qRT-PCR detection system (Roche, Basel, Switzerland). β-Actin was used as an internal control. The mRNA or lncRNA expression level was calculated using the 2^(-ΔΔCt) method. We compared all other groups with the average ΔCt value of the control group in one PCR experiment and then used 2^(-ΔΔCt) for data analysis. All results are the average ratios of three independent experiments; the values of the same group in the different experiments were then averaged and further calculated. The primer sequences for qRT-PCR are listed in Table 1.
2.3. Measurement of Cytokine Levels by ELISA. The cell-free supernatants were harvested at the indicated times. TNF-α, IL-1β, and IL-6 levels in the supernatants were assessed by ELISA according to the manufacturer's protocol (Cusabio, Wuhan, China).
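The 2^(-ΔΔCt) calculation referred to above is a simple double normalization. The sketch below illustrates it in Python; the Ct values are placeholders, not measurements from the study.

```python
# Minimal sketch of the 2^(-ΔΔCt) relative expression calculation.

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Fold change of a target gene vs. the control group, normalized to a reference gene."""
    delta_ct_sample = ct_target - ct_reference            # normalize to the internal control
    delta_ct_control = ct_target_ctrl - ct_reference_ctrl
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Example: target Ct drops from 28 to 24 while the reference stays at 16 -> ~16-fold increase
print(relative_expression(ct_target=24.0, ct_reference=16.0,
                          ct_target_ctrl=28.0, ct_reference_ctrl=16.0))
```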
NO Quantification.
The production of NO was determined by measuring the quantity of nitrite in the culture supernatant using the Griess method, with a standard curve constructed from nitrite concentrations ranging from 5 to 40 μM.
Western Blot
The cells were lysed in 1× SDS lysis buffer containing protease inhibitors. The proteins were separated by electrophoresis on 8% SDS-PAGE gels and then transferred onto a polyvinylidene difluoride (PVDF) membrane. Subsequently, the membranes were probed with antibodies against iNOS (Cell Signaling Technology, Boston, MA, USA) and GAPDH (Millipore, Bedford, MA, USA).
2.6. lncRNA Microarray Analysis. Total RNA was extracted from RAW264.7 cells stimulated with ODN1826 (1 μM) at different time points according to the study design. RNA quantity and quality were assessed by NanoDrop, and RNA integrity was determined by capillary electrophoresis using an RNA 6000 Nano Lab-on-a-Chip kit and the Bioanalyzer 2100. The GPL22782-Agilent-074512 Mouse LncRNA Microarray V4.0 was adopted for the detection of lncRNA and mRNA expression, covering 40,825 lncRNAs and 30,680 mRNAs. The lncRNA microarray was conducted by CapitalBio Technology (Beijing, China).
Differential lncRNA and mRNA Screening and Clustering Analysis
The raw data of each array were normalized and then analyzed in GeneSpring software (v. 13.0, Agilent). Differentially expressed lncRNAs and mRNAs were screened with a p value < 0.05 and fold change > 2.0. Cluster software (v. 3.0) was employed to analyze the differentially expressed lncRNAs and mRNAs. The normalized expression level of each RNA type was further analyzed with hierarchical clustering (HCL). The results were presented using TreeView software (v. 1.5); green/black represents low expression, while red represents high expression. Difference integration analysis (Venn analysis) was also performed: the common elements between the stimulated cells were determined by Venn analysis, and commonly up- and downregulated RNAs were shown in pie charts with different colors.
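As an illustration of the screening thresholds described above (fold change > 2 and p < 0.05), the sketch below filters a toy table of transcripts into up- and downregulated sets. The pandas layout, column names, and numbers are assumptions for demonstration only; they do not reproduce the GeneSpring output.

```python
# Illustrative differential-expression screening with the thresholds stated above.
import pandas as pd

def screen_de(df, fc_col="fold_change", p_col="p_value", fc_cut=2.0, p_cut=0.05):
    """Split transcripts into up- and downregulated sets by fold change and p value."""
    sig = df[df[p_col] < p_cut]
    up = sig[sig[fc_col] > fc_cut]
    down = sig[sig[fc_col] < 1.0 / fc_cut]
    return up, down

# Placeholder table standing in for normalized microarray results
demo = pd.DataFrame({
    "transcript": ["lnc_pvt1", "Cyrano", "Meg9", "NR_015555.1"],
    "fold_change": [3.2, 0.3, 2.5, 0.4],
    "p_value": [0.01, 0.02, 0.20, 0.03],
})
up, down = screen_de(demo)
print("up:", list(up["transcript"]), "down:", list(down["transcript"]))
```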
GO and Pathway Analyses
Differentially expressed mRNAs were selected for target prediction. GO analysis and pathway analysis were used to determine the roles of these dysregulated mRNAs in biological pathways or GO terms. We uploaded all differentially expressed mRNAs at the different time points into the Database for Annotation, Visualization and Integrated Discovery (DAVID) for annotation and functional analysis, including gene set enrichment analysis and mapping gene sets to the KEGG pathway. GO terms with p value less than 0.05 were selected. The top 10 enriched GO terms associated with upregulated or downregulated mRNAs were presented. KEGG pathway analysis was also performed to determine the involvement of differentially expressed genes in different biological pathways.
2.9. lncRNA-mRNA Coexpression Network. To predict the functions of differentially expressed lncRNAs, we constructed the lncRNA-mRNA coexpression network. We chose 10 significantly upregulated genes involved in inflammatory signaling pathways to build the CNC network based on the degree of correlation. Pearson's correlation coefficient was calculated for lncRNA-mRNA pairs, and strongly correlated pairs (|r| ≥ 0.99, either positive or negative) were included in the coexpression network. A p value < 0.05 was considered statistically significant. We drew the coexpression networks using Cytoscape software (v. 3.2.1).
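The pair selection behind such a coexpression network can be sketched as below: for each candidate lncRNA-mRNA pair, compute Pearson's r across samples and keep pairs passing the |r| ≥ 0.99 and p < 0.05 thresholds stated above. The expression values here are small placeholder vectors, not microarray data, and only a handful of transcript names from the paper are reused for illustration.

```python
# Illustrative construction of lncRNA-mRNA coexpression pairs with the thresholds above.
from scipy import stats

def coexpression_pairs(lnc_expr, mrna_expr, r_cut=0.99, p_cut=0.05):
    """Return (lncRNA, mRNA, r) tuples passing the correlation thresholds."""
    pairs = []
    for lnc_name, lnc_vals in lnc_expr.items():
        for m_name, m_vals in mrna_expr.items():
            r, p = stats.pearsonr(lnc_vals, m_vals)
            if abs(r) >= r_cut and p < p_cut:
                pairs.append((lnc_name, m_name, round(r, 3)))
    return pairs

# Placeholder expression values across four samples
lnc = {"lincRNA-Cox2": [1.0, 2.1, 4.0, 7.9], "Cyrano": [5.0, 3.9, 2.1, 1.0]}
mrna = {"Nfkb2": [0.9, 2.0, 4.1, 8.0], "Il6": [1.1, 2.3, 3.8, 8.2]}
print(coexpression_pairs(lnc, mrna))
```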
2.10. Statistics. All statistical analyses were conducted using SPSS 16.0 software (Chicago, IL, USA) and GraphPad Prism 5.0. Results are expressed as mean ± SD from at least three experiments. Student's t-test was used to determine the significance of differences between groups. p < 0.05 was considered significant.
CpG ODN Induced Inflammatory Responses in
Macrophages. CpG DNA can be recognized by TLR9 in immune cells, consequently inducing the release of proinflammatory cytokines, as well as the production of NO and iNOS. In our study, we firstly observed inflammatory responses in CpG ODN-activated macrophages. The mouse macrophage cell line RAW264.7 was stimulated with an active ODN type, CpG ODN1826. Then, at the different time points of 4 h, 8 h, and 16 h after the treatment of ODN1826, we observed the morphological changes of the cells ( Figure 1(a)). In addition, we detected the mRNA and protein levels of three critical proinflammatory cytokines, TNF-α, IL-1β, and IL-6, by qRT-PCR and ELISA, respectively. As shown in Figures 1(b) and 1(c), after the treatment of ODN1826, TNF-α, IL-1β, and IL-6 expression increased. The expression of lncRNAs and mRNAs was detected by GPL22782-Agilent-074512 Mouse LncRNA Microarray v4.0, which includes 40,825 lncRNAs and 30,680 mRNAs. The gene microarray analysis for CpG ODN-activated macrophages was consistent with TLR9 signaling activation, which showed that most inflammatory related-gene expression was upregulated (Figure 1(d)). Meanwhile, we observed the effects of ODN1826 on M1 inflammatory responses, as manifested by the significantly increased NO and iNOS. RAW264.7 macrophages were treated with ODN1826. The production of NO and iNOS in the macrophages was determined. In RAW264.7 macrophages, ODN1826 stimulation could promote the production of NO and iNOS (Figures 1(e) and 1(f)). These data indicated that ODN1826 could induce a remarkably stronger inflammatory response. A cluster was generated and analyzed with hierarchical clustering (HCL) for the 734 differentially upregulated lncRNAs and 1,067 downregulated lncRNAs (Figure 2(d)).
Changed Expression Profiles of lncRNAs and mRNAs in CpG ODN-Activated Macrophages
In the same way, a cluster was generated and analyzed with HCL for the 734 upregulated mRNAs and 632 downregulated mRNAs (Figure 2(h)). The information on the data was submitted to the Gene Expression Omnibus, and the accession number is GSE120417.
Validation of the Expression of Significant Transcripts by qRT-PCR
We selected 8 lncRNAs to verify the microarray results by qRT-PCR assays. The results showed that upon stimulation with CpG ODN, the expression of lnc pvt1, lincRNA-Cox2, Meg9, and Braveheart in CpG DNA-stimulated macrophages was upregulated, whereas Cyrano, NR_015555.1, NR_002854.2, and BACE1AS were downregulated (Figure 3). These results are consistent with the microarray assay, verifying the accuracy of the microarray data and providing evidence that these lncRNAs could be involved in CpG ODN-induced macrophage activation.
Delineation of Gene Ontology (GO) and KEGG Pathway Analyses
Next, all differentially expressed mRNAs as described in Figure 2 were further analyzed by DAVID Bioinformatics Resources 6.7. The GO enrichment analysis was conducted mainly on three domains, namely biological process (BP), cellular component (CC), and molecular function (MF), for upregulated and downregulated mRNAs, respectively. Interestingly, for upregulated mRNAs, the most enriched and meaningful terms belonged to the BP category, most of which were related to immunity, while GO terms associated with downregulated mRNAs were binding, protein binding, mitotic cell cycle, etc. (Figure 4(a)). Moreover, KEGG pathway enrichment analysis was also performed. Our data showed 10 pathways associated with upregulated mRNAs and downregulated mRNAs, respectively. Similarly, the top pathways in upregulated protein-coding genes involved the TNF signaling pathway, NOD-like receptor signaling pathway, and NF-kappa B signaling pathway (Figure 4(b)). These results revealed that these pathways might be implicated in CpG ODN-induced macrophage activation.
Construction of the lncRNA-mRNA Coexpression Network
Up to now, the roles of most lncRNAs have not been annotated, so the functional prediction of lncRNAs is partially dependent on the function of coexpressed mRNAs. Herein, we chose 10 significantly upregulated mRNAs to build the CNC network (Figure 5). These 10 mRNAs, including Ctsk, Nfkb2, IL-1β, and IL-6, are involved in inflammatory responses and play important roles in the regulation of inflammatory signals. The network is based on Pearson's correlation coefficient (the absolute value of PCC ≥ 0.99, p value < 0.01, and FDR < 0.01). From the network, we observed that these important molecules were intimately related to a number of lncRNAs. For example, upregulated NR_033616.1 was positively associated with Ctsk, and lincRNA-Cox2 was positively related to Nfkb2.
Discussion
Bacterial genomic DNA contains unmethylated CpG DNA, which is a well-known immunostimulator and triggers innate immunity against infection as well as an adaptive immune response [2,3]. Furthermore, a short synthetic oligonucleotide containing a CpG motif (CpG ODN) can also act as an agonist to activate immune signals. For instance, both bacterial DNA and synthetic CpG ODN can induce the release of inflammatory cytokines, including TNF-α, IL-1β, and IL-6, as well as the production of NO and iNOS [13,14]. In the present study, we examined the details of CpG ODN-induced inflammatory responses in macrophages. The results showed that CpG ODN1826 stimulation caused morphological changes in macrophage cells and altered inflammatory gene expression, subsequently promoting the release of inflammatory cytokines and the production of NO and iNOS. Unlike LPS, which is recognized by TLR4, CpG DNA is recognized by TLR9 in immune cells, including macrophages. Upon CpG DNA stimulation, TLR9 recruits MyD88, which activates p38 and c-Jun, consequently leading to the activation of transcription factors including AP-1 and NF-κB and the production of inflammatory mediators [13,14]. The CpG ODN-induced inflammatory response is a fairly complex process involving many molecules, such as inflammatory mediators, transcription factors, and regulatory factors. Long noncoding RNAs (lncRNAs) are a subgroup of noncoding RNAs (ncRNAs) longer than 200 nt but without protein-coding potential [6]. Recently, lncRNAs have been reported to be implicated in inflammatory responses [8,9]. lncRNA FIRRE regulates inflammatory gene expression through interacting with hnRNPU in macrophages [15]. A natural antisense transcript, AS-IL1α, controls inducible transcription of the proinflammatory cytokine IL-1α [16]. Mao et al. identified differentially expressed lncRNAs in the process of TLR4 signaling activation in mouse macrophages [17], while another study by Huang et al. reported differentially expressed lncRNAs in polarized macrophages stimulated with IFN-γ+LPS or IL-4 [18]. In addition, Dou et al. identified changed expression profiles of lncRNAs, mRNAs, circRNAs, and miRNAs during osteoclastogenesis [19]. In our study, the expression profiles of lncRNAs and mRNAs in mouse macrophage cells were detected at different time points following CpG ODN1826 stimulation. From the results, we found that during CpG ODN-induced macrophage activation, thousands of lncRNAs were differentially expressed compared with the control group. It is very interesting to note that the expression pattern of lncRNAs was consistent with that of mRNAs.
The annotation results of the most significant Gene Ontology showed that the top 10 increased biological processes belong to or are associated with immunity. Meanwhile, KEGG pathway analysis for the differentially expressed mRNAs revealed that top 10 pathways associated with upregulated mRNAs were the TNF signaling pathway, NOD-like receptor signaling pathway, NF-kappa B signaling pathway, influenza A, measles, herpes simplex infection, C-type lectin receptor signaling pathway, osteoclast differentiation, Tolllike receptor signaling pathway, and IL-17 signaling pathway, most of which are involved in inflammatory responses.
From the lncRNA-mRNA coexpression network, we found that functional molecules and inflammatory cytokines involved in the CpG DNA-induced inflammatory response, including Ctsk, Nfkb2, IL-1β, and IL-6, were coexpressed with multiple lncRNAs, which formed a complex network. The CNC analysis indicated that the changes in lncRNA expression might be implicated in CpG ODN-induced macrophage activation by regulating mRNAs. Interestingly, most of the lncRNAs in the coexpression network have not yet been annotated in our study, and it will be well worth performing further studies to explore the underlying mechanisms of these lncRNAs.
Conclusion
Innate immunity is the first line of host defense. Despite its benefits, innate immunity can be a double-edged sword, as excessive inflammation causes host damage, so it is important to better understand the regulatory mechanisms that control inflammatory responses. To date, many molecules and signaling pathways involved in macrophage polarization have been reported [20][21][22][23][24]. Recently, a number of studies in the immune system have provided accumulating evidence that lncRNAs also play an important role in regulating inflammatory responses. In the present study, we identified a profile of dysregulated lncRNAs that might be involved in CpG ODN-induced macrophage activation. Our data provide a perspective for further functional research on lncRNAs in CpG ODN-induced inflammatory responses and help to clarify the mechanisms of inflammation.
Data Availability
All the data except microarray data used to support the findings of this study are included within the article. The microarray data had been deposited in Gene Expression Omnibus (GSE120417).
Conflicts of Interest
The authors declare no conflict of interest. | 2020-06-18T09:03:26.095Z | 2020-06-13T00:00:00.000 | {
"year": 2020,
"sha1": "566befdc7fd3875dd6da2dd5208169d794ddc546",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2020/1407654",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "146e25965ba5c083e753961ed6e02111c2b502c2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
264896886 | pes2o/s2orc | v3-fos-license | Cullin-3 intervenes in muscle atrophy in the elderly by mediating the degradation of nAchRs ubiquitination
involves in the loss of muscle
Introduction
Sarcopenia is currently the main cause of functional decline in the elderly. Its main manifestation is muscle atrophy, which can significantly increase the risk of falls, fractures, and hospitalization in older adults, diminishing quality of life and adding mortality risk (Yeung et al., 2019). It is conservatively estimated that >50 million people are affected by sarcopenia today, and >200 million people are expected to be affected over the next 40 years (Dhillon and Hasni, 2017). However, effective drugs for the treatment of sarcopenia are still lacking (Liu and Yue, 2022). Therefore, exploring the pathogenesis of sarcopenia and finding effective intervention targets in its progression remain active topics of research.
The pathogenesis of sarcopenia is multifactorial, with various systemic factors, such as the endocrine system, central nervous system, and immune system, related to its development (Wiedmer et al., 2021). However, compared with these systemic factors, the local neuromuscular environment has been confirmed as a significant factor in the pathogenesis of sarcopenia. Substantial research has revealed that the stability of NMJ structure and function is very important for maintaining muscle mass (Tintignac et al., 2015). Baehr et al. found no significant difference in muscle protein synthesis between 9-month-old and 29-month-old rats during recovery from disuse atrophy of skeletal muscle, but the difference in recovery of muscle function was mainly related to neuromuscular transmission (Baehr et al., 2016). NMJ instability in aged muscle led to muscle fiber remodeling, affecting the recovery of muscle function (Baehr et al., 2016). It is well known that skeletal muscle requires innervation by motor neurons to receive the trophic support needed to maintain its function; when innervation deteriorates, insufficient nerve input may lead to muscle weakness or even atrophy (Choi et al., 2018). Therefore, numerous studies have suggested that denervation and NMJ degradation are key determinants of sarcopenia (Larsson et al., 2019; Gonzalez-Freire et al., 2014; Curcio et al., 2016). The NMJ is a highly specialized synapse that mediates excitation-contraction coupling through the binding of acetylcholine released from the presynaptic membrane to nAChRs, thereby enabling voluntary movement and maintaining muscle mass and strength (Punga and Ruegg, 2012). With increasing age, NMJ function and structure may deteriorate, resulting in local loss of innervation of muscle fibers and disrupting the precise overlap between the nerve terminals of the NMJ and the nAChR clusters on muscle cells. This destabilizes the neuromuscular junction, eventually leading to muscle atrophy (Punga and Ruegg, 2012).
In previous work, we carried out transcriptome sequencing and bioinformatics analysis of the extensor digitorum longus (fast muscle fibers) and soleus (slow muscle fibers) of 29-month-old and 2-month-old mice. The results showed significant differences in Cul3 gene expression between fast and slow muscle fibers of aged and young mice (Xu et al., 2022). Further analysis using the STRING database showed a close interaction between Cul3 and nAChRs (Xu et al., 2022). Cul3 is a pivotal member of the ubiquitin ligase E3 family and plays a vital role in linking ubiquitin to substrate proteins and promoting their ubiquitin-dependent degradation (Petroski and Deshaies, 2005). The developmental maturation and stability of the NMJ are closely related to nicotinic acetylcholine receptor (nAChR) aggregation in the postsynaptic membrane, and the decrease of nAChR density caused by various factors is the main cause of NMJ degeneration (Oda, 1984). Therefore, we speculate that Cul3 may mediate the ubiquitin-dependent degradation of nAChRs at the NMJ, resulting in a decreased level of nAChRs on the postsynaptic membrane, leading to NMJ degeneration and accelerating the atrophy of fast muscle fibers and the progression of sarcopenia in the elderly. In this study, we investigated Cul3 expression in aged muscles and its potential connection with nAChRs. These findings offer preliminary support for further clarifying the mechanism linking Cul3 and muscle atrophy.
Establishment of animal models
C57BL/6 mice aged 6 months and 18 months were purchased from the Cancer Institute (Cavins Laboratories, China). Based on age and on motor ability tests such as grip, stride, and stick-turn tests, we screened two well-characterized groups of mice as the model group (18-month-old aged mice) and the control group (6-month-old young mice), respectively. Mice were euthanized by cervical dislocation, and the extensor digitorum longus (fast muscle fibers, EDL) was isolated for subsequent experiments.
Isolation and culture of primary mouse muscle cells
Primary mouse muscle cells were isolated and cultured according to a previous study (Gnocchi et al., 2009). Mice were euthanized by cervical dislocation and disinfected with 70% ethanol. The EDL was separated and fat tissue was removed from the muscle surface as much as possible. The hind limb muscles were then separated from the skeleton under a dissecting microscope. The separated muscle was placed in cell culture dishes and digested with a collagenase/dispase/calcium chloride solution (500 μL). The muscle was finely chopped with sterile scissors and blades, transferred to a new sterile tube, and digested with a collagenase/dispase/calcium chloride solution (1 mL). The tube was placed on a shaker at 37 °C for 1.0-1.5 h until large muscle pieces were no longer visible to the naked eye. The digested tissue was filtered through cell strainers and centrifuged at 300 g for min. The supernatant was discarded, and 5 mL of culture medium (F10 medium supplemented with 10% fetal bovine serum, 2.5 μg/L bFGF, 100 U/mL penicillin and streptomycin, and 5% chicken embryo extract) was added to the extracted cells, which were resuspended and cultured in a 37 °C, 5% CO2 incubator. Fibroblasts were removed based on the difference in adhesion rate between cell types: every half hour, the floating cells were transferred to a new cell culture dish, and this process was repeated five times until non-muscle cells such as fibroblasts were no longer visible under the microscope. Primary muscle cells (PMC) were then divided into a control group (PMC from young mice) and a model group (PMC from aged mice) according to the pre-set grouping.
Hematoxylin and eosin staining of muscle tissue
Hind limb muscle tissues from young and aged mice were collected, fixed in 10% formalin overnight, routinely dehydrated, embedded, sectioned, stained with hematoxylin and eosin, cleared, and mounted; the sections were then observed under an optical microscope (Fig. S1).
RNA isolation and expression analysis
RNA isolation and expression analysis were performed according to the kit instructions. Total RNA was extracted from muscle tissue or cells with TRIZOL reagent and reverse transcribed into cDNA for quantitative analysis of mRNA expression, then amplified with primers using the TB GreenTM Premix Ex TaqTM II kit and a thermal cycler (Applied Biosystems, 7500 Fast Real-Time PCR System). The qRT-PCR conditions were as follows: 45 °C min, 95 °C 5 min, then 95 °C 10 s for 40 cycles, and finally 60 °C 1 min. With β-actin as the internal reference, the relative expression of target mRNAs was calculated from Ct values using the 2^(-ΔΔCt) method. The reagents and kits used in the experiment were obtained from Beijing Baori Biological Technology Co., Ltd., and the primers were designed and obtained from Shanghai Biological Engineering Technology & Services Co., Ltd. The primer sequences were as follows:
Cul3: Forward GACAGCTCACACTCCAGCATCAC; Reverse ACCGCCAACACCAACCTCAGA
GAPDH: Forward AAGGTCGGTGTGAACGGATT; Reverse TGAGTGGAGTCATACTGGAACAT
nAChR: Forward AAGGTCGGTGTGAACGGATT; Reverse TGAGTGGAGTCATACTGGAACAT
Protein extraction, electrophoresis, and western blotting
The muscle tissue was crushed in liquid nitrogen, added to lysis buffer (60 mM Tris pH 6.8, 12% glycerol, and 1% SDS) with stainless steel beads, and homogenized with a bead homogenizer (BBX24B Bullet Blue blender, Next Advance). After centrifugation, the supernatant was collected (for cell samples, the supernatant was obtained by centrifugation after cell lysis). The protein concentration was measured with the Bio-Rad DC Protein Assay Kit (Hercules, CA, USA). For each group, 30-50 μg of protein was separated by 12% SDS-PAGE and transferred onto a PVDF membrane. The PVDF membrane was blocked in 5% skimmed milk for 2 h and then immunoblotted with specific antibodies. The membrane was incubated overnight at 4 °C, washed 3 times with PBS the next day, and incubated with secondary antibodies at 37 °C for 1 h. After another 3 washes with PBS, the Western blot bands were visualized by ECL. The grayscale values of the Western blots were calculated with ImageJ software.
Immunofluorescence staining for NMJ and nAChRs in muscle tissue
The EDL of the mouse was collected, embedded in tissue cryopreservation medium (Triangle Biomedical Sciences, Inc.), and frozen with liquid nitrogen. Tissue blocks were stored at -80 °C until sectioning. Sections were fixed in 4% formaldehyde solution for 15 min at room temperature, permeabilized with 1% Triton X-100 for 10 min, blocked with 5% goat serum at 37 °C for 1 h, and then incubated with α-bungarotoxin@Alexa Fluor 594 rabbit monoclonal antibody (Molecular Probes, dilution 1:500) and Cul3 rabbit monoclonal antibody (Beijing Bioss Biotechnology Co., Ltd., dilution 1:200), respectively, at 4 °C overnight. After three rinses with PBS, the sections were incubated with fluorescent FITC-Cul3 goat anti-rabbit antibodies (Beijing Bioss Biotechnology Co., Ltd., dilution 1:1000) for 1 h at 37 °C. After three washes with PBS, the samples were stained with DAPI in the dark for 5 min for nuclear labeling. Samples were observed using a laser confocal microscope (Olympus FV3000). nAChRs were stained red, Cul3 green, and DAPI (nuclei) blue.
Validation of protein-protein interactions by co-immunoprecipitation
To verify the protein interaction between Cul3 and nAChR, the experiment can be summarized as follows. Skeletal muscle lysate was prepared using RIPA lysis buffer, and the immunoprecipitation protocol was slightly modified from the reference (Yeung et al., 2019). In short, lysis was carried out in radioimmunoprecipitation assay buffer in phosphate-buffered saline (PBS) [0.5% Nonidet P-40, 50 mM HEPES pH 7.5, 150 mM NaCl, 1 mM EDTA] supplemented with complete mini protease inhibitor cocktail (Roche) for 15 min, and lysates were centrifuged at 4 °C for 15 min; 20 μL of the supernatant was set aside as the input sample for immunoprecipitation. Protein A agarose beads and Cul3 antibodies were incubated with the lysate on a shaker at 4 °C for 2-4 h, and the beads were centrifuged at 14,000 g for 15 min to remove non-specifically bound proteins. The supernatant was discarded, the residue was collected with a pipette, 15 μL of 2× loading buffer was added and the sample boiled for 5 min to give the Co-IP sample, and the remaining precipitate was boiled again with 10 μL of 2× loading buffer as the IP sample. Subsequently, Western blotting with anti-Cul3 and anti-nAChR antibodies was performed to assess immunoprecipitation efficiency.
Protein correlation analysis
The association between the expression of the two proteins was analyzed by scatter plot and normal distribution.
Statistical analysis
The data are presented as mean ± standard deviation (mean ± SD). One-way analysis of variance was used to compare whether differences between the two groups were statistically significant, with P ≤ 0.05 (*P ≤ 0.05, **P ≤ 0.01) indicating statistical significance. GraphPad Prism 7.0 software was used for data analysis.
Significant upregulation of Cul3 expression and downregulation of nAChRs expression in fast-twitch fibers and primary muscle cells in aged mice
To determine whether Cul3 and nAChR expression levels differed between young (6 months) and aged (18 months) mice, we performed western blotting and reverse transcription polymerase chain reaction analyses on fast muscle fibers and PMC, respectively. The results demonstrated that Cul3 gene and protein levels were significantly elevated in both the fast muscle fibers and the PMC of aged mice compared with young mice (Fig. 1A, B, p < 0.01), whereas nAChR gene and protein levels were significantly decreased (Fig. 1C, D, p < 0.01). The association between nAChR and Cul3 expression was analyzed using scatter plots and the normal distribution, and the results showed a linear negative correlation between the two proteins (Fig. 1E).
Co-localization of Cul3 and NMJ exists in fast muscle fibers of aged mice, and the expression trends of both are opposite
Immunofluorescence staining of nAChRs in the EDL showed that the nicotinic acetylcholine receptor clusters in the fast muscle fibers of aged mice were less regular in morphology (Fig. 2, arrows), with blurred edges and significantly reduced numbers compared to young mice. In contrast, nicotinic acetylcholine receptor clusters in the EDL of young mice were evenly distributed and well organized. Immunofluorescence staining of Cul3 in the EDL showed that the Cul3 signal was markedly elevated and widely distributed in the EDL of aged mice, whereas its expression was lower and sporadically distributed in young mice. The merged staining of nAChRs and Cul3 showed overlapping signals (Fig. 2), suggesting co-localization of Cul3 and nAChRs in the EDL of aged mice. In addition, the expression level of Cul3 specifically increased at the NMJ sites of the EDL in aged mice, while, conversely, the expression level of nAChRs decreased.
Ubiquitin degradation of nAChRs protein occurred in fast muscle fibers and native muscle cells of aged mice
To verify whether the decreased nAChR protein level in the EDL of aged mice was associated with ubiquitination, we performed ubiquitination assays on nAChR protein in mouse fast muscle fibers and PMC. The results showed that, compared with the control group, the ubiquitination level of nAChR protein in the EDL and primary muscle cells of the aged group was markedly increased (Fig. 3A, p < 0.01), while nAChR protein expression was significantly reduced (Fig. 3B, p < 0.01). This confirmed that the nAChR protein underwent ubiquitination-dependent degradation.
Existence of interactions between Cul3 and nAChRs in fast muscle fibers of aged mice
We performed co-immunoprecipitation assays using the EDL of aged mice. The results showed that Cul3 and nAChRs were both expressed in the EDL (upper and lower bands on the left side of Fig. 4) and that they bound to and interacted with each other (lower band on the right of Fig. 4).
Discussion
Muscle atrophy is a direct manifestation of sarcopenia and is mainly caused by excessive protein degradation. Muscle mass is maintained by the balance between protein synthesis and protein degradation pathways, and this balance is considered to shift toward protein degradation during atrophy (Jagoe and Goldberg, 2001). Protein degradation systems have been studied extensively. Several protein degradation systems exist in muscle cells, including the autophagy-lysosome system, the ubiquitin-proteasome system (UPS), and the calpain system (Sandri, 2013; Sandri et al., 2013; Sorimachi and Ono, 2012). These systems play a vital role in removing damaged proteins and coordinating protein turnover. Among them, the UPS is the main degradation system, responsible for 80% of muscle protein degradation (Blondelle et al., 2019a). This system consists of three major types of enzymes: ubiquitin-activating enzymes (E1), ubiquitin-conjugating enzymes (E2), and ubiquitin-protein ligases (E3) (Bonaldo and Sandri, 2013). Through a cascade of reactions involving these enzymes, proteins are ubiquitinated, labeled, and transferred to the proteasome for degradation (Bonaldo and Sandri, 2013). It is well known that the UPS plays an important role in protein degradation, maintenance of protein homeostasis, and other cellular functions (Bachiller et al., 2020). Studies have shown that the ATP-dependent ubiquitin-proteasome pathway is the main cause of increased proteolysis in various types of muscle atrophy (Jagoe and Goldberg, 2001).
In this study, our data showed that Cul3 expression was markedly upregulated in the fast muscle fibers and primary muscle cells of aged mice, which may be related to muscle atrophy in aged mice. Previous studies based on gene expression and structural analysis have confirmed the association of Cul3 with muscle atrophy (Shafiee et al., 2018). Cul3 is a core member of the ubiquitin ligase E3 family and, as an important part of the UPS, is involved in regulating various cellular functions, including the cell cycle, signal transduction, and protein transport (Dong et al., 2020; Chen et al., 2015). Recent research has revealed that Cul3 is highly enriched in muscle tissue and is essential for myoblast differentiation and maturation (Blondelle et al., 2017). In addition, there is growing evidence that Cul3 and its substrate adaptors play a critical role in muscle diseases. Some studies have suggested that mutations in the Cul3 substrate adaptor gene Kelch-like 9 (Klhl9) may lead to early-onset autosomal dominant distal myopathy (Cirak et al., 2010), and mutations in Klhl41 and Klhl40 may cause nemaline myopathy in mice and humans (Gupta et al., 2013; Garg et al., 2014).
To further explore the role played by Cul3 in muscle wasting in aged mice, we performed immunofluorescence staining on the EDL of mice. The results demonstrated that Cul3 expression showed a specific increase at the NMJ of the EDL in aged mice, in contrast to a decreasing trend in nAChRs expression. It has been demonstrated that UPS-mediated impairment of the dynamic balance of proteins at neuromuscular synapses triggers muscle diseases (Bachiller et al., 2020). For example, overexpression of the E3 ubiquitin ligase PDZRN3 in skeletal muscle results in defective NMJ growth and maturation (Lu et al., 2007). In nemaline myopathy, deficiency of the E3 ubiquitin ligase Cullin-3 leads to the aggregation of the non-muscle α-actinin ACTN1 in muscle fibers, a consequence of inefficient UPS-mediated degradation that affects normal NMJ development (Blondelle et al., 2019b). The nAChR is a pentameric ion channel composed of α and β subunits that accumulates mainly in the folds of the postsynaptic membrane at the NMJ. As a key receptor at the NMJ, the aggregation of nAChRs is critical for maintaining the stability and functional maturation of the NMJ (Zelada et al., 2021; Garcia et al., 2019). However, with age, nAChRs damage occurs, leading to NMJ degeneration. Although the exact mechanism of nAChRs damage is still unclear, studies have shown that aging may cause abnormal expression of the Agrin-LRP4 (low-density lipoprotein receptor-related protein 4)-MuSK (muscle-specific kinase) signaling pathway, which interferes with nAChRs localization and aggregation, leading to NMJ degeneration (Ibebunjo et al., 2013). PKA may affect NMJ maturation by delaying nAChRs cluster aggregation through phosphorylation (Garcia et al., 2019). Morphologically, AChE staining has shown that the postsynaptic membrane region of the NMJ becomes enlarged and fragmented, with degenerated synaptic folds, dispersed nAChRs distribution and mismatch with the active zone of the presynaptic membrane (Molotsky et al., 2022). In combination with our experimental results, the number of acetylcholine receptor clusters at the NMJ of the EDL of aged mice was reduced, accompanied by a significant increase in the expression of Cul3. We conclude that Cul3 may mediate the ubiquitin-mediated degradation of nAChRs protein at the NMJ of aged mouse skeletal muscle, resulting in degeneration of the NMJ and thus participating in the aging process of muscle tissue.
Previous studies have revealed that Cul3 can inhibit nAChRs aggregation at the postsynaptic membrane of the NMJ by affecting remodeling of the tubulin cytoskeleton in skeletal muscle cells (Blondelle et al., 2017), and that Cul3 deficiency can alter normal NMJ development (Blondelle et al., 2019b). However, few data are available on Cul3-mediated ubiquitination and degradation of nAChRs and the resulting destabilization of the NMJ. In this study, our subsequent experiments examined the ubiquitination of nAChRs protein in mouse fast muscle fibers and PMC, and the results demonstrated that, compared with the young group, the ubiquitination level of nAChRs was measurably increased in the old group. Meanwhile, in immunoprecipitation experiments, we found an interaction between Cul3 and nAChR in fast muscle fibers of aged mice. All of these results further support our conclusion that Cul3 may mediate the ubiquitin-dependent degradation of nAChRs proteins at the NMJ in aged mouse skeletal muscle, thereby leading to NMJ instability and accelerated atrophy of fast muscle fibers.
In conclusion, although we could not completely exclude the involvement of lysosomal degradation mechanisms, this study provides a coherent chain of evidence for a potential mechanism by which Cul3-mediated ubiquitin-dependent degradation of nAChRs protein at NMJ sites is involved in muscle atrophy. We consider Cul3 to be pivotal in the aging process of muscle tissue: it is an essential factor in maintaining the stability of the NMJ and is expected to be one of the intervention targets for age-related muscle atrophy. However, it is worth noting that this study used cell and animal models; while this provides valuable information about the role of Cul3 in muscle atrophy, whether these findings apply to humans needs further investigation. Future studies could use human muscle tissue samples to verify the role of Cul3 in human sarcopenia.
Author statement
In this study, our aim was to investigate the role and underlying mechanisms of Cullin-3 (Cul3) in age-related muscle atrophy, focusing on its impact on neuromuscular junction (NMJ) stability. Sarcopenia, characterized by the age-associated loss of muscle mass, leads to progressive muscle weakness and deterioration in older adults, greatly affecting their quality of life. To understand the molecular basis of muscle atrophy in aging, we conducted transcriptome sequencing and bioinformatics analyses of aged muscle. Based on these findings, we utilized an 18-month-old aged mouse model and a 6-month-old young mouse model to study the expression and function of Cul3 in age-related muscle atrophy.
Our results, obtained through reverse transcription polymerase chain reaction (RT-PCR) and immunoblotting analysis, revealed an increase in the expression of Cul3 in aged muscle tissue. Interestingly, we observed a significant decrease in the expression of postsynaptic membrane nicotinic acetylcholine receptors (nAChRs), indicating a negative correlation between Cul3 and nAChRs. Furthermore, immunofluorescence analysis demonstrated high expression of Cul3 at the NMJ of aged muscle. To explore the relationship between Cul3 and nAChRs, we assessed the ubiquitination level of nAChRs in aged muscle. Our results indicated a significant increase in ubiquitination, suggesting the involvement of Cul3 in the ubiquitin-mediated degradation of nAChRs protein at the NMJ in aged mice. This degenerative change in the NMJ may contribute to the accelerated atrophy of fast-twitch muscle fibers in aged muscle.
Overall, our findings suggest that Cul3 plays a crucial role in age-related muscle atrophy by mediating the degradation of nAChRs at the NMJ. Given its importance in maintaining NMJ stability, Cul3 emerges as a potential intervention target for sarcopenia. These findings contribute to our understanding of the molecular mechanisms underlying age-related muscle atrophy and provide insights into potential therapeutic strategies for combating sarcopenia.
Fig. 1. (A): Gene expression levels of Cul3 in EDL and PMC models of mice, measured by qPCR. (B): Cul3 protein expression levels in EDL and PMC models of mice, measured by Western blotting. (C): Gene expression levels of nAChRs in EDL and PMC models of mice, measured by qPCR. (D): Protein expression levels of nAChRs in EDL and PMC of mice, measured by Western blotting. (E): Simulated correlation curve of the protein expression of Cul3 and nAChRs. (Control: young mice; Model: old mice; EDL: extensor digitorum longus muscle; PMC: primary muscle cells.)
Fig. 4. Co-immunoprecipitation and Western blotting of Cul3 and nAChR proteins in the extensor digitorum longus muscle of old mice.
"year": 2023,
"sha1": "60ccfa5ee1b8ec7476dc434cdbc331fcdba1b4c9",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1016/j.exger.2023.112318",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f770498d200225d36c04f4894cd4ae51b15d2729",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Provision of Quality Education in Public Early Childhood Education Centres in West Pokot County, Kenya
There is global emphasis on the importance of providing quality early childhood education and care to all children. The provision of quality early childhood education in the arid and semi-arid regions of Kenya has received less attention, even though it contributes to the literacy status of the whole population. This paper examines the provision of quality education in public early childhood education centres, also known as pre-primary schools, in West Pokot County, Kenya. The study methodology incorporated qualitative and quantitative methods of research, took pragmatism as its philosophical paradigm and used a descriptive survey research design. The target population comprised 365 headteachers and 682 teachers. The sample size was obtained by taking 10% of the headteachers and 30% of the ECDE teachers. The research instruments were questionnaires and interview guides. The data collected were analysed using qualitative and quantitative methods. Quantitative data were analysed using descriptive statistics such as frequencies, percentages, means and standard deviations, with the Statistical Package for Social Sciences (Version 20.0) used for coding, entry and analysis. Qualitative data were analysed using the content analysis method. The study found that the provision of quality education was at an average level (M = 2.87, SD = 1.13) in public pre-primary schools in West Pokot County. This means that about half of the pupils in public pre-primary schools in the county are not receiving quality education, which affects their development in school. The paper therefore recommends that adequate measures be put in place by the County Government of West Pokot to ensure that public ECDE centres provide quality education, through building more ECDE classrooms, employing teachers on permanent and pensionable terms, providing adequate instructional materials and running meals programmes for remote schools to increase learner retention.
Early Childhood Development Education (ECDE) is the education given to young children (aged 4 to 6 years) before the age of joining primary education at 6 years (Obiweluozor, 2015). At the ECDE stage, children learn to share and cooperate with others (Chepkonga, 2017). They are taught how to understand and manage their emotions. They are also taught reading and writing tasks and are engaged in play activities for their psychomotor development, among other activities stated in the pre-primary curriculum (Andiema, 2020). Statistical research has shown that children who have experienced early childhood education are more likely to excel, remain in primary school and achieve good results than children who have not (UNESCO, 2013). This paper looks at the provision of quality education to ECDE children to ensure that they grow physically, emotionally and cognitively. Sattar (2013) viewed quality as the appropriateness of resources available to education. Quality is also understood as the baseline standard in education that can be measured on a scale of reference; it is an expression of the standard by which set standards in ECDE education can be achieved. Quality refers to the degree of excellence as measured against agreed-upon standards (Mwende, 2014). According to Kenya Sessional Paper No. 14 of 2012, the focus of quality in education is found in the learners admitted, the learning adopted and the academic staff in an institution (RoK, 2012). Quality education therefore refers to the degree of excellence in matters of learning and instruction, reflected through the academic achievement of learners in schools. In many countries in the Sub-Saharan region, although ECDE is not part of the formal education system, governments appreciate the value of quality ECDE and the sector is included in official policy documents (UNESCO, 2013). An examination of official policy documents from many countries reveals that goals and objectives are clearly stated and are generally similar: they define the goals and objectives of ECDE centres as offering non-formal education that aims not only at providing all-round development of children but also at preparing them for formal education (Mutuma, 2015). Research by Sifuna and Sawamura (2010) observed that despite efforts to promote ECDE in Sub-Saharan countries, access continues to be a significant challenge; enrolment rates in ECDEs in the region are often below 10 percent, mainly due to limited facilities and the effect of poverty.
The Kenyan Constitution holds that every child has a right to free and compulsory basic education and to access to affordable tertiary education, training and skills development (RoK, 2010). Basic education covers two years of ECDE (Pre-Primary I to II), 8 years of primary education, 4 years of secondary education and 4 years of a basic university degree (RoK, 2012). ECDE thus forms the foundation of education in Kenya and is the focus of this study. The existing system of providing early childhood education had been community-based for many years before 2013, managed by communities through their committees (Mwamba, 2013); before 2013, 75% of the ECDEs in Kenya were community-owned. However, reforms in how ECDE is managed came into effect in 2013 with the operationalization of the Constitution of Kenya (2010). The Fourth Schedule, on the distribution of functions in the Constitution of Kenya (2010), allocated the responsibility for planning and development of pre-primary schools to County Governments. According to the schedule, the National Government retains responsibility for policy, standards and curriculum development, whereas the County Governments undertake implementation (RoK, 2010). The Constitution requires curriculum development to be done at the national level under KICD, while implementation of the ECDE curriculum is done at the county level. This means that the effort of ensuring that quality education is provided to ECDE learners rests on what happens at school level. Therefore, the effort made to ensure the provision of quality education for ECDE learners is the focus of this study.
PROBLEM STATEMENT
The study was conducted in light of the challenges faced by public ECDE centres in the area, which have affected the transition of pupils from the lower primary level to upper primary in West Pokot County, a transition that remains low. Research reports show that the majority of learners in West Pokot County transiting from ECDE to primary school record low competencies in reading and writing tasks (Uwezo Kenya, 2015). The Uwezo Kenya (2015) report noted that 41.31% of pupils in class three were able to read a hadithi (story), while more than half (68.69%) were not able to read and write. Therefore, the research sought information from ECDE teachers and headteachers on the provision of quality ECDE in public ECDE centres in West Pokot County, Kenya.
REVIEW OF LITERATURE
The quality of education is an important issue all over the world (Sattar, 2013; Mwende, 2014). The terms efficiency, effectiveness, equity and quality have often been used interchangeably. Okebukola (2005) defines quality as fitness for purpose. Ngware, Oketch and Ezeh (2011) contend that quality education comprises three interrelated aspects: the quality of human and material resources available for teaching (inputs); the quality of teaching practice (process); and the quality of results (outputs and outcomes). For his part, Odhiambo (2008) opined that quality education is determined by inputs such as curriculum content, instructional materials and equipment, school culture, teacher-pupil ratio, costs and guiding policies, quality assurance, learning duration and, above all, the quality of the teachers and management functions. From these definitions, there is no consensus on the appropriate definition of quality education; however, there exists substantial agreement on its dimensions. According to UNICEF (2000), these include healthy, well-nourished learners and supportive learning facilities; environments that are safe, protective and gender-sensitive; trained teachers and good methods for imparting knowledge and skills; adequate classrooms; and relevant curricula and skilful assessment.
In terms of goodness in performance, quality of education refers to excellence in performance against established, acceptable criteria and standards of good performance (Mosha, 2000, as cited by Mwamba, 2013). However, standards are social and dynamic; they change with time and societies and hence become value-laden (Sallis, 2002). In schools, achievement in examinations (Malekela, 2000) and learners' capabilities (Haki Elimu, 2007) are used as standards to assess the quality of education. High achievement refers to excellence in a programme, a school or individual learners' improvement (Manyanga, 2007). In other words, high achievement is used as a standard to improve or upgrade the performance of individuals, both teachers and students, in an institution (Lomas, 2007). This is evidenced in empirical studies. Abiero's (2013) study sought to establish overall parents' satisfaction with the quality of pre-primary education provided to their children in Bondo district, Kenya. The Expectancy-Disconfirmation Theory of Customer Satisfaction was used as the basic theory of the study, which employed a descriptive survey design. Results showed that most parents were somewhat satisfied with the quality of pre-primary education. A t-test of independent samples found no significant difference in parents' satisfaction with the quality of pre-primary education by school type or parents' gender, while a one-way analysis of variance found a significant difference in parents' satisfaction by parents' level of education and income. The present research examined how quality education is provided in West Pokot County.
MATERIALS AND METHODS
This research was conducted in West Pokot County, Kenya. The study adopted a descriptive survey research design, which enabled the collection of data using various qualitative and quantitative instruments. The target population included all public primary school headteachers (365) and ECDE teachers (682). In sampling, 30.0% of the teachers (205) and 10% of the headteachers (37) were selected through the stratified random sampling method. Data were collected via questionnaires and interviews. Quantitative data were coded, recorded and analysed with the help of the Statistical Package for the Social Sciences (SPSS) computer software (Version 23.0), using frequencies, percentages, means and standard deviations. Qualitative data from the interview guide were first transcribed and coded using numerals; themes were merged with the quantitative data in the results section and presented after the quantitative analysis.
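A minimal sketch (not the authors' SPSS workflow) of how the reported sample sizes follow from the stated sampling fractions, and of the kind of descriptive statistics used here (frequencies, percentages, means and standard deviations). The Likert ratings in the example are hypothetical placeholders.

```python
import math
import statistics

def sample_size(population: int, fraction: float) -> int:
    """Sample size as a fraction of the target population, rounded up."""
    return math.ceil(population * fraction)

print(sample_size(365, 0.10))   # 37 headteachers (10% of 365)
print(sample_size(682, 0.30))   # 205 ECDE teachers (30% of 682)

# Hypothetical 5-point ratings (1 = very low ... 5 = very high) for one item.
ratings = [3, 4, 2, 5, 3, 3, 4, 2, 3, 4]
mean = statistics.mean(ratings)
sd = statistics.stdev(ratings)                      # sample standard deviation
freq = {v: ratings.count(v) for v in sorted(set(ratings))}
percent = {v: round(100 * n / len(ratings), 1) for v, n in freq.items()}
print(f"M = {mean:.2f}, SD = {sd:.2f}", freq, percent)
```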
RESULTS AND DISCUSSIONS
The main objective of this paper was to determine the level of provision of quality education in ECDE centres. The Government of Kenya and the County Governments have clear policy frameworks to guide the provision of quality education in ECDE, and the policy has standards for quality and inclusive ECD services (Boke, 2014). It was important to determine how teachers rated pupils' acquisition of several competencies in ECDE centres in West Pokot County in order to measure the provision of quality education. The teachers were asked to rate ECDE pupils' competencies as very high, high, average, low or very low. The results of the analysis are presented in Table 1. On numeracy, the findings agree with Jepleting (2013), who found that children's performance in mathematics was low in ECDE centres. This result shows that pupils' level of arithmetic skills in the county is low.
On pupils' acquisition of reading and writing skills, 90 (43.9%) of the teachers rated pupils' competency as moderate and only 64 (31.2%) said it was very high. From the findings, it can be deduced that pre-school children's level of reading and writing was moderate (M = 3.33, SD = 0.98) in public ECDE centres in West Pokot County. The findings agree with research by Abiero (2013), which established that parents in Bondo Sub County had confidence in the quality of their teachers, how children were taught and achievement; they were happier with their children's achievement in reading than with achievement in arithmetic and social skills. The results suggest that since the Uwezo report (2015), there has been significant improvement in investment in ECDE by the County Government, raising the literacy level in the county to an average level over the past four years (from 2015). Moreover, the above statistics can be explained by the fact that the environments (climatic and developmental) of the four Sub Counties under study are not similar: West and South Pokot are less arid, while North and Central Pokot are semi-arid and less developed.
When asked to indicate pupils' ability to solve problems on their own, 69 (33.7%) of the teachers rated pupils' competency in problem-solving as low and 60 (29.3%) rated it as high. From these results, it is seen that a significant number of pupils in pre-school centres in West Pokot County cannot solve problems on their own while others can, as evidenced by the mean of 2.85 and standard deviation of 1.23. This could be due to the environment in which they receive instruction. The findings also show that 79 (38.5%) of the teachers indicated that their pupils' level of creativity and imagination was low and only 18 (8.8%) said it was high. From the findings, it can be deduced that pupils' level of creativity and imagination in class is low (M = 2.13, SD = 1.10). This could be due to the teaching approach used in their classrooms and the lack of adequate instructional materials, which are critical to pupils' development of creativity and imagination.
On pupils' school attendance resulting from their interest in learning, 76 (37.1%) of the teachers said it was high and only 22 (10.7%) reported that school attendance was very low. This implies that the majority of pupils in public ECDE centres have a high interest in learning (M = 3.55, SD = 1.24) and rarely miss school. On the transition rate, 97 (47.3%) indicated an average transition while 24 (11.7%) said it was very low. Based on the findings, the transition rate of pupils from one level to another in ECDE centres is average (M = 2.64, SD = 0.92), suggesting that challenges affecting transition exist in public schools. The result coincides with the differing responses given by headteachers to Interview Question No. 2. For instance, headteacher thirteen (HT13) said: "The transition has been upward every year; there is an increase." Another headteacher (HT2) indicated: "The transition rate has been fluctuating due to the upcoming school in the neighbourhood." Similarly, another headteacher (HT25) said: "There has been a drop in transition over the years in my school." Still another headteacher (HT10) had a different opinion from the other two (HT2 and HT25), indicating: "There has been upward enrolment in the ECDE due to the centre around the school." The responses made by headteachers confirm that transition challenges are prevalent in the majority of ECDE centres in West Pokot County. The transition challenges are mainly experienced when pupils enrol in the standard one class: the method of instruction and the environment differ from what they experienced in pre-school. For instance, desks and tables in pre-schools are arranged in a circle, but in standard one, desks are arranged in rows, and this setting in lower primary may affect pupils' transition. In addition, the availability of learning resources may also influence pupils' transition from pre-primary to primary school (Chepkonga, 2020). With regard to academic performance in exams and assessments, 67 (32.7%) said the performance was average. It can be deduced that the performance of pupils in examinations is average (M = 2.89, SD = 1.18) in a majority of schools, while 30.8% of teachers said it was low and 36.6% said it was high. Mwamba (2013) established that the majority of learners were not satisfied with the administration of tests in their schools, implying that irregular administration of continuous assessment tests (CATs) affected their performance in examinations.
When asked the level at which pupils were able to work in teams and groups, 60 (29.3%) of the teachers said it was high. The results show that teachers have made significant efforts (M = 3.54, SD = 1.23) to ensure cooperative learning takes place by allowing pupils to work together in teams and groups. On the combined responses generated from SPSS, only 20 (9.8%) teachers said that the quality of learning in their school was high, most (159, 77.6%) said the quality was average and 26 (12.7%) said that quality was low. In conclusion, these statistics show that the provision of quality education in public ECDE centres in West Pokot County was average (M = 2.92, SD = 1.10). The findings concur with a study conducted by Abiero (2013), which showed that the level of parents' satisfaction with the quality of pre-primary education in Bondo District was low (M = 1.86, SD = 0.38), implying that a majority of pre-primary school parents in Bondo district were only somewhat satisfied with the quality of pre-primary education provided to their children. However, the results differ from the study by Olaleye and Omotayo (2009) in Nigeria, which showed that learning activities in the selected pre-primary schools were averagely high. This shows that a disparity exists between Kenya and Nigeria in the provision of a conducive environment for learning in ECDE.
CONCLUSION AND RECOMMENDATIONS
The study found that the provision of quality education to ECDE pupils in West Pokot County is at an average level. From the results presented, out of the eight constructs of quality education provision, only two were rated high (M > 3.5): pupils' interest in learning (coming to school daily) and pupils' ability to work together in groups. Four indicators of quality were rated at an average level (2.5 < M < 3.5): pupils' ability to solve problems on their own, pupils' transition rate from one level to another, pupils' acquisition of reading and writing skills, and pupils' performance in exams and assessments. The lowest-ranked quality indicators (M < 2.5) were pupils' acquisition of numeracy skills and pupils' level of creativity and imagination in class. These findings show that the achievement of quality education objectives is average, and appropriate measures therefore need to be taken by the county government, school management and other stakeholders to address the issues with which pre-primary schools are struggling.
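The cut-off rule stated above (M > 3.5 high, 2.5 < M < 3.5 average, M < 2.5 low) can be expressed as a small classification sketch; the item means are those reported in the results section, and the handling of values exactly at 2.5 or 3.5 is an implementation choice not specified in the text.

```python
def band(mean: float) -> str:
    """Classify a mean rating on the 5-point scale into the study's bands."""
    if mean >= 3.5:
        return "high"
    if mean >= 2.5:
        return "average"
    return "low"

item_means = {
    "interest in learning (attendance)": 3.55,
    "working in teams and groups": 3.54,
    "reading and writing skills": 3.33,
    "performance in exams and assessments": 2.89,
    "problem solving": 2.85,
    "transition rate": 2.64,
    "creativity and imagination": 2.13,
}
for item, m in item_means.items():
    print(f"{item}: M = {m:.2f} -> {band(m)}")
```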
It is recommended that more teachers be employed on permanent and pensionable terms in the county; that the county government ensure that the resources and facilities required for learning are provided in adequate quantity; that teachers be trained in instructional methodology; and that multi-stakeholder involvement be pursued in addressing the quality issues facing ECDE schools in the area.
"year": 2021,
"sha1": "3f78e501f162c801079c1201e531ae97f5fabe65",
"oa_license": "CCBY",
"oa_url": "https://journals.eanso.org/index.php/eajes/article/download/432/473",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cc46ced107413acce7c041e2d35d745db1209e74",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
Divergent regeneration-competent cells adopt a common mechanism for callus initiation in angiosperms
Abstract In tissue culture, the formation of callus from detached explants is a key step in plant regeneration; however, the regenerative abilities in different species are variable. While nearly all parts of organs of the dicot Arabidopsis thaliana are ready for callus formation, mature regions of organs in monocot rice (Oryza sativa) and other cereals are extremely unresponsive to tissue culture. Whether there is a common molecular mechanism beyond these different regenerative phenomena is unclear. Here we show that the Arabidopsis and rice use different regeneration‐competent cells to initiate callus, whereas the cells all adopt WUSCHEL‐RELATED HOMEOBOX 11 (WOX11) and WOX5 during cell fate transition. Different from Arabidopsis which maintains regeneration‐competent cells in mature organs, rice exhausts those cells during organ maturation, resulting in regenerative inability in mature organs. Our study not only explains this old perplexity in agricultural biotechnology, but also provides common molecular markers for tissue culture of different angiosperm species.
Fig. 1. Cellular analysis of callus formation in rice. (A) Wild-type rice leaf (7-mm long) used as a source of explants for tissue culture on CIM for 2 weeks. (B), (C) Thin sections from time-0 (B) and 5-DAC (C) rice leaf explants cultured on CIM at the leaf base. (C) Callus formed primarily from the outer sheath: *, outer sheath cell that underwent division to form two callus cells; #, elongated outer sheath cell before cell division to form callus cells. Note that some inner sheath cells also underwent division. (D) Wild-type rice root explants from 5-day-old seedlings cultured on CIM for 2 weeks. Callus formed from the root tip region and the lateral root formation region. (E), (F) Thin sections of rice root explants cultured on CIM at time 0 (E) and 4 DAC (F). Note that callus formed from the phloem-pole pericycle in (F); asterisks indicate phloem-pole pericycle cells that underwent cell division to form two callus cells. Cell lineage in rice leaf and root tissue formation was described previously (Zeng et al., 2016). Scale bars: (A), (D) 1 mm; (B), (C), (E), (F) 50 μm.
In different regenerative systems, a group of fast-dividing cells, termed callus, usually forms (Ikeuchi, Sugimoto, & Iwase, 2013). In tissue culture, de novo organogenesis can occur on a type of callus which has high pluripotency for adventitious root and shoot regeneration (Ikeuchi et al., 2013; Kareem et al., 2016; Sugimoto et al., 2011). Studies of Arabidopsis thaliana suggested that callus formation on callus-inducing medium (CIM) in tissue culture follows the rooting developmental pathway (Atta et al., 2009; Che, Lall, & Howell, 2007; He, Chen, Huang, & Xu, 2012; Liu et al., 2014; Sugimoto, Jiao, & Meyerowitz, 2010). Two steps of cell fate transition occur in callus formation: Arabidopsis thaliana WUSCHEL-RELATED HOMEOBOX 11 (AtWOX11) is activated in the first step of cell fate transition from regeneration-competent cells to founder cells, and cell division occurs in the second step of cell fate transition from founder cells to callus cells, which are marked by AtWOX5 (Liu et al., 2014).
The application of plant regeneration in tissue culture has been practised for more than half a century; however, a key obstacle in this biotechnology is that the ability for callus initiation upon hormone induction is highly diverse in different species. For example, while almost all organs of Arabidopsis, a typical dicot plant, are able to produce callus throughout the plant's life (He et al., 2012; Sugimoto et al., 2010), mature regions of organs in many monocot cereal species are extremely unresponsive to in vitro culture techniques (Bhojwani, Evans, & Cocking, 1977; Cutler, Saleem, & Wang, 1991), which has perplexed agricultural applications of tissue culture in cereal species for a long time. Whether there is a common molecular discipline for callus formation in different species is unclear, and thus how to explain the cereal problem in tissue culture has so far been unanswered. In this study, we reveal this common discipline at the molecular level in angiosperms, which might serve as a marker to monitor tissue culture in future agricultural technologies.
In addition, we discuss the regenerative inability in mature organs of cereals.
Identification of regeneration-competent cells for callus initiation in rice
Dicots and monocots are two major branches of angiosperms, but the cellular and molecular mechanisms of regeneration in monocots are largely unclear. To analyze the cell lineage of callus formation in the monocot model plant rice (Oryza sativa), we cultured leaf and root explants on CIM. The mature rice leaf is unable to form callus in tissue culture (Bhojwani et al., 1977; Cutler et al., 1991). However, the base region of young leaves formed callus in our culture conditions (Fig. 1A) (Cutler et al., 1991), and this region is at the immature stage (Zeng et al., 2016). Callus initiated primarily from the outer sheath at 5 days after culture (DAC) (Fig. 1C). Cell division could also occasionally be observed in the inner sheath (Fig. 1C). In the root, callus can form at the root tip region and the lateral root formation region (Fig. 1D). Callus initiated from the phloem-pole pericycle (Fig. 1E, F), where lateral roots usually initiate during root development (Zeng et al., 2016). Therefore, bundle sheath cells in leaves and phloem-pole pericycle cells in roots serve as regeneration-competent cells for callus initiation in rice. It is possible that other immature cells in the vasculature may also be competent for callus initiation (see the analysis for maize, below).
OsWOX11/12B and OsWOX5 are involved in callus formation in rice
To explore the molecular mechanism that confers the ability to regenerate on certain tissues in rice, we performed an RNA-sequencing experiment. We identified candidate genes that were highly upregulated during callus formation from the basal part of young leaf explants about 7 mm in length (Fig. S1; Table S1). Oryza sativa WOX11 (OsWOX11) and OsWOX12B, which are intermediate-clade WOX genes, and OsWOX5, which belongs to the WUS clade, were among the highly upregulated genes. To test whether OsWOX11 is involved in rice regeneration, we analyzed the callus formation ability of the rice Oswox11-1 mutant (Zhao, Hu, Dai, Huang, & Zhou, 2009). We dissected the 7-mm young leaf explant into three segments: two 1-mm segments (segments 1 and 2) at the leaf base and the remaining 5 mm as the third segment (segment 3) (Fig. 2A). In the wild type, callus formed in segments 1 and 2 but not in segment 3 (Fig. 2B). In the Oswox11-1 mutant, both segments 1 and 2 showed decreased callus formation rates compared with those of the wild type (Fig. 2C, D). These data suggested that OsWOX11 could be involved in callus formation in rice, and the partially reduced regenerative ability of the Oswox11-1 mutant may be due to the redundant function of OsWOX12B.
In addition, overexpression of OsWOX11 in Arabidopsis resulted in dramatically rapid callus formation in the Arabidopsis wild-type background. Together, these results suggest that OsWOX11 and OsWOX5 may serve as molecular markers of cell fate transition during callus formation.
Angiosperms may have a common mechanism for callus initiation
The diversification of monocots and dicots was estimated to have occurred in the Jurassic (Zeng et al., 2014). In the dicot Arabidopsis, procambium cells serve as regeneration-competent cells for callus initiation (Fig. S5).
To test whether this molecular pathway is generally involved in regeneration, we carried out quantitative reverse transcription polymerase chain reaction (qRT-PCR) analyses to quantify the transcript levels of WOX11 and WOX5 homologs in the dicot poplar and the monocot maize during callus formation in leaf explants. Our data showed that expression levels of WOX11 and WOX5 homologs were dramatically induced on CIM in poplar and maize (Fig. S6). Therefore, it is possible that the molecular mechanism in regeneration could be conserved among angiosperms.
Overall, activation of WOX5 is the marker of callus cell formation.
We do not exclude the possibility that some non-WOX11/12-mediated pathways may also be able to activate WOX5 for callus formation (Liu et al., 2014;Sheng et al., 2017).
Rice and Arabidopsis have different strategies for maintenance of regeneration-competent cells during organ maturation
It is well known that the dicot Arabidopsis and monocot cereals have different regenerative abilities. Almost all organs of Arabidopsis are able to produce callus during the entire life of the plant (He et al., 2012;Sugimoto et al., 2010). We dissected mature Arabidopsis leaves into four segments (segments 1 to 4, from the base to the tip), and all dissected segments of leaf explants were able to form callus (Fig. 3A). In contrast, mature organs of many monocot cereal species are extremely unresponsive to in vitro culture techniques (Bhojwani et al., 1977;Cutler et al., 1991). We also dissected mature leaves from rice and maize into four segments from the base to the tip, and only the base region in segment 1 (immature region) was able to form callus on CIM (Figs. 3C, D and S7A). This is consistent with the results of previous studies showing that only the very base region of leaves from many monocot cereals, including barley, rice, wheat, oat, and maize, can form callus (Ahmadabadi, Ruf, & Bock, 2007;Becher, Haberland, & Koop, 1992;Chen, Xu, Loschke, Tomaska, & Rolfe, 1995;Chen, Zhuge, & Sundqvist, 1995;Cutler et al., 1991;Wernicke & Milkovits, 1984;Wernicke, Brettell, Wakizuka, & Potrykus, 1981;Zamora & Scott, 1983).
We hypothesize that the diversification of regenerative abilities in Arabidopsis and cereals is due to different strategies for the maintenance of regeneration-competent cells in mature organs. Regeneration-competent cells, such as procambium cells, are maintained in Arabidopsis leaves as they mature. In cereals, however, regeneration-competent cells have differentiated into specific cell types (Zeng et al., 2016) and thus have lost their competence upon maturation from the tip region to the base region of the leaf. For example, the outer sheath in the mature rice leaf differentiated into large parenchyma cells (Zeng et al., 2016) (Fig. S7B), and in mature maize leaves the bundle sheath differentiated into Kranz anatomy (Fig. S7F); these cells therefore lost the ability to form callus.
To test this hypothesis at the molecular level, we analyzed expression patterns of WOX11 in mature leaves of Arabidopsis and rice in tissue culture. We observed that AtWOX11 was induced in the vasculature of all dissected segments of the mature Arabidopsis leaf on CIM (Fig. 3B); this may have rendered the whole Arabidopsis leaf competent to form callus at leaf maturity (see the model in Fig. 3F). In contrast, the GUS signal from the OsWOX11 pro :GUS line was present at the base part of segment 1 (immature region), but was barely detected in segments 2−4 (mature region) of the dissected rice leaf explant on CIM (Fig. 3E). Therefore, as the rice leaves matured, the differentiation of regeneration-competent cells resulted in the loss of their molecular competence for callus initiation (see the model in Fig. 3F).
Next, we tested the callus formation ability in rice leaf explants overexpressing OsWOX11. The data showed that overexpression of OsWOX11 in rice cannot reverse the fate of mature and differentiated vascular cells to be competent for callus formation (Fig. S8). Callus formation requires not only WOX11 but also many other molecular pathways such as LATERAL ORGAN BOUNDARIES DOMAIN genes (Fan, Xu, Xu, & Hu, 2012), WOUND INDUCED DEDIFFERENTIATION 1 (Iwase et al., 2011;Iwase et al., 2017), and some epigenetic factors (He et al., 2012;Li et al., 2011) in Arabidopsis. PLETHORAs contribute to the pluripotency of callus (Kareem et al., 2015). It will be interesting to analyze these pathways during callus formation in rice and to test whether it is possible to endow differentiated cells with competence for callus formation in the future.
CONCLUSION AND PERSPECTIVE
In this study, we have provided cellular and molecular frameworks of callus formation in angiosperms. Regeneration-competent cells differ between the dicot Arabidopsis and the monocot rice, whereas those diverse cells adopt a common mechanism involving WOX11 and WOX5. Previous studies indicate that callus formation follows the rooting pathway (He et al., 2012; Liu et al., 2014; Sugimoto et al., 2010) and that callus is a group of root primordium-like cells (Liu et al., 2014). In addition, the regeneration-competent cells for callus initiation can also initiate roots (Liu et al., 2014). During adventitious rooting in Arabidopsis, AtWOX11 controls root founder cell establishment (Liu et al., 2014) and AtWOX5 is required for root primordium formation. Based on these studies, we hypothesize that the WOX11−WOX5-mediated root initiation mechanism in the common ancestor of angiosperms was borrowed and developed for callus initiation in regeneration-competent cells of dicots and monocots, although the morphology of these cells has changed during evolution. Understanding the behaviour of regeneration-competent cells in different plant species is the basis for utilizing and improving regenerative abilities in tissue culture.
… Shan-Xin) were used as wild types in this study unless otherwise noted.
To produce OsWOX11 pro :GUS transgenic plants, the 4-kb promoter of OsWOX11 was PCR amplified and inserted into pBImUB (modified from pBI101). AtWOX11 pro :GUS and AtWOX5 pro :GUS transgenic plants were produced as described previously (Liu et al., 2014).
Thin sectioning, in situ hybridization and dual luciferase assay
Thin sectioning was performed as previously described (Zeng et al., 2016). For in situ hybridization, gene fragments used to prepare probes were subcloned into pGEM-T Easy. In situ hybridization analyses were performed as reported previously (Zeng et al., 2016). To construct OsWOX5 pro :LUC, the promoter of OsWOX5 was PCR amplified and inserted into the pGreenII-0800 vector (Hellens et al., 2005). The dual luciferase assay was performed using the Dual-Luciferase Reporter Assay System (Promega, Madison, WI). The primers used for plasmid construction are listed in Table S2.
qRT-PCR and RNA-sequencing analyses
RNA extraction and qRT-PCR were performed as previously described (He et al., 2012), using gene-specific primers. The qRT-PCR results are shown as relative transcript levels, normalized against that of ACTIN. The primers used for real-time PCR are listed in Table S2.
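The text states only that transcript levels were normalized against ACTIN and does not give the quantification formula. As one common way such relative levels are computed, the widely used 2^(-ΔΔCt) method is sketched below; all Ct values in the example are hypothetical.

```python
def relative_expression(ct_gene, ct_actin, ct_gene_ref, ct_actin_ref):
    """Fold change of a target gene relative to a reference (e.g. time-0) sample,
    using the 2^(-ΔΔCt) method with ACTIN as the normalizer."""
    delta_ct = ct_gene - ct_actin              # normalize to ACTIN in the sample
    delta_ct_ref = ct_gene_ref - ct_actin_ref  # same for the reference sample
    return 2 ** -(delta_ct - delta_ct_ref)

# Example: an OsWOX11-like target at 2 DAC vs. time 0 (hypothetical Ct values).
print(relative_expression(ct_gene=24.0, ct_actin=18.0,
                          ct_gene_ref=28.0, ct_actin_ref=18.2))
```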
For RNA-sequencing analyses, RNA was isolated from the base region of time-0, 2-DAC and 5-DAC rice leaf explants (Fig. S1). Deep sequencing was carried out using the Illumina HiSeq3000 platform following the manufacturer's instructions (Illumina, San Diego, CA).
Library construction and deep sequencing were performed by Genergy Biotechnology Co. Ltd (Shanghai, China). The raw data comprised 100-bp paired-end sequences. Raw sequences were aligned to the rice genome with TopHat software (Trapnell et al., 2012), and differentially expressed gene analysis was performed using DESeq (Anders & Huber, 2010).
Highly upregulated genes were defined as fold change > 10.0 and p value < 0.05. The analyzed data are shown in Table S1.
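The stated selection rule (fold change > 10.0 and p value < 0.05) applied to a DESeq-style result table can be sketched as below; the column names are illustrative, not the authors' exact output, and the values are hypothetical.

```python
import pandas as pd

deseq = pd.DataFrame({
    "gene": ["OsWOX11", "OsWOX12B", "OsWOX5", "geneX"],
    "fold_change": [25.3, 14.1, 11.8, 2.4],   # hypothetical values
    "pval": [0.001, 0.004, 0.010, 0.200],
})
# Keep genes satisfying both thresholds used in the study.
highly_up = deseq[(deseq["fold_change"] > 10.0) & (deseq["pval"] < 0.05)]
print(highly_up["gene"].tolist())   # ['OsWOX11', 'OsWOX12B', 'OsWOX5']
```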
ACKNOWLEDGMENTS
We thank Gynheung An (Kyung Hee University), Hongxia Zhang and Hongtao Liu (Institute of Plant Physiology and Ecology) for seeds of the rice mutant, wild-type poplar and wild-type maize, respectively. This work was supported by grants from the National Natural Science …
"year": 2017,
"sha1": "f6ac9b5f6d35e7c86846d21308ed85850026a897",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/reg2.82",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f6ac9b5f6d35e7c86846d21308ed85850026a897",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
A nonhydrostatic atmospheric dynamical core on cubed sphere using hybrid multi-moment finite-volume/finite difference methods: formulations and preliminary tests
A nonhydrostatic dynamical core has been developed using the multi-moment finite volume method, which ensures rigorous numerical conservation. To represent the spherical geometry free of polar problems, the cubed-sphere grid is adopted. A fourth-order multi-moment discretization is applied to solve the governing equations cast in local curvilinear coordinates on each patch of the cubed sphere through a gnomonic projection. In the vertical direction, a height-based terrain-following grid is used to deal with the topography, and a conservative finite difference scheme is adopted for the spatial discretization. The dynamical core adopts the nonhydrostatic governing equations. To get around the CFL stability restriction imposed by sound-wave propagation and the relatively small grid spacing in the vertical direction, a dimensional-splitting time integration algorithm using the HEVI (horizontally explicit and vertically implicit) strategy is implemented by applying the IMEX (implicit-explicit) Runge-Kutta method. The proposed model is checked against widely used benchmark tests in this study. The numerical results show that the multi-moment model has superior solution quality and great practical potential as a numerical platform for the development of atmospheric general circulation models.
Introduction
The multi-moment methods were proposed by introducing two or more kinds of moments, which are quantities used to describe the spatial distributions of physical fields, such as pointwise values, volume (surface or line) integrated averages, derivatives of different orders and so on. The different moments can be used directly as model variables [1,2] or as constraint conditions [3] to derive the updating formulations of the unknowns. With more local degrees of freedom (DOFs), the multi-moment schemes can accomplish high-order spatial reconstructions within compact stencils. As a result, they have better flexibility in dealing with various grid topologies within a unified numerical framework and are promising for developing highly scalable models running on massive parallel clusters, like the discontinuous Galerkin and spectral element schemes, among others. Furthermore, the moments defined in our schemes have clear physical meanings, and the resulting discretization procedures are flexible in achieving different numerical properties and simple to implement in various applications [4].
To develop numerical models for atmospheric dynamics in spherical geometry, computational meshes with quasi-uniform grid spacing, such as the cubed-sphere grid [5], the icosahedral geodesic grid [6,7] and the Yin-Yang grid [8], have gained more and more attention in the past several decades [9] due to the continuously increasing demands of refining the grid resolutions of global models. With the application of the multi-moment finite volume method, a unified high-order formulation for the global shallow-water equations (SWEs) has been proposed on these three grids [10]. The numerical results of spherical shallow-water test cases verified that the compact spatial reconstructions realized using the multi-moment concept are helpful to suppress the extra numerical errors around the patch edges on the cubed sphere and to build a rigorously third-order model on the icosahedral grid with hexagonal and pentagonal elements. Following our previous studies, a three-dimensional nonhydrostatic model for atmospheric dynamics using the multi-moment finite volume scheme is reported in this paper. The cubed-sphere grid is adopted in this study because high-order schemes designed on a Cartesian grid can be straightforwardly implemented on the structured quadrilateral grid, and the resulting model is more computationally efficient than those on the icosahedral-hexagonal grid. Additionally, in comparison with the overset Yin-Yang grid, the numerical model on the cubed sphere can naturally assure numerical conservation without complex corrections if the adopted scheme is conservative and the flux-form governing equations are solved.
To extend the global shallow-water model to a three-dimensional dynamical core, attention should be paid to developing a proper vertical discretization scheme and an efficient time-marching scheme that allows a time step suitable for practical applications.
The multi-moment scheme can also be applied in the vertical direction, and a completely fourth-order model is achieved, as we have reported in [11]. However, updating the DOF defined at the surface introduces some difficulties for practical models with physical processes; e.g., the evaluation of derivatives of flux functions in the vertical direction at the surface requires finer grid resolution to assure stability within the boundary layer, and the physical parameterizations would have to be modified to calculate the values of physical sources at the cell centers as well as at their endpoints. Thus, a conservative three-point finite-difference scheme is designed in this study, in which the DOFs are defined as pointwise values of the unknowns only at cell centers in the vertical direction. Since no DOF is defined at the surface, only the flux functions, excluding their derivatives, are evaluated there. Additionally, this is helpful for implementing the dynamics-physics coupling in practice, as is done in many existing models.
With very fine horizontal resolution, the hydrostatic approximation widely used in many existing models is no longer valid. Since the nonhydrostatic governing equations are adopted in this study, the propagation of sound waves in the vertical direction is directly solved and must be considered carefully in designing the time-marching scheme. A fully explicit model in three dimensions can only run with a very restrictive time step, compared with the time scale of the dominant phenomena, due to the very large ratio between the grid spacings in the horizontal and vertical directions. A dimensional-splitting scheme using the horizontally explicit and vertically implicit (HEVI) strategy is therefore adopted here to design a time-marching scheme with an acceptable time integration step for the practical model. In this study, the implicit-explicit (IMEX) Runge-Kutta method [12,13] is adopted. All terms related to the partial derivatives with respect to the vertical direction and the stiff source terms, e.g. the gravity force term, are treated implicitly, while all numerical operations requiring data exchange between the subdomains decomposed in the horizontal directions for parallel computation are treated explicitly to preserve high scalability. The resulting model can run with a time step determined by the CFL stability condition in the horizontal directions.
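The HEVI idea can be illustrated on a toy scalar problem dq/dt = F_h(q) + F_v(q), where a slow "horizontal" tendency is integrated explicitly and a stiff linear "vertical" term implicitly. The sketch below uses a first-order IMEX (Euler) splitting for clarity; it is not the specific IMEX Runge-Kutta tableau of [12,13] used by the model.

```python
def imex_euler_step(q, dt, f_h, k):
    """One step: explicit for f_h, implicit (exact solve) for dq/dt = -k*q."""
    q_star = q + dt * f_h(q)          # horizontally explicit stage
    return q_star / (1.0 + dt * k)    # vertically implicit stage (linear solve)

k = 50.0                              # stiff "vertical" rate (fast sound waves)
f_h = lambda q: -0.2 * q              # slow "horizontal" tendency
q, dt = 1.0, 0.1                      # dt exceeds the explicit limit ~2/k
for _ in range(20):
    q = imex_euler_step(q, dt, f_h, k)
print(q)                              # remains bounded despite the large dt
```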
The rest of this paper is organized as follows. In section 2, the numerical formulations of the multi-moment nonhydrostatic dynamical core are described in detail. Some widely used benchmark tests are then checked to verify the performance of the proposed numerical model in section 3, and a short summary is given in section 4.
Governing equations
On each patch of the cubed sphere, the nonhydrostatic governing equations for atmospheric dynamics with the shallow-atmosphere assumption are written in flux form as
∂q/∂t + ∂e(q)/∂ξ + ∂f(q)/∂η + ∂h(q)/∂ζ = s(q),
where (ξ, η) are the local horizontal coordinates on each patch of the cubed sphere, ζ is a height-based terrain-following coordinate in the vertical direction, q are the dependent (predicted) variables, e(q), f(q) and h(q) are the flux functions in the ξ, η and ζ directions, respectively, and s(q) denotes all source terms. The detailed expressions of the governing equations are described below, together with a brief introduction of the transformation laws of the curvilinear coordinates.
In the horizontal directions, the coordinates are ξ = Rα and η = Rβ, where R is the radius of the Earth and α, β are the central angles of the gnomonic projection, varying within [−π/4, π/4] on each patch (details can be found in [16]). In the vertical direction, ζ ∈ [0, z_t] is a uniform coordinate, where z_t is the altitude of the model top. A non-uniform grid can be generated by a transformation ζ̂ = T(ζ), which has smaller grid spacing near the surface to better represent the effects of the topography and the atmospheric boundary layer. The formulation used to generate the non-uniform grid adopted in this study is as follows. The smallest grid spacing of the coordinate ζ̂ is Δζ_min at the surface and the largest one is Δζ_max at the model top. In the regions close to the surface, [0, ζ_1], and to the model top, [ζ_2, z_t], several layers of uniform cells may be arranged, while in the region ζ ∈ [ζ_1, ζ_2] the grid spacing gradually increases from Δζ_min to Δζ_max. Additionally, we require the second-order derivatives of the transformation T to vanish at ζ = ζ_1 and ζ = ζ_2 to make the transformation C2-continuous. The grid transformation is thus expressed as a polynomial with coefficients c_r (r = 0 to 5) determined by these constraint conditions. Considering the surface topography z_s(ξ, η), the terrain-following coordinate is then built following [17] as z = ζ̂ + z_s(ξ, η)·(⋯), where z is altitude and the scale height S = 5000 m is adopted in this study.
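Since the paper's constraint conditions are not reproduced above, the following sketch only illustrates how six polynomial coefficients c_0 to c_5 could be obtained from six assumed conditions (prescribed values and slopes, i.e. grid-spacing ratios, at ζ_1 and ζ_2, plus zero second derivatives there for C2 continuity) by solving a 6 × 6 linear system. All numbers are illustrative and given in normalized height units.

```python
import numpy as np

def quintic_coeffs(z1, z2, v1, v2, s1, s2):
    """Solve for c_r in T(z) = sum_r c_r * z**r from value/slope/curvature data,
    with zero curvature imposed at both ends (assumed C2 conditions)."""
    def rows(z):
        val = [z**r for r in range(6)]
        der = [r * z**(r - 1) if r >= 1 else 0.0 for r in range(6)]
        cur = [r * (r - 1) * z**(r - 2) if r >= 2 else 0.0 for r in range(6)]
        return val, der, cur
    v_a, d_a, c_a = rows(z1)
    v_b, d_b, c_b = rows(z2)
    A = np.array([v_a, d_a, c_a, v_b, d_b, c_b], dtype=float)
    b = np.array([v1, s1, 0.0, v2, s2, 0.0])
    return np.linalg.solve(A, b)

# Illustrative transition between a fine near-surface layer and a coarse upper
# region (slopes mimic d(zeta_hat)/d(zeta), i.e. local spacing ratios).
c = quintic_coeffs(z1=0.2, z2=0.8, v1=0.05, v2=0.6, s1=0.25, s2=2.0)
print(c)
```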
The horizontal transformation laws between the longitude-latitude (λ−φ) grid and the local curvilinear coordinates on each patch of the cubed sphere are defined as follows. The contravariant base vectors a^ξ and a^η have different expressions on the different patches and can be derived from the projection relations [18].
The horizontal contravariant metric tensor G^{ij}_H, the Jacobian of the horizontal transformation J_H and the contravariant velocity components (ũ, ṽ) follow from the same projection relations, where X = tan α, Y = tan β, δ is the metric factor of the gnomonic projection and v = (u_s, v_s) is the velocity vector on the longitude-latitude grid.
The details of the projection relations and transformation laws on the cubed sphere can be found in [18,19,16,14].
In the vertical direction, the governing equations in the height-based terrain-following coordinates can be derived through the chain rules [15]. The Jacobian of the vertical transformation is J_V = ∂z/∂ζ, which can be directly obtained through Eq. (4). The components of the contravariant metric tensor related to the vertical transformation are G^{13}_V = ∂ζ/∂ξ|_z and G^{23}_V = ∂ζ/∂η|_z. For the idealized test cases in this study, they can be evaluated analytically. The dependent variables adopted in this study are [20]
q = (Jρ, Jρũ, Jρṽ, Jρw, Jρθ)^T,
where J is the Jacobian of the full transformation, J = J_H J_V, ρ is density, ũ and ṽ are the contravariant velocity components in the horizontal directions, w is the vertical velocity, θ is the potential temperature and a superscript prime denotes the deviation with respect to the hydrostatic reference state, e.g. ρ′ = ρ − ρ̄ and p′ = p − p̄. The reference state is derived from the hydrostatic balance in the vertical direction, ∂p̄/∂z = −ρ̄ g, and usually has an analytic expression obtained from the initial condition in the benchmark tests.
The flux functions in the three directions are written as
e = J (ρũ, ρũ² + G^{11}_H p, ρũṽ + G^{12}_H p, ρũw, ρũθ)^T,
with analogous expressions for f and h in the η and ζ directions. The source term s collects several contributions: s_H1 contains the derivatives of the reference pressure p̄; s_H2 contains the derivatives of the reference potential temperature θ̄; s_P is the source term due to the horizontal grid transformation [14]; s_C is the source term representing the Coriolis force, which takes different forms on patches one to four, on patch five and on patch six [14], where Ω is the rotational speed of the Earth; s_G is the source term representing the gravity force, where g is the gravitational constant; and s_R is the source term introducing Rayleigh friction near the model top, where the coefficient τ_R determines the strength of the Rayleigh friction and (u_f, v_f, 0) denotes a reference velocity field. Rayleigh friction is adopted to absorb the waves reflected from the top boundary, where a solid-wall boundary condition is applied to assure numerical conservation.
Definition of degrees of freedom
The 3-point multi-moment constrained finite volume (MCV) method [3] is adopted to implement the spatial discretization in the horizontal directions. Nine pointwise values are defined as local DOFs within each cell to construct the 3-point MCV scheme in two dimensions, as shown in Fig. 1 for cell C^{ijkp}, where the superscripts i, j, k denote the indices in the ξ, η (i, j = 1 to N_h) and ζ (k = 1 to N_v) directions and p = 1 to 6 is the patch number. The solution points are equidistantly distributed within the cell, and the DOFs defined at the cell surfaces are shared by adjacent cells. In the vertical direction, all local DOFs are defined at the centers of the line segments, where the conservative finite difference scheme is adopted to accomplish the spatial discretization.
The total number of computational cells adopted by the proposed model is 6 N_h² N_v. The resolution in the horizontal direction along the equator is 90°/N_h in terms of the number of computational cells and 45°/N_h in terms of the number of DOFs. In the vertical direction, the total number of layers is N_v. Hereafter, we denote the computational mesh by its resolution N_h × N_v.
Spatial discretizations
At solution point P^{ijkp}_{mn}, where the superscripts denote the indices of the cell and the subscripts m, n = 1 to 3 are the local indices of the DOFs within the corresponding computational cell, the local DOF is updated through a differential-form formulation, dq/dt = −(e_ξ + f_η + h_ζ) + s, where e_ξ, f_η and h_ζ are numerical approximations of the derivatives of the flux functions in the different directions at the solution point.
Spatial discretizations in horizontal directions
The MCV scheme in the multi-dimensional case can be implemented by applying the one-dimensional formulations in the different directions one by one [3]. Thus, we describe the numerical procedure of the spatial discretization in the ξ-direction as follows; similar formulations can be derived in the η-direction, and the details of the multi-dimensional MCV discretization can be found in [3].
Consider the one-dimensional governing equations in the ξ-direction, ∂q/∂t + ∂e/∂ξ = s. Three local DOFs are defined within line segment L_n^{ijkp} as shown in Fig. 2 (one of the 3 line segments along the ξ-direction in Fig. 1), i.e., q_{1n}^{ijkp} and q_{3n}^{ijkp} at the cell interfaces (solid triangles) and q_{2n}^{ijkp} at the cell center (solid square). Hereafter we keep only the indices in the ξ-direction for brevity. As in Eq. (26), the semi-discrete formulation for each DOF is written in differential form. Different formulations are used to evaluate the derivatives of the flux functions e at the cell interfaces and at the cell center, as shown in Fig. 3(a) and (b), respectively.
• Derivatives of flux functions e at a cell interface (ξ_{i1}). At a cell interface, the derivative of the flux function can be evaluated in the two adjacent cells, L_{i−1} and L_i, as shown in Fig. 3(a), and in general two different results are obtained. We therefore solve a derivative Riemann problem (DRP) to obtain an upwind formulation combining the two one-sided derivatives, where Q and E are the piecewise spatial reconstructions of the predicted variables q and the flux functions e, and the matrix a_ξ is determined by the selected approximate Riemann solver in the ξ-direction.
Using the multi-moment concept, several interpolation profiles [16,3,21,22,23,24] for spatial reconstruction have been developed for the schemes with different numerical properties. Considering the trade-off between the accuracy and the efficiency, the fourth-order profile developed in [16] is adopted in this study. The spatial reconstruction for line segment L i−1 is a Lagrangian interpolation polynomial using four pointwise values of predicted variables or flux functions at ξ i−1,1 , ξ i−1,2 , ξ i−1,3 and ξ i2 as constraint conditions. And the pointwise values at ξ i−1,2 , ξ i1 , ξ i2 and ξ i3 are adopted to build the spatial reconstruction within line segment L i . The resulting multi-moment scheme is of fourth-order accuracy [16].
Three approximate Riemann solvers were investigated in [25] for atmospheric dynamics. Because of the significant influence of the Coriolis and gravity forces, especially in large-scale atmospheric flows, waves propagate differently than in the Euler equations of gas dynamics, so the Riemann solver must be chosen carefully to accurately reproduce wave propagation in the atmosphere. A modified local Lax-Friedrichs (LLF) approximate Riemann solver is used in this study for its simplicity. With the LLF solver, the matrix a_ξ reduces to the maximal absolute eigenvalue of the Jacobian matrix ∂e(q)/∂q, which represents the maximal propagation speed associated with the sound wave; in the ξ-direction it involves the sound speed in the transformed coordinates, with the sound speed in physical space c = sqrt(γp/ρ). In this study, the LLF solver is then modified by adopting a much smaller coefficient a_ξ, scaled by a parameter K_h that adjusts the effective numerical viscosity.
Since the physically-significant waves for large-scale atmospheric dynamics propagate much slower than the sound wave, this modification is expected to improve the accuracy of the proposed global model.
In this study, parameter K h = 0.2 is adopted.
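For a scalar problem, the interface step just described can be sketched as below: two one-sided four-point Lagrangian reconstructions (three own points plus the neighbour's centre point, as in the stencil description above) give two derivative estimates at the shared interface, which are combined in an upwind fashion with an LLF-type dissipation coefficient. The centred-average-plus-jump form of the combination and the scaling a_ξ = K_h(|ũ| + c̃) are assumptions; the paper's exact DRP formula is not reproduced.

```python
import numpy as np

def lagrange_deriv_weights(xs, x0):
    """Weights w such that w @ f(xs) approximates f'(x0) from the interpolating
    polynomial through the nodes xs."""
    n = len(xs)
    w = np.zeros(n)
    for j in range(n):
        s = 0.0
        for m in range(n):
            if m == j:
                continue
            p = 1.0/(xs[j] - xs[m])
            for k in range(n):
                if k != j and k != m:
                    p *= (x0 - xs[k])/(xs[j] - xs[k])
            s += p
        w[j] = s
    return w

def interface_flux_derivative(q_L, q_R, nodes_L, nodes_R, x_int,
                              flux, u_tilde, c_tilde, K_h=0.2):
    """Upwind (DRP-style) estimate of d(flux)/dx at a cell interface, scalar sketch."""
    wL, wR = lagrange_deriv_weights(nodes_L, x_int), lagrange_deriv_weights(nodes_R, x_int)
    dE_L, dE_R = wL @ flux(q_L), wR @ flux(q_R)   # one-sided flux derivatives
    dQ_L, dQ_R = wL @ q_L, wR @ q_R               # one-sided variable derivatives
    a = K_h*(abs(u_tilde) + c_tilde)              # assumed modified LLF coefficient
    return 0.5*((dE_L + dE_R) - a*(dQ_R - dQ_L))

# toy usage: linear advection flux e(q) = u*q; interface at x = 0, unit cell width
u = 1.0
nodes_L = np.array([-1.0, -0.5, 0.0, 0.5])
nodes_R = np.array([-0.5, 0.0, 0.5, 1.0])
d = interface_flux_derivative(np.sin(nodes_L), np.sin(nodes_R), nodes_L, nodes_R,
                              x_int=0.0, flux=lambda q: u*q, u_tilde=u, c_tilde=0.0)
print("d(e)/dx at interface ~", d, " exact:", u*np.cos(0.0))
```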
• Derivatives of flux functions e at the cell center (ξ_{i2}). To guarantee numerical conservation, the updating formulation of the DOF at the cell center is derived from a constraint condition based on the line-integrated average of the predicted variables over the cell, which is approximated by a quadrature of the above spatial reconstruction polynomial.
Thus, the updating formulation for the DOF q_{i2} at the cell center follows from this constraint: the updating formulations of the DOFs at the cell interfaces have been obtained above, and the line-integrated average is advanced by a flux-form formulation in which the flux functions at the cell interfaces are evaluated directly from the known DOFs defined there.
The resulting scheme is conservative in terms of line-integrated average calculated through Eq. (35).
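The following scalar sketch shows one way the cell-centre DOF can be advanced so that a prescribed cell average obeys a flux-form update. Simpson's rule is used here as the assumed relation between the three point values and the line-integrated average; the actual quadrature derived from the paper's reconstruction may use different weights.

```python
def update_cell(q1, q2, q3, dq1_dt, dq3_dt, e_left, e_right, dxi):
    """Advance the three DOFs of one cell so the cell average is conservative.

    Assumed constraint (Simpson):  q_bar = (q1 + 4*q2 + q3) / 6
    Flux-form average update:      d(q_bar)/dt = -(e_right - e_left) / dxi
    Scalar sketch; source terms omitted.
    """
    dqbar_dt = -(e_right - e_left)/dxi
    # invert the constraint for the centre-DOF tendency
    dq2_dt = (6.0*dqbar_dt - dq1_dt - dq3_dt)/4.0
    return dq1_dt, dq2_dt, dq3_dt

# usage: interface tendencies come from the DRP step above, e_left/e_right are
# the pointwise fluxes at the two interfaces of the cell
print(update_cell(1.0, 1.1, 1.2, -0.3, -0.5, 2.0, 2.4, 0.5))
```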
Spatial discretizations in vertical direction
In the vertical direction, a conservative finite difference scheme is developed to solve the equations ∂q/∂t + ∂h/∂ζ = s. The key task is again to evaluate the derivative of the flux functions h at the center of line segment L_k (shown in Fig. 4). Here, we still omit the indices in the horizontal directions for brevity.
To design a conservative scheme, we define auxiliary variables g(ζ) whose derivative reproduces the flux divergence, so that the derivatives of the flux functions can be calculated by a flux-form formulation from the values of g at the endpoints of the line segment. With this relation, the updating formulation Eq. (39) is recast as Eq. (42), which is of flux form, and the resulting model is numerically conservative. At the endpoint ζ_{k+1/2}, the values of the auxiliary variables g are determined by solving a Riemann problem, where G and Q are one-dimensional piecewise polynomials for the auxiliary variables g and the predicted variables q, and a similar modified LLF Riemann solver is applied. A three-point stencil is used for the spatial reconstruction in the vertical direction. Two reconstruction polynomials can be obtained: a quadratic polynomial, whose coefficients are determined by the constraint conditions, and a linear polynomial. In this study, the linear polynomial is adopted to calculate the benchmark tests, since no notable improvement in computational accuracy was observed with the quadratic polynomial, while the higher-order polynomial sometimes introduces non-physical numerical oscillations. The parameters used to modify the LLF Riemann solver in the vertical direction are chosen case by case: a relatively large value in the Held-Suarez long-term integration test, K_v^B = 1 in the other hydrostatic cases, and a small value in the non-hydrostatic cases. As the leading numerical diffusion term of the above upwind finite difference scheme with linear reconstruction is proportional to the fourth-order derivative of the dependent variable, enlarging the coefficient K_v^B is equivalent to adding fourth-order vertical diffusion to the vertical momentum (w) equation. Thus, a small value of K_v^B is used in the non-hydrostatic cases to improve accuracy, while relatively large values are chosen in the hydrostatic cases for robustness. In the numerical experiments, we found the additional numerical diffusion helpful for suppressing numerical oscillations and stabilizing the proposed model in some cases, including the baroclinic wave test and the Held-Suarez test.
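A minimal sketch of the vertical flux-form evaluation for one scalar column is given below: interface values of the auxiliary flux come from the linear (two-point) reconstruction combined with LLF-type dissipation scaled by a coefficient K_v, and the derivative at each cell centre is the difference of the two interface values divided by the spacing. The boundary closure and the exact modified coefficient are assumptions.

```python
import numpy as np

def vertical_flux_divergence(q, h, a_speed, dz, K_v=1.0):
    """Conservative upwind estimate of d(h)/dz at the cell centres of a column.

    q, h    : predicted variable and its flux at the N cell centres
    a_speed : characteristic speed used for the LLF-type dissipation
    """
    n = len(q)
    g_half = np.empty(n + 1)
    # interior interfaces: arithmetic mean of fluxes plus scaled jump dissipation
    g_half[1:n] = 0.5*((h[:-1] + h[1:]) - K_v*a_speed*(q[1:] - q[:-1]))
    # one-sided (zero-gradient) closure at surface and model top -- an assumption
    g_half[0], g_half[n] = h[0], h[-1]
    return (g_half[1:] - g_half[:-1])/dz

z = np.linspace(50.0, 9950.0, 100)          # cell centres, dz = 100 m
q = np.exp(-z/8000.0)                       # toy profile
dhdz = vertical_flux_divergence(q, h=5.0*q, a_speed=300.0, dz=100.0)
print(dhdz[:3])
```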
Boundary condition
In the horizontal directions, one layer of ghost cells is supplemented for each patch. With enough ghost cells, the updating procedure is applied on each patch independently. The DOFs within the ghost cells are evaluated by a single-cell-based polynomial over the corresponding cell of the adjacent patch. Furthermore, some DOFs defined along the patch boundaries are updated on two or three patches, and different results may be obtained during the simulation; a correction is applied by averaging the results from the different patches. The construction of the ghost cells in the horizontal directions and the correction of results along the patch boundaries are accomplished for the three-dimensional model by applying, at each model layer, the numerical procedure we developed for the global shallow water model [16].
In the vertical direction, one-sided formulations are applied at the surface and at the model top for the spatial reconstruction in the ζ-direction (Eqs. (47) and (48)). Additionally, a slip-wall condition is applied in the vertical direction, i.e. w = 0 at the surface and at the model top. Rayleigh friction is adopted in the momentum equations near the model top to provide a non-reflective upper boundary; its strength is specified following [26].
Time marching scheme
Due to the very large ratio between the horizontal and vertical grid spacings, the time step of a fully explicit scheme would be determined by the sound speed and the smallest vertical grid spacing; e.g. it would be less than one second in practical applications with vertical grid spacings of a few tens of meters near the surface. In this study, we use a dimensional-splitting scheme based on a horizontally explicit and vertically implicit (HEVI) strategy to implement an efficient time marching scheme. The terms related to the spatial discretization in the vertical direction and the stiff source terms, including the gravity force and the Rayleigh friction, are integrated implicitly. To preserve high-order accuracy, an implicit-explicit (IMEX) Runge-Kutta scheme is adopted to couple the explicit and implicit time marching. The time step of the resulting scheme is decided by the stability condition in the horizontal directions. With the adopted 3-point 4th-order MCV scheme and 3rd-order Runge-Kutta scheme, the maximal CFL number is about 0.45 in two dimensions, evaluated from the time step, the horizontal grid spacings and the maximal contravariant velocity components ũ_max and ṽ_max within the computational domain. The time marching of the proposed model advances from time step n_t (t = n_t ∆t) to n_t + 1 through stage values q^(s), where H and V denote the explicit and implicit parts of the MCV discretization. At the s-th substep, a nonlinear equation set is solved to determine q^(s) by Newton's method, where B collects the known quantities at the s-th substep. The solution is approximated through Newton iteration; the initial guess is chosen as x_0 = q^{n_t} and the linear system Eq. (55) is solved using a Gaussian elimination algorithm designed for the sparse system corresponding to the finite difference vertical discretization. The Jacobian matrix of the linear system is determined by analytically calculating the derivatives of the spatial discretization formulations of the implicit part with respect to the dependent variables.
The application of various IMEX Runge-Kutta schemes to HEVI time marching in global atmospheric modelling was recently investigated in [27,28]. In this study, a 3-stage, 3rd-order, L-stable DIRK scheme (S = 3 in Eq. (51)) introduced in [12] is adopted. The Newton iteration is conducted only once in every Runge-Kutta substep, i.e. the nonlinear system arising from the implicit time marching is linearized. Numerical experiments show that this simplification hardly alters the results while appreciably reducing the computational overhead.
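The skeleton below illustrates the structure of one HEVI IMEX Runge-Kutta step in which every implicit stage is closed with a single linearized Newton update, as described above. The Butcher coefficients, the operators H (explicit, horizontal) and V (implicit, vertical plus stiff sources), the Jacobian routine and the final combination rule are placeholders, not the paper's scheme.

```python
import numpy as np

def hevi_imex_step(q, dt, H, V, V_jacobian, a_ex, a_im):
    """One IMEX-RK step q^n -> q^{n+1}; a_ex, a_im are lower-triangular Butcher
    matrices (a_im may have nonzero diagonal). All weights are placeholders."""
    s = len(a_im)
    stages_H, stages_V = [], []
    for i in range(s):
        # known part: previous-step state plus already-computed stage tendencies
        b = q + dt*sum(a_ex[i][j]*stages_H[j] + a_im[i][j]*stages_V[j]
                       for j in range(i))
        # single linearized Newton update for  q_i = b + dt*a_im[i][i]*V(q_i),
        # linearized about the previous-step state q (one iteration, as in the text)
        residual = q - b - dt*a_im[i][i]*V(q)
        A = np.eye(len(q)) - dt*a_im[i][i]*V_jacobian(q)
        q_i = q - np.linalg.solve(A, residual)
        stages_H.append(H(q_i))
        stages_V.append(V(q_i))
    # final combination reusing the last Butcher rows (an assumption)
    return q + dt*sum(a_ex[-1][j]*stages_H[j] + a_im[-1][j]*stages_V[j]
                      for j in range(s))

# toy usage: 2-component state, mild explicit decay, stiff implicit relaxation
H = lambda q: -0.1*q
V = lambda q: -5.0*q
V_jacobian = lambda q: -5.0*np.eye(2)
a_ex = [[0.0, 0.0], [1.0, 0.0]]
a_im = [[0.5, 0.0], [0.0, 0.5]]
print(hevi_imex_step(np.array([1.0, 2.0]), 0.1, H, V, V_jacobian, a_ex, a_im))
```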
Tests and results
In this section, widely used benchmark test cases were carried out to verify the proposed dynamical core. These tests are described in detail in [29,30,31] and include both hydrostatic and non-hydrostatic cases. All tests, except the Held-Suarez long-term integration test, were conducted with a horizontal resolution of 1° (along the Equator) in terms of DOFs (N_h = 45), while the Held-Suarez test adopted a slightly coarser grid with a horizontal resolution of 1.5° (N_h = 30). In the vertical direction, the computational grids were constructed using the parameters shown in Table 1. In the 3D Rossby-Haurwitz wave, gravity wave and non-hydrostatic mountain wave cases, uniform grids were adopted; in the other cases, non-uniform vertical grids were used. The quality of the numerical results of the dynamical core depends to some extent on the selected vertical grid. For practical applications with physical processes, designing a proper grid transformation in the vertical direction to represent the behavior of the real atmosphere deserves further investigation; the proposed dynamical core is applicable to various grid transformation formulations. In the numerical experiments, the results are often displayed on isobaric surfaces using the longitude-latitude grid in the horizontal directions. A linear polynomial is used to calculate the geopotential height of the isobaric surfaces and to interpolate the other predicted variables from the height-based vertical coordinate to the prescribed isobaric surfaces. Similarly, bilinear interpolation is applied in the horizontal directions to evaluate the predicted variables on the longitude-latitude grid. Though spatial interpolation based on the linear polynomial may degrade the numerical accuracy of the post-processing calculations, it is adopted in this study because it does not generate new extrema, in contrast to higher-order interpolations. The time step is ∆t = 200 s on the grid N_h = 45 and is scaled on the other grids to maintain the same value of N_h ∆t.
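The post-processing step can be sketched as below: for each column, the two model levels bracketing the target pressure are located and the value of interest (geopotential height or any other variable) is obtained by linear interpolation in pressure. Interpolating in log-pressure or any other refinement used in practice is not assumed here.

```python
import numpy as np

def to_isobaric(p_col, f_col, p_target):
    """Linearly interpolate a column f_col(p_col) to pressure p_target.
    p_col must decrease with model level (surface first); NaN outside the range."""
    if p_target > p_col[0] or p_target < p_col[-1]:
        return np.nan
    k = np.searchsorted(-p_col, -p_target) - 1          # lower bracketing level
    w = (p_col[k] - p_target)/(p_col[k] - p_col[k + 1])
    return (1 - w)*f_col[k] + w*f_col[k + 1]

p_col = np.array([1000.0, 850.0, 700.0, 500.0, 300.0])      # hPa, one column
z_col = np.array([100.0, 1450.0, 3000.0, 5600.0, 9200.0])   # m
print("z at 600 hPa ~", to_isobaric(p_col, z_col, 600.0), "m")
```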
3D Rossby-Haurwitz wave
This test case is a three-dimensional extension of the Rossby-Haurwitz wave test proposed for global SWE models in [32]. The horizontal velocity components are identical on each vertical layer and have the same form as those defined in [32]; the details are described in [29]. The vertical velocity component is zero.
The initial condition preserves the hydrostatic relation, and the thermodynamic variables are derived from the temperature profile T = T_0 − Γz̄, where z̄ is the equivalent height, T_0 = 288 K and Γ = 0.0065 K/m. By integrating the hydrostatic relation, we have p = p_ref (1 − Γz̄/T_0)^{g/(R_d Γ)}, where p_ref = 955 hPa is the pressure at z̄ = 0. The equivalent height is related to the altitude through the perturbation of the geopotential, whose expression can be found in [29].
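A sketch of this hydrostatic background is given below, assuming the linear temperature profile and the pressure integral quoted above; the perturbation of the geopotential linking z̄ to the altitude z is not reproduced (see [29]), and the value of c_p used for the potential temperature is an assumption.

```python
import numpy as np

G0, RD, CP = 9.80616, 287.04, 1004.5        # m/s^2, J/(kg K), J/(kg K)
T0, GAMMA, P_REF = 288.0, 0.0065, 95500.0   # K, K/m, Pa (p_ref = 955 hPa)

def background_state(z_bar):
    """Hydrostatic T, p, rho, theta for a constant-lapse-rate atmosphere."""
    T = T0 - GAMMA*z_bar
    p = P_REF*(T/T0)**(G0/(RD*GAMMA))
    rho = p/(RD*T)
    theta = T*(1.0e5/p)**(RD/CP)            # potential temperature, p0 = 1000 hPa
    return T, p, rho, theta

for z in (0.0, 5000.0, 10000.0):
    T, p, rho, theta = background_state(z)
    print(f"z_bar={z:7.0f} m  T={T:6.1f} K  p={p/100:7.1f} hPa  theta={theta:6.1f} K")
```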
The numerical results of the Rossby-Haurwitz wave are given in Fig. 5 for the test on the grid N_h = 45. Shown are the horizontal velocity components at the 850 hPa level, the geopotential height at the 500 hPa level and the surface pressure at day 15. The surface pressure is not a predicted quantity and is extrapolated using a linear polynomial based on the pressure of the first and second model layers. Both the shape and the phase shift of the shown quantities agree well with results in the existing literature. The proposed model is conservative and the relative total mass error is at machine precision, as shown in Fig. 6. This test is also checked on a coarser grid with N_h = 15 and the numerical results are depicted in Fig. 7. As the high-order MCV scheme is adopted in the horizontal directions, the large-scale wave propagation in this test is accurately reproduced even on this very coarse grid, and the differences compared with the grid N_h = 45 are less than 1%.
Gravity wave without Earth's rotation
The static atmosphere is given by specifying a horizontally uniform pressure field, where the Brunt-Väisälä frequency is N = 0.01 s⁻¹, p_0 = 1000 hPa, T_0 = 300 K and S = g²/(c_p N²). The background potential temperature is obtained from the hydrostatic relation as θ̄(z) = T_0 e^{N²z/g}. A perturbation of potential temperature is then added to the steady background field to trigger the hydrostatic gravity wave, where the function s(λ, φ) defines a cosine bell, r is the great-circle distance to the bell center (π, 0), r_0 = R/3 and the vertical wavelength is L_z = 20 km. The numerical results for the perturbation of potential temperature along the Equator at different hours are shown in Fig. 8. No interpolation is applied for post-processing in this test. The wave propagates horizontally in two opposite directions and keeps a symmetric shape. As the initial perturbation has the shape of a cosine bell in the horizontal directions, non-physical numerical oscillations exist in the results of any unlimited high-order model; thus, the 0 contour line is replaced by 0.01 in Fig. 8. The current results are competitive with those given in [14] by a 4th-order finite volume scheme on the same cubed-sphere grid, and they reproduce more details of the wave structures in comparison with those of CAM-EUL and CAM FV using artificial diffusion or divergence damping (given in Fig. 10 of [14]).
Mountain-induced Rossby wave-train
Without the bottom mountain, the balanced initial condition is first specified as a steady geostrophic flow; the horizontal velocity components on the longitude-latitude grid describe a zonal flow with u_0 = 20 m/s. The thermodynamic variables are derived from the hydrostatic relation for an isothermal atmosphere with T_0 = 288 K, where p_s is the surface pressure, specified to preserve geostrophic balance, and p_p = 939 hPa. A bottom topography is then introduced, where h_0 = 2000 m, r is the great-circle distance to the mountain center (π/2, π/6) and d = 1500 km. The Brunt-Väisälä frequency is N = g/sqrt(c_p T_0) ≈ 0.0182 s⁻¹ and the flow is hydrostatic since the nondimensional quantity Nd/u_0 ≫ 1. This test is first checked excluding the bottom mountain, for which the exact solution of the balanced flow is identical to the initial condition. As the initial distribution is smooth, a convergence test is conducted on a series of refined grids. The normalized l_2 errors (following the definition in [32]) of density and the convergence rates are given in Fig. 9. In this balanced test, only 10 layers of computational cells are arranged equidistantly in the vertical direction. With the current test setting, the errors are dominated by the spatial discretization in the horizontal directions, and the 4th-order convergence rate is achieved in spherical geometry without losing the theoretical accuracy of the adopted MCV scheme.
The numerical results including the effect of the bottom mountain are shown in Figs. 10 and 11 for the predicted 700 hPa geopotential height, temperature and horizontal wind fields at day 5 and day 15. The balanced state is destroyed by the topographic effect, which triggers the propagation of a Rossby wave-train. This is a challenging test of the robustness of dynamical cores, since relatively large deviations from the initial conditions are generated, especially in the horizontal wind field. The results of the proposed model are visually identical to those given in [29], except for the broken 3300 m contour line of the geopotential height and some numerical oscillations found in the horizontal wind field at day 15. Similar differences are also found in the nonhydrostatic finite volume dynamical core [14].
Baroclinic wave
To set up the initial condition on the height-based grid, the coordinate η is first determined at any solution point (λ, φ, z) by solving the implicit relation between η and the altitude z. This equation is solved by Newton iteration: with η_k known at the k-th iteration, the next guess is obtained using the derivative given by the hydrostatic relation. The detailed procedure can be found in Appendix D of [29]. As in the above test, the balanced initial condition is checked first. Because the grid lines do not coincide with the wind direction, especially on the two polar patches, wavenumber-4 errors may be observed on coarse grids [33]. Thus, we run the model on a series of refined grids to verify that these grid-imprinting errors can be effectively suppressed by increasing the grid resolution. The l_2 error of the predicted pressure of the first layer (shown in Fig. 12), evaluated following the definition in [33], is used to assess the quality of the numerical results. At the beginning, a large jump is observed on all grids because the initial condition is not balanced in the discrete form; the l_2 errors then increase gradually with time. The relative vorticity fields at day 9 on grids N_h = 12 and N_h = 48 are shown in Fig. 13. As expected, the wavenumber-4 structure (error) is observed on the coarse grid and visibly disappears on the fine one.
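The Newton inversion used to find the vertical coordinate at a prescribed altitude can be sketched generically as below: given a monotone function z(η) and its derivative (supplied by the hydrostatic relation in the actual setup), the root of z(η) − z_target = 0 is found iteratively. The concrete z(η) and dz/dη of the baroclinic-wave initial state are in Appendix D of [29]; a made-up monotone profile stands in for them here.

```python
import math

def invert_vertical_coordinate(z_target, z_of_eta, dz_deta, eta0=0.5,
                               tol=1e-10, max_iter=50):
    """Solve z(eta) = z_target by Newton iteration (z assumed monotone in eta)."""
    eta = eta0
    for _ in range(max_iter):
        r = z_of_eta(eta) - z_target
        if abs(r) < tol:
            break
        eta -= r/dz_deta(eta)
    return eta

# illustrative monotone profile: z decreases from high altitude (eta -> 0) to 0 (eta = 1)
H_SCALE = 7000.0
z_of_eta = lambda eta: -H_SCALE*math.log(eta)
dz_deta = lambda eta: -H_SCALE/eta
eta = invert_vertical_coordinate(5000.0, z_of_eta, dz_deta)
print("eta at z = 5 km ~", eta, " check z:", z_of_eta(eta))
```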
Then a perturbation is added to the zonal wind to trigger the baroclinic wave, where u_p = 1 m/s, r_0 = R/10 and r is the great-circle distance to (π/9, 2π/9). The numerical results at day 7 and day 9 are given in Fig. 14. No analytic solution is available for this test. In comparison with the numerical results of some representative models given in [33,34], our results accurately reproduce the propagation of the baroclinic wave. As suggested in [33], we also calculated the l_2 error of the pressure field of the first layer, using the numerical result on a high-resolution grid (N_h = 180) as the reference solution. At day 9, the l_2 errors are 0.37 hPa, 0.20 hPa, 0.14 hPa and 0.07 hPa on grids N_h = 30, 45, 60 and 90, respectively.
Non-hydrostatic mountain waves over a Schär-type Mountain
In this test, the radius of the Earth is scaled down to simulate non-hydrostatic flow over a bottom mountain (tests 2-1 and 2-2 in [30]). The numerical model is run on a non-rotating reduced-size Earth with radius R_s = R/500. The topography is specified as a Schär-type mountain, where r is the great-circle distance to the mountain center (π/4, 0), d = 5 km is the mountain half-width and L = 4 km is the mountain wavelength.
The strength of the Rayleigh friction is also specified for this test case, with τ_0 = 0.04 s⁻¹, z_D = 20 km and z_t = 30 km. The hydrostatic pressure distribution is prescribed with u_0 = 20 m/s and T_0 = 300 K. The temperature field depends on latitude and is uniform in the vertical direction, and the initially balanced zonal velocity contains a parameter c that denotes a prescribed vertical wind shear of the zonal velocity field at the surface.
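The two profiles of this test can be sketched as follows: a Schär-type mountain combining a Gaussian envelope of half-width d with a cosine-squared modulation of wavelength L, and a Rayleigh-friction strength ramping smoothly from zero at z_D to τ_0 at the model top. The mountain peak height h0 and the sine-squared shape of the ramp are assumptions chosen only for illustration; neither is recoverable from the text above.

```python
import numpy as np

def schar_mountain(r, h0=250.0, d=5000.0, L=4000.0):
    """Schär-type profile: Gaussian envelope times cos^2 modulation (h0 assumed)."""
    return h0*np.exp(-(r/d)**2)*np.cos(np.pi*r/L)**2

def rayleigh_strength(z, tau0=0.04, z_damp=20e3, z_top=30e3):
    """Rayleigh friction coefficient: zero below z_damp, tau0 at the model top."""
    z = np.asarray(z, dtype=float)
    ramp = np.clip((z - z_damp)/(z_top - z_damp), 0.0, 1.0)
    return tau0*np.sin(0.5*np.pi*ramp)**2

r = np.linspace(-20e3, 20e3, 5)
print("h(r)   :", schar_mountain(r))
print("tau(z) :", rayleigh_strength([10e3, 22e3, 30e3]))
```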
Held-Suarez climate test
In this test, idealized physical source terms are added, specified as in [31]: the heating/cooling source in the energy equation relaxes the model toward a radiative equilibrium temperature field, and the Rayleigh friction term in the momentum equations represents the effect of boundary-layer friction near the surface. The two coefficients k_v and k_θ are determined by the latitude and pressure of the solution point, and the radiative equilibrium potential temperature is a function of σ = p/p_s, where p_s is the surface pressure; all other parameters are identical to those adopted in [31]. The source terms specified in this test are treated implicitly. The model runs for 1200 days. During the first 200 days, the model spins up and reaches a state of statistical equilibrium. The numerical results are then averaged over the following 1000 days to check the performance of the proposed model in reproducing the long-term statistical characteristics of atmospheric dynamics. The multi-moment dynamical core is integrated on a 30 × 30 grid with a time step of 288 s. The details of the non-uniform vertical grid are given in Table 1. The reference state is derived from the hydrostatic relation using the specified radiative equilibrium temperature, and the integration starts from a static atmosphere whose density and potential temperature distributions are identical to the reference state.
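For reference, the relaxation coefficients and equilibrium temperature of the standard Held-Suarez (1994) forcing are sketched below. The paper states that its parameters are identical to [31]; the formulas here are taken from that original reference rather than from the elided equations above, and the temperature (rather than potential temperature) relaxation rate is shown.

```python
import numpy as np

# Held & Suarez (1994) constants
KA, KS, KF = 1/40.0/86400.0, 1/4.0/86400.0, 1/86400.0   # s^-1
SIGMA_B, DT_Y, DTHETA_Z = 0.7, 60.0, 10.0                # -, K, K
P0, KAPPA = 1.0e5, 2.0/7.0

def held_suarez(p, p_surf, lat):
    """Return (k_v, k_T, T_eq) at pressure p (Pa), surface pressure p_surf (Pa), latitude lat (rad)."""
    sigma = p/p_surf
    weight = max(0.0, (sigma - SIGMA_B)/(1.0 - SIGMA_B))
    k_v = KF*weight                                        # boundary-layer friction rate
    k_T = KA + (KS - KA)*weight*np.cos(lat)**4             # thermal relaxation rate
    T_eq = max(200.0,
               (315.0 - DT_Y*np.sin(lat)**2
                - DTHETA_Z*np.log(p/P0)*np.cos(lat)**2)*(p/P0)**KAPPA)
    return k_v, k_T, T_eq

print(held_suarez(p=85000.0, p_surf=100000.0, lat=np.deg2rad(45.0)))
```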
The numerical results are output once a day. The predicted quantities are then interpolated to a post-processing coordinate system (λ, φ, p), i.e. a pressure-based vertical coordinate with longitude-latitude horizontal coordinates, to calculate the time-averaged zonal-mean quantities. The 1000-day averages of the zonal-mean temperature, zonal velocity, eddy momentum flux, eddy kinetic energy, eddy heat flux and temperature variance at different isobaric surfaces are shown in Fig. 18. This test has been widely checked with many dynamical cores. The results of the proposed model show good agreement with the spectral transform solution given in [36].
Summary
A fourth-order non-hydrostatic dynamical core for global atmospheric model is proposed in this study by using multi-moment finite volume method. Through introducing two kinds of moments as model variables, the high-order numerical scheme is constructed over a more compact spatial stencil in comparison with the traditional finite volume method. The resulting model is very flexible in dealing with the computational meshes with complex topologies and can effectively suppress the extra grid-imprinting errors due to the discontinuous coordinates along the inner patch boundaries. Considering the practical dynamics-physics coupling, the finite difference scheme is adopted in vertical direction. The benchmark tests proposed in [29,30,31] were carefully checked, including both hydrostatic and nonhydrostatic ones. The numerical results are promising and achieve the expected accuracy in global simulations in comparison with reference solutions of existing advanced models. The proposed model is proven to be capable of accurately reproducing the atmospheric dynamics. Currently, a new high-resolution numerical weather prediction model is under development using the proposed dynamical core. | 2022-03-01T06:47:50.993Z | 2022-02-27T00:00:00.000 | {
"year": 2022,
"sha1": "6fda679f029d011a6452505b6dbb4c524041eacf",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "6fda679f029d011a6452505b6dbb4c524041eacf",
"s2fieldsofstudy": [
"Environmental Science",
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
204849031 | pes2o/s2orc | v3-fos-license | Maternal Characteristics Affect Fetal Growth Response in the Women First Preconception Nutrition Trial
The objective of this secondary analysis was to identify maternal characteristics that modified the effect of maternal supplements on newborn size. Participants included 1465 maternal–newborn dyads in Guatemala, India, and Pakistan. Supplementation commenced before conception (Arm 1) or late 1st trimester (Arm 2); Arm 3 received usual care. Characteristics included body mass index (BMI), stature, anemia, age, education, socio-economic status (SES), parity, and newborn sex. Newborn outcomes were z-scores for length (LAZ), weight (WAZ), and weight to length ratio-for-age (WLRAZ). Mixed-effect regression models included treatment arm, effect modifier, and arm * effect modifier interaction as predictors, controlling for site, characteristics, and sex. Parity (para-0 vs. para ≥1), anemia (anemia/no anemia), and sex were significant effect modifiers. Effect size (95% CI) for Arm 1 vs. 3 was larger for para-0 vs. ≥1 for all outcomes (LAZ 0.56 (0.28, 0.84, p < 0.001); WAZ 0.45 (0.20, 0.07, p < 0.001); WLRAZ 0.52 (0.17, 0.88, p < 0.01) but only length for Arm 2 vs. 3. Corresponding effects for para ≥1 were >0.02. Arm 3 z-scores were all very low for para-0, but not para ≥1. Para-0 and anemia effect sizes for Arm 1 were > Arm 2 for WAZ and WLRAZ, but not LAZ. Arm 1 and 2 had higher WAZ for newborn boys vs. girls. Maternal nulliparity and anemia were associated with impaired fetal growth that was substantially improved by nutrition intervention, especially when commenced prior to conception.
Introduction
Maternal characteristics have a major role in determining placental function and consequently fetal growth [1]. Notable among these characteristics are maternal anthropometric measures of malnutrition such as underweight [2,3]. Others include parity, age, socio-economic status (SES), education, stress, inflammation, anemia, and reduced utero-placental blood flow [1]. All of these and other environmental factors may affect the placental, and hence the fetal, supply of nutrients either by impacting maternal nutrition or/and by reducing utero-placental blood flow. In low-resource populations, dietary diversity and quantity are frequently limited, and maternal anthropometric indicators of both short-and long-term under-nutrition are common [4,5]. In these circumstances, the available maternal nutrient supply to the placenta is also likely to be the primary factor determining the placental and fetal supply of nutrients. Nutritional inadequacy in utero may result in a 'fetal programming effect' with longer-term effects on offspring growth [3,6]. Maternal anthropometric measures in these populations are strongly associated with fetal growth assessed by measures of newborn size [7]. Specifically, maternal underweight is associated with multiple measures of newborn size including low birth weight. Short maternal stature also increases the risk of low birth weight and small-for-gestational-age (SGA) infants [8][9][10][11]. Maternal anemia is associated with higher risks of low birth weight and preterm birth [12]. A strong association between maternal parity and adverse newborn outcomes, including impaired fetal growth, has been well recognized in populations in Asia, Africa, and Latin America [13], with the highest odds of adverse outcomes being for nulliparous women <18 years of age. Socio-economic status (SES) also has a direct association with infant growth at one year but may only have an indirect effect on birth length through maternal anthropometry [14]. Maternal education is strongly associated with socio-economic status and newborn size [15].
However, there is less information on whether baseline maternal characteristics can modify the effects of nutrition supplements. One relatively recent meta-analysis of the effects of maternal gestational multi-micronutrient supplements in undernourished populations documented greater birth weight increases with higher maternal baseline BMI in 11 out of 12 studies [16]. An even more recent pooled analysis for 12 trials of maternal multi micronutrients in low-income sites revealed very similar results for maternal BMI <18.5 kg/m 2 vs. BMI ≥18.5 kg/m 2 , with modest reductions in relative risks for low birth weight for both subgroups [17]. Secondary analyses for the preconception food supplement trial in the slums of Mumbai indicated a progressively greater decrease in the relative risk of low birth weight with increasing maternal BMI [18] but this contrasted, for example, with effects observed with trials of protein-energy supplements [19,20]. As with maternal anthropometry, there are more data for the association of maternal anemia with newborn size than for maternal baseline anemia as a moderator of the effect of maternal nutrition supplements on fetal growth.
Similarly, documentation of the association between parity and the changes in newborn anthropometry in response to maternal nutrition interventions has been less consistent among trials. A meta-analysis including 15 trials of multi micronutrients revealed only a small pooled effect of parity on reduction of newborn low birth weight or SGA with the effect for nulliparous women slightly larger than multiparous women [17]. In a maternal food-based trial commencing preconception, mean birthweight was substantially higher for parous than for nulliparous mothers receiving food supplements [18]. The effects of young maternal age may have collinearity with those of nulliparity but do not explain the effects of the latter [21]. The effects of parity on the newborn outcomes of mothers receiving lipid-based nutrition supplements or multiple micronutrients plus a high lipid food have been quite variable. Trials in Bangladesh and Malawi have revealed no association between parity and newborn anthropometric outcomes [22,23]; however, overall effect sizes were negative or very modest in these studies. A trial in Burkina Faso found a greater effect of the maternal supplements for multiparous women [24]. The combined parity effects of maternal lipid nutrient supplement for a comparable study in Ghana were equally modest; however, the pre-designated nulliparous subgroup had major effect sizes from the maternal nutrition supplement for birth weight and length [25].
A primary focus of the Women First Preconception trial was on changes in newborn anthropometry associated with maternal nutrition supplements, especially with those commenced prior to conception [26]. Also reported have been newborn outcome data limited to those participants in the three Women First sites who had gestational age determined by first trimester ultrasound [26][27][28][29]. Maternal underweight is common in two of these three Women First sites included in this paper, specifically in those located in India and Pakistan [5]. Maternal stunting is outstandingly common in the Guatemala site [30]. Maternal stunting rates also remain high in the sites located in S. Asia where the combination of maternal stunting and underweight aggravates the risk of retarded fetal growth [5,27,29]. Maternal anemia is also a major concern in S. Asia. The purpose of this secondary analysis was to determine the extent to which maternal baseline characteristics modified the newborn length and weight responses to the interventions in the Women First Preconception trial.
Study Design
This study was a secondary analysis of data from the Women First Preconception trial [26,31]. In this trial, participants were randomized to one of three arms. Arm 1 commenced the nutrition intervention at least 3 months prior to conception; Arm 2 received the same intervention commencing at the end of the first trimester; Arm 3 (control) received no trial supplement. The participants were women of childbearing age (16-35 years) who were members of low-resource, small-town, rural communities in Chimaltenango, Guatemala, Thatta, Sindh Province, Pakistan, and Belagavi, Karnataka, India. Those enrolled were all expecting to conceive within the following eighteen months and had screening hemoglobin (Hb) >8 g/dL. Apart from the screening Hb, participating women were unselected with respect to their long-or shorter-term nutritional status as determined by stature and BMI. Included in the current analysis were the maternal-newborn dyads who had gestational age determined from crown-rump length measurements of gestational age in the first trimester and had newborn anthropometric measures completed within 48 h of delivery [26] Maternal characteristics evaluated were maternal stature, BMI, anemia status at baseline (defined as Hb <12 g/dL), parity, age, SES, and education. Newborn sex was also included in the regression analysis. Newborn outcome measures were length-for-age z-score (LAZ), weight-for-age z-score (WAZ), and weight to length ratio-for-age z-score (WLRAZ) based on INTERGROWTH-21st fetal growth standards [32,33].
Differences were documented by trial arms, and effect sizes were compared for Arm 1 vs. Arm 3; Arm 2 vs. Arm 3; and Arm 1 vs. Arm 2. The three Women First sites with first trimester measurements of gestational age were included in these analyses: Guatemala, India, and Pakistan [26]. Recruitment and enrollment began in December 2013 and the final delivery occurred in March 2017.
Ethical Approval
The project was approved by the Colorado Multiple Institutional Review Board, University of Colorado, and by the local and/or national ethics committees for each of the three research sites.
Statistical Analyses
Mixed-effect regression models of neonatal anthropometry (i.e., z-scores of weight, length, and weight to length ratio) were conducted using SAS PROC GLIMMIX (SAS, Cary, NC, USA). Models included treatment arm, effect modifier, and a treatment arm by effect modifier interaction to test for differential intervention impact. Each model also included study site and cluster as random effects. The following variables were included as control variables: nulliparity, no formal education, age <20 years, low maternal body mass index (BMI <18.5 kg/m²), anemia (Hb <12 g/dL), low SES, and newborn sex. Maternal stature had no effect on the final model. Analyses were conducted for each site individually and with the sites combined. Data described in the manuscript will be made publicly and freely available upon request without restriction at the National Institute of Child Health and Human Development Data and Specimen Hub (DASH, accession number pending).
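As an illustration of the model structure described above (treatment arm, effect modifier, their interaction, plus control covariates, with site and cluster as random effects), a roughly equivalent specification using Python's statsmodels is sketched below. The original analysis used SAS PROC GLIMMIX; the column names, the random-effects layout and the synthetic stand-in data are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# synthetic stand-in data; the real analysis used the trial data set
rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "laz": rng.normal(-1.0, 1.0, n),
    "arm": rng.integers(1, 4, n),
    "nulliparous": rng.integers(0, 2, n),
    "anemia": rng.integers(0, 2, n),
    "age_lt20": rng.integers(0, 2, n),
    "low_bmi": rng.integers(0, 2, n),
    "no_education": rng.integers(0, 2, n),
    "low_ses": rng.integers(0, 2, n),
    "newborn_male": rng.integers(0, 2, n),
    "site": rng.integers(1, 4, n),
    "cluster": rng.integers(1, 31, n),
})

formula = ("laz ~ C(arm)*C(nulliparous) + C(anemia) + C(age_lt20) + C(low_bmi)"
           " + C(no_education) + C(low_ses) + C(newborn_male)")

model = smf.mixedlm(formula, data=df, groups="site",
                    vc_formula={"cluster": "0 + C(cluster)"})
result = model.fit()
print(result.summary())
```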
Results
One thousand four hundred and sixty-five maternal-newborn dyads were included in this analysis ( Figure S1 Consort Diagram). Of the potential effect modifiers listed in Table 1, maternal nulliparity and anemia emerged as significant maternal characteristic effect modifiers for the newborn outcomes for combined sites. The distribution of nulliparous women varied by site with a very low percentage for Guatemala (Table 1). Nulliparous women had higher baseline percentages of age <20 years, no formal education, low BMI, anemia, and low SES. The percentage stunted was lower in nulliparous subjects and their mean height was higher, though mean weight was lower ( Table 2).
The effect of treatment on newborn length-for-age and weight-for-age depended on parity. Significant (p < 0.05) treatment arm x parity interactions for length- and weight-for-age z-scores (p = 0.031 and p = 0.038, respectively) were found for combined sites (Table 3, Figures 2 and 3). Among nulliparous women, those in Arm 1 or 2 had newborns with significantly longer length than those receiving usual care (Arm 3). Arm 1 was associated with higher infant birth weights than either Arm 2 or 3; no significant differences were found for parous women. Although the treatment x parity interaction was not statistically significant at p < 0.05 for weight to length ratio-for-age z-score (p = 0.064), the pattern of differences appears to be similar to the findings for weight, with nulliparous women in Arm 1 having newborns with higher weight- and weight to length ratio-for-age z-scores than those in either Arm 2 or 3 (Table 3, Figures 1 and 3). In subgroup analyses of nulliparous subjects alone, low baseline BMI vs. normal/high BMI was associated with more favorable effect sizes (Table S1. Descriptive subgroup analysis by parity and BMI). For nulliparous women in individual sites, the largest adjusted mean differences were for Arm 1 vs. Arm 3 (Table 4). Regression models control for study arm, parity, age <20 years, anemia (hemoglobin <12 g/dL), low body mass index (BMI, <18.5 kg/m²), no formal education, newborn sex, and low household socioeconomic status (SES). The SES tally provides the number of indicators available from the following list: electricity, improved water source, sanitation, man-made flooring, improved cooking fuels, and household assets. Low SES is defined as having ≤2 SES indicators present. CI, confidence interval. Treatment arm x maternal baseline anemia (Hb <12 g/dL) interactions were significant for all three outcomes (Table 5). Compared with usual care (Arm 3), both Arms 1 and 2 were associated with significantly higher infant length-, weight-, and weight to length ratio-for-age z-scores among women with anemia; Arm 1 had a larger effect than Arm 2 for WAZ (p = 0.028) and WLRAZ (p = 0.022). No significant differences were observed across treatment arms among women who were not anemic. For individual sites, effect moderator (maternal anemia at baseline) x treatment arm interactions were present for LAZ in Pakistan and for all three outcomes in Guatemala, with corresponding significant effect sizes for Arm 1 vs. 3 (Table 6).
Treatment effects also varied significantly by newborn sex for newborn weight (p = 0.028), with marginally significant results for weight to length ratio-for-age z-score (p = 0.052) and length-for-age z-score (p = 0.071). Male newborns of women in Arms 1 and 2 had better outcomes for LAZ and WAZ than male newborns of control mothers (Arm 3). No significant treatment effects were observed for female newborns (Table S2. Regression results of neonatal anthropometry by newborn sex: combined sites). Newborn sex data were included in the final model.
No other maternal characteristics included in Table 1 exhibited significant moderator x treatment arm interactions. However, data for these characteristics were included in the final regression model.
Tables 5 and 6 (footnotes): Regression models control for site (combined analysis), study arm, parity, age <20 years, anemia (hemoglobin (Hb) <12 g/dL), low body mass index (BMI, <18.5 kg/m²), no formal education, newborn sex, and low household socioeconomic status (SES). The SES tally provides the number of indicators available from the following list: electricity, improved water source, sanitation, man-made flooring, improved cooking fuels, and household assets. Low SES is defined as having ≤2 SES indicators present. -Anemia includes participants with Hb ≥12 g/dL; +Anemia includes participants with Hb <12 g/dL. CI, confidence interval.
Discussion
Of the maternal characteristics examined, nulliparity and maternal anemia at baseline were the only significant effect modifiers when controlled for all other potential modifiers examined and for each other. Positive effects for newborn length-, weight-, and weight to length ratio-for-age z-score were greatest for nulliparous women who started nutrition supplements at least three months prior to conception (Arm 1 vs. 3). For Arm 1 newborns of nulliparous women, the LAZ was in excess of 1/2 SD higher than for Arm 3. For weight- and weight to length ratio-for-age z-scores, these positive effects were also greater for the preconception arm compared to the arm commencing the same supplements in the first trimester of pregnancy (Arm 1 vs. 2). Details of the interactions between nulliparity and treatment arm did vary by site: for India, this interaction was significant for weight to length ratio-for-age z-scores and, for Pakistan, for LAZ. The percentage of women <20 years of age was much greater among nulliparous than among parous women, and the prevalence of maternal undernutrition (BMI <18.5 kg/m²) was higher in nulliparous than in parous women. However, neither the maternal age x treatment arm nor the BMI x treatment arm interaction was significant, though the addition of maternal BMI (but not maternal stature) had a minor impact on the final regression model.
The presence of maternal anemia at baseline modified the treatment effect for all three outcomes. For weight and weight to length ratio, these improvements were significantly greater for Arm 1 (commencing prior to conception) vs. Arm 2 (commencing early in gestation). Our results were reminiscent of those for a lipid nutrient supplement plus multiple micronutrient trial in rural Burkina Faso [24], except in that trial, the benefits of the supplement were only apparent for parous women. As in that trial, no treatment effect on birth length or weight was found for non-anemic mothers. In both studies, the interventions provided only 20 mg iron/day in contrast to a pregnancy trial of lipid nutrition supplements in Ghana in which a control arm received 60 mg iron daily [25]. In that study, maternal Hb and iron status were lower in the treatment arms [34].
The deficit in newborn length and weight for the offspring of nulliparous control mothers (Arm 3) is much greater than the minor deficits in newborn length and weight for the parous control mothers. Moreover, the greater vulnerability of fetal growth for nulliparous women was highly responsive to the nutrition intervention, especially the preconception arm. One potentially relevant relationship that has emerged from the Women First data is that maternal nutrition supplements are most effective, and perhaps only effective, when fetal growth is quite severely compromised. Two indicators of impaired fetal growth responsive to maternal nutrition supplements are nulliparity and maternal baseline anemia. This relationship is illustrated in Figures 1-3, in which the deficits in newborn z-scores are much greater for the nulliparous women in Arm 3 (controls) than for the parous women in Arm 3. The relatively large effect sizes for LAZ in Arm 1 and 2, especially the former, for the nulliparous women reduced the mean deficits to better than −1 z-score for LAZ, a deficit which portends a relatively high risk of stunting at 2 years of age [35]. Likewise, the effects of the preconception intervention were sufficient to improve the mean birth weight above that for small-for-gestational age, a well-established risk factor for stunting at 2 years of age [36].
Though maternal nulliparity and anemia have both been found to be maternal phenotypes alerting to the high risk of fetal growth retardation and to the substantial benefits of early attention to adequate nutrition, this does not preclude potential fetal benefits from early attention to maternal nutrition for women who do not have these phenotypes. The failure of maternal nutrition supplements to benefit the relatively small deficits in LAZ and WAZ at birth for parous and non-anemic (prior to conception) subjects in this cohort suggest that these relatively minor deficits may not be responsive to improvement in maternal nutrition, and additional environmental factors should be considered. In this regard, it is noted that the maximal improvements in these z-scores for nulliparous subjects with maternal nutrition supplements commenced prior to conception did not result in higher mean z-scores than those of the parous women ( Figure 1). Both were comparable to mean deficits in z-scores for resource poor populations globally [37,38].
Weight to length ratio-for-age z-score is one continuous newborn outcome measure for Arm 1, in which the mean for nulliparous women did not improve to a value better than minus one. Though the magnitude of the improvement in WLRAZ for Arm 1 was comparable to that for LAZ and WAZ, the z-score deficit for Arm 3 was greater. Indeed, mean WLRAZ at −2 for Arms 2 and 3 corresponds to the cut-off for wasting. These low mean WLRAZ, even for Arm 1, suggest that further improvement in the duration, quality, or quantity of maternal supplements may still confer additional benefits on fetal growth. In this trial, further improvement in WLRAZ would be of special value in India where this z-score was outstandingly low. A corollary of this discussion is that a null or limited response to maternal nutrition interventions may be attributable to the population having only minor deficits at birth and thus having limited capacity to respond.
A weakness of this secondary analysis is the omission of the Democratic Republic of the Congo site for which gestational age data are not available. For this analysis, we concluded that the value of gestational age outweighed the inclusion of a fourth site. A further weakness is the paucity of nulliparous data for the site in Guatemala, where the relationship of newborn measures of fetal growth to maternal baseline anemia indicates that at least a subgroup of participants benefited from the maternal nutrition supplements.
Positive aspects of this analysis included the identification of maternal subgroups for which fetal growth benefited from the maternal nutrition supplements and clearly documented the considerable extent to which deficits in fetal growth can be corrected by improved maternal nutrition, especially if fetal growth deficits are relatively severe and improvements are commenced for a substantial period prior to conception. These results reinforce the need for more extensive nutrition research, policies, and interventions directed to women of childbearing age who are not mothers, with special attention to nulliparous women [39,40] and to women with anemia.
Conclusions
Maternal characteristics, as exemplified by nulliparity and by anemia in this randomized controlled trial, can have a major effect on fetal growth as measured with newborn anthropometry. A common characteristic of newborns who benefitted from maternal nutrition interventions is the severity of fetal growth deficits that, at least in this trial, could be predicted by nulliparity and by maternal anemia. The results of this secondary analysis provided further weight to the conclusion that maternal nutrition interventions are more effective in terms of birth weight if commenced sometime prior to conception.
Supplementary Materials: The following are available online at http://www.mdpi.com/2072-6643/11/10/2534/s1: Figure S1: CONSORT diagram for Women First Maternal Preconception Nutrition Trial, including participants from Guatemala, India, and Pakistan. Overall screening, random assignment, gestational age availability and obtainment of primary outcome by treatment arm. 1 Percentage of those randomly assigned. Excludes women who became pregnant <3 months into the study. The women who had eligible pregnancies may have had delivery data obtained or they may have exited the study before delivery. 2 GA at birth is defined as the age at the time of the ultrasound based on the ultrasound plus time until birth if the ultrasound was done between 6 weeks + 0 day and 13 weeks + 6 days and the GA at birth was between 24 weeks + 0 day and 42 weeks + 6 days. If the ultrasound was not conducted during the GA previously mentioned, then the GA at birth is missing. 3 Primary outcome was obtained for live newborns with 3 length measurements taken within 48 h of delivery. Among women, primary outcome obtained from ≥1 infants of the woman. CONSORT, Consolidated Standards of Reporting Trials; GA, gestational age; MTP, medical termination of pregnancy.; Table S1: Descriptive subgroup analysis by parity and body mass index (BMI); Table S2: Regression results of neonatal anthropometry by newborn sex: combined sites. | 2019-10-24T09:14:33.666Z | 2019-10-01T00:00:00.000 | {
"year": 2019,
"sha1": "b7931bffd097d487f6d75044d3a9ce4fdd02941a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/11/10/2534/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "47fa5aafb785be13c87c11e2b4d6fd769c00c2cd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
208170061 | pes2o/s2orc | v3-fos-license | Isolated low levels of high‐density lipoprotein cholesterol and stroke incidence: JMS Cohort Study
Abstract Background The cardiovascular relevance of isolated low levels of high‐density lipoprotein cholesterol (HDL‐C) is yet to be determined. Stroke often leads to long‐term disability, and thus, not only stroke mortality but also stroke incidence is a topic of research. Although isolated low HDL‐C level has been found to be a predictor for stroke mortality previously, whether it can predict stroke incidence is unknown. Methods In the Jichi Medical School cohort study, 11 025 community‐living residents without a history of stroke were examined. Hazard ratios (HRs) for isolated and non‐isolated low HDL‐C levels were calculated relative to those for normal HDL‐C levels in stroke patients using Cox's regression models. Results During the mean follow‐up period of 10.7 years, 412 residents had their first‐ever stroke. The multivariable‐adjusted HRs for the levels of isolated and non‐isolated low HDL‐C were 1.11 (95% confidence interval, 0.85‐1.44) and 1.35 (1.01‐1.81), respectively, when compared to that for normal HDL‐C. Conclusion Low HDL‐C levels with other dyslipidemias may contribute to the incidence of stroke, not isolated low HDL‐C.
| INTRODUCTION
Stroke is the most common cause of disability and mortality and is a concerning health issue worldwide. 1 Although there has been a marked reduction in stroke-related mortality, its burden has been increasing for decades. 2 Research not only on stroke-related mortality but also its incidence is necessary to understand the recent trends in stroke.
Low levels of high-density lipoprotein cholesterol (HDL-C) are reported to be associated with an increased risk of stroke. [3][4][5][6][7][8] However, most studies did not adjust for other lipid abnormalities. The relevance of isolated low levels of HDL-C in stroke remains undetermined. Previous studies have reported that isolated low HDL-C levels do not predict stroke-related mortality, 9,10 but whether they can predict the incidence of stroke remains unknown. Approximately 90% of patients who experience a first-ever stroke survive the first month. 11 Factors associated with the incidence and mortality of stroke can be different. Herein, the predictive relationship of isolated low HDL-C levels with the incidence of first-ever stroke was investigated among general Japanese residents in a cohort study.
| MATERIALS AND METHODS
The study was a serial analysis of the Jichi Medical School (JMS) Cohort Study, a multiregional community-based prospective cohort study with baseline data collected between April 1992 and July 1995 from national population screening tests according to the Health and Medical Service Law for the Aged in Japan that assessed the incidence of stroke until the end of 2005. 12 Among 12 490 participants (4911 men and 7579 women), we excluded the following 1465 participants: 102 without follow-up data, 112 with a history of stroke, 1146 without data on the stroke history, and 105 with no data of HDL-C levels, triglyceride, and total cholesterol. A total of 11 025 residents (4283 men and 6742 women) aged 18-90 years without a history of stroke were analyzed. This study was approved by the Institutional Review Board of Jichi Medical University (No. [6][7][8][9][10][11]2006), and all the participants provided written informed consent.
As previously described, 12 body mass index (BMI) was calculated from the resident's height and weight. The systolic and diastolic blood pressure (BP) levels were measured twice using a sphygmomanometer and an appropriate cuff size, in a sitting position after resting for 5 minutes. The serum levels of total cholesterol (TC), triglyceride (TG), HDL-C, fasting and postprandial plasma glucose (PG), and creatinine (Cr) were measured enzymatically (SRL Inc, Tokyo, Japan). The questions on lifestyle asked by trained interviewers included: smoking status (never, past, or current), alcohol drinking status (never, past, or current), daily alcohol consumption, medication history (current use of antihypertensive, antidiabetic, and antihyperlipidemic medications), and physical activity (the Framingham Study Questionnaire). 13 Low-density lipoprotein cholesterol (LDL-C) was calculated using the Friedewald formula, LDL-C = TC - HDL-C - (TG/2.2). 14 Estimated glomerular filtration rate (eGFR) was calculated using the Chronic Kidney Disease Epidemiology Collaboration equations modified by a Japanese coefficient: male, Cr ≤ 0.9 mg/dL, 141 × (Cr/0.9)^-0.411 × 0.993^age × 0.813; Cr > 0.9 mg/dL, 141 × (Cr/0.9)^-1.209 × 0.993^age × 0.813; female, Cr ≤ 0.7 mg/dL, 144 × (Cr/0.7)^-0.329 × 0.993^age × 0.813; Cr > 0.7 mg/dL, 144 × (Cr/0.7)^-1.209 × 0.993^age × 0.813. 15 We classified HDL-C into 3 categories: (1) isolated low HDL-C (HDL-C < 1.0 mmol/L with TG < 1.7 mmol/L and TC < 6.2 mmol/L); (2) non-isolated low HDL-C (HDL-C < 1.0 mmol/L with TG ≥ 1.7 mmol/L and/or TC ≥ 6.2 mmol/L); and (3) normal HDL-C (HDL-C ≥ 1.0 mmol/L), based on the National Cholesterol Education Program Adult Treatment Panel III (NCEP-ATP III). 10 The participants were interviewed face-to-face through annual follow-up surveys to confirm whether they had experienced a first stroke. Participants without complete follow-up examinations were contacted by mail or telephone. The medical histories of the participants were also confirmed if they had been treated in a hospital. We annually obtained death certificates from the public health centers with the permission of the Agency of General Affairs and the Ministry of Health, Labor and Welfare. The diagnosis committee consisted of a radiologist, a neurologist, and 2 cardiologists who independently diagnosed the stroke subtypes using the criteria of the National Institute of Neurological Disorders and Stroke. 16 Continuous variables were expressed as mean and standard deviation or median and interquartile range, and categorical variables were expressed as numbers and percentages. The continuous variables included age, BMI, SBP, DBP, PG, TC, TG, HDL-C, LDL-C, and eGFR, and the categorical variables included sex, smoking status, alcohol drinking, antihypertensive, antihyperlipidemic or antidiabetic medication, and physical activity. Comparisons between the groups were performed using one-way analysis of variance or chi-squared tests. A complete-case analysis was used. A multivariable analysis was performed using Cox's proportional hazards models to calculate the adjusted hazard ratios (HRs) and their 95% confidence intervals (CIs) for stroke incidence with isolated low HDL-C or non-isolated low HDL-C, using normal HDL-C as the reference. Model 1 was adjusted for age and sex.
| RESULTS
Table 1 shows the baseline characteristics of the participants.
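The derived quantities used in this analysis can be computed as in the sketch below: LDL-C by the Friedewald formula (in mmol/L), eGFR by the CKD-EPI equations with the Japanese coefficient of 0.813, and the three HDL-C categories from the NCEP-ATP III cut-offs quoted above. The code is illustrative only; unit conversions and missing-value handling are not addressed.

```python
def ldl_friedewald(tc, hdl, tg):
    """LDL-C (mmol/L) = TC - HDL-C - TG/2.2 (valid only for moderate TG levels)."""
    return tc - hdl - tg/2.2

def egfr_ckd_epi_jp(cr, age, female):
    """CKD-EPI eGFR (mL/min/1.73 m^2) times the 0.813 Japanese coefficient; cr in mg/dL."""
    if female:
        k, a, c = 0.7, -0.329, 144.0
    else:
        k, a, c = 0.9, -0.411, 141.0
    exponent = a if cr <= k else -1.209
    return c*(cr/k)**exponent*0.993**age*0.813

def hdl_category(hdl, tg, tc):
    """Grouping used in the study, based on NCEP-ATP III cut-offs (all in mmol/L)."""
    if hdl >= 1.0:
        return "normal HDL-C"
    if tg < 1.7 and tc < 6.2:
        return "isolated low HDL-C"
    return "non-isolated low HDL-C"

print(ldl_friedewald(tc=5.2, hdl=0.9, tg=1.9))
print(egfr_ckd_epi_jp(cr=0.8, age=55, female=True))
print(hdl_category(hdl=0.9, tg=1.9, tc=5.0))
```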
DISCUSSION
This JMS cohort study demonstrated that isolated low HDL-C was not a predictor of incident stroke in the general Japanese population, whereas low HDL-C combined with other dyslipidemias (higher TC and/or TG) did predict stroke. The results for stroke incidence appear similar to those of previous studies on stroke mortality (a meta-analysis in the Asia-Pacific Region 9 and EPOCH-JAPAN 10 ), which reported that isolated low HDL-C was not significantly associated with stroke mortality in men and women combined. Because the cardiovascular relevance of isolated low HDL-C has not been established, the absence of a relationship between isolated low HDL-C and both stroke incidence and stroke mortality adds to the pathophysiological knowledge on this lipid profile.
Of note, the previous studies reported that non-isolated low HDL-C did not predict overall stroke mortality but did predict cerebral hemorrhage. 9,10 In our study, non-isolated low HDL-C predicted the incidence of stroke. This difference in outcomes may explain the differing predictive value of non-isolated low HDL-C, since different factors are assumed to be associated with the incidence and the mortality of stroke. 11 Although the true reason is unknown, one hypothesis is that lipids are carefully managed after a stroke, which would weaken any relationship between non-isolated low HDL-C and stroke mortality. 17 The observation that low HDL-C contributes to incident stroke when accompanied by other dyslipidemias, but not in isolation, may be useful for stroke prevention, as dyslipidemias other than low HDL-C can be modified by drugs.
This study has several limitations. First, chronic diseases affecting HDL-C, such as liver cirrhosis, nephrotic syndrome, and steroid-treated disorders, could not be completely excluded, although such patients were less likely to participate in the general health checkups through which the study subjects were recruited. Second, HDL-C was measured only once at baseline, and changes in HDL-C during follow-up were not assessed. However, this approach is consistent with that applied in many previous prospective cohort studies.
In conclusion, our study suggests that isolated low HDL-C might not predict stroke incidence, whereas non-isolated low HDL-C could. These findings add to the knowledge on the role of low HDL-C in stroke prevention.
TABLE 1 Baseline characteristics of participants according to categories of HDL-C a
Categories: Normal HDL-C | Isolated low HDL-C
Abbreviations: BMI, body mass index; DBP, diastolic blood pressure; HDL-C, high-density lipoprotein cholesterol; PAI, physical activity index; SBP, systolic blood pressure.
a Data are shown as mean (standard deviation) or median (interquartile range: plasma glucose and triglyceride) for quantitative data and as percentage of participants for qualitative data.
b P levels were calculated using one-way analysis of variance or chi-square test. | 2019-11-20T14:05:01.065Z | 2019-11-19T00:00:00.000 | {
"year": 2019,
"sha1": "5526e32c8e22d3fcc2f17d7fbc2d3fcb1b1985b2",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jcla.23087",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "99c91f838bf46e90e606798c7b0b41d420876a9e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
36408432 | pes2o/s2orc | v3-fos-license | Completeness of Inertial Modes of an Incompressible Non-Viscous Fluid in a Corotating Ellipsoid
Inertial modes are the eigenmodes of contained rotating fluids restored by the Coriolis force. When the fluid is incompressible, inviscid and contained in a rigid container, these modes satisfy Poincaré's equation that has the peculiarity of being hyperbolic with boundary conditions. Inertial modes are therefore solutions of an ill-posed boundary-value problem. In this paper we investigate the mathematical side of this problem. We first show that the Poincaré problem can be formulated in the Hilbert space of square-integrable functions, with no hypothesis on the continuity or the differentiability of velocity fields. We observe that with this formulation, the Poincaré operator is bounded and self-adjoint and as such, its spectrum is the union of the point spectrum (the set of eigenvalues) and the continuous spectrum only. When the fluid volume is an ellipsoid, we show that the inertial modes form a complete base of polynomial velocity fields for the square-integrable velocity fields defined over the ellipsoid and meeting the boundary conditions. If the ellipsoid is axisymmetric then the base can be identified with the set of Poincaré modes, first obtained by Bryan (1889), and completed with the geostrophic modes.
I. INTRODUCTION
Rotation is a ubiquitous feature in stars, planets and satellites. The dynamics of these objects is profoundly modified when solid body rotation overwhelmingly dominates all other flows. In this case residual disturbances that make the flow depart from an exact solid body rotation are strongly affected by the Coriolis acceleration which ensures angular momentum conservation of the movements. This is especially true for the low frequency oscillations of stars or planets. For these oscillations buoyancy and Coriolis force are the restoring forces at work. They make gravito-inertial waves possible [2,3].
In stars these waves are of strong interest because their detection and identification give us access to both the Brunt-Väisälä frequency distribution and the local rotation of the fluid. They are of particular interest in massive stars, where they open a window on the interface separating the inner convective core from the outer radiative, stably stratified envelope. These waves are also a key feature of the response of tidally interacting bodies and therefore of their secular evolution [4][5][6][7]. On this latter subject several studies have recently addressed the dynamics of fluid flows driven by librations, which are common phenomena in planetary satellites [e.g. 8-10].
However, the mathematical problem set out by these global oscillations is far from being fully understood. The reason lies in the very nature of the boundary value problem that emerges when diffusion and compressibility effects are neglected: it is mathematically ill-posed [11]. The operator is indeed either of hyperbolic or mixed type in the spatial coordinates, but the solutions need to match boundary conditions. As already noted by many authors after the seminal work of Hadamard [12], ill-posed problems are plagued with many sorts of singularities [e.g. 13, for a detailed discussion].
With planetary and stellar applications in mind, the oscillations of an incompressible fluid confined in a rotating sphere or spherical shell have attracted much attention [6,8,[13][14][15][16]. The oscillating flows in a spherical shell display strong singularities when viscosity vanishes [13]. The singularities occur because perturbations obey the spatially hyperbolic Poincaré equation (see Eq 9 below) and must meet boundary conditions. The strongest singularities, called wave attractors after the work of Maas & Lam [17], result from the reflection of the characteristic lines (or surfaces) on the boundaries 1 . In the two-dimensional problem analogous to that of the spherical shell, characteristic lines focus onto periodic orbits (the attractors) [18]. It can further be shown that no eigenmode can exist when an attractor is present [13]. Of course viscosity regularizes the solutions, but numerical solutions of the viscous eigenvalue problem show that actual eigenmodes are strongly shaped by attractors. They appear as thin oscillating shear layers attached to the attractor.
Surprisingly, when the inner core of the spherical shell is removed, that is when the container is a full sphere (or a full ellipsoid), regular polynomial solutions exist for the inviscid eigenvalue problem [19]. For the sphere and the axisymmetric ellipsoid, these solutions have been known since the paper of Bryan [1], which followed the seminal work of Poincaré [20] on the equilibrium of rotating fluid masses [but see also 11,21].
When Greenspan [11] reviewed the subject in his monograph on rotating fluids, he raised the question of the completeness of the inertial modes in the sphere and the ellipsoid. Indeed, if the normal modes are complete, then any perturbation can be expanded into a linear combination of eigenfunctions. In particular any initial condition can be expanded and the response flow can be calculated, while perturbations by viscous or nonlinear effects can be easily dealt with. Except for the work of Lebovitz [22] (see below), Greenspan's question remained untouched for almost fifty years until the recent work of Cui et al. [23], who proved completeness for the rotating annular channel, followed by that of Ivers et al. [24], who gave the demonstration for the sphere.
The present work, which has an unusual history (see the end of the paper), extends the results of Ivers et al. to any ellipsoid. Importantly, our demonstration takes a different route from the one followed by Ivers et al. [24]. We use a more general formulation of the problem that allows us to use the tools of functional analysis in the Hilbert space of square-integrable functions. Since these tools are likely unfamiliar to many fluid dynamicists, we try to make our demonstration as pedagogical as possible.
The paper is organised as follows. In the next section we first formulate the Poincaré problem, either for forced flows or for free oscillations. Then, in section 3, we propose another formulation of the free oscillation problem that does not assume continuity or differentiability of velocity fields. Velocity fields are only supposed to be square-integrable. Such an extension of the space of velocity fields is motivated by three arguments: first, inviscid fluids may support discontinuous velocity fields, like the classical vortex sheet [25]. Second, singular velocity fields can be expected because of the ill-posed nature of the Poincaré problem. Third, and not least, by assuming only square-integrability of the solution of the problem, we can work in the Hilbert space of square-integrable functions and benefit from many results of spectral theory on bounded, self-adjoint, linear operators. In section 4, we summarize what we can readily say about this problem using some of the results of functional analysis, recalling in passing the needed concepts of spectral analysis. We then establish a sufficient condition for an operator to possess a complete basis of eigenfunctions. We show that polynomial eigenfunctions can constitute such a base if the fluid volume is an ellipsoid. This result was also obtained by Lebovitz [22], but our proof is more direct and clearly exhibits the special nature of the ellipsoidal boundary. In section 5, we consider the well-known (since Bryan 1889) eigenmodes of the rotating spheroid (i.e. the axisymmetric ellipsoid). These solutions are of polynomial nature and we show (section 6) that they constitute the expected complete base that has been inferred in the previous section. Notably, we exhibit the set of geostrophic modes that are associated with the zero eigenfrequency, and without which inertial modes would not make a complete base.
The present work is therefore a follow-up of the work of Ivers et al. [24], who obtained a first set of mathematical results when the problem is restricted to the sphere and the velocity fields are assumed to be once-continuously differentiable. The two works share many results, but hopefully they complete one another and offer the broadest view of the Poincaré problem. The method proposed here seems promising enough that one might hope to use it when the fluid volume is not an ellipsoid. We have investigated two other shapes, a cube and a spherical shell, with only negative results. Hence, except for the annular channel [23], we simply do not know whether any non-ellipsoidal volume has a complete set of eigenvelocities of some more general form.
II. CLASSICAL FORMULATION OF THE POINCARÉ PROBLEM
In the steady, undisturbed reference state, an incompressible non-viscous fluid with constant density ρ occupies an open bounded set E with boundary ∂E that has an outward unit normal n̂. Let E be the closure 2 of E, i.e. E together with ∂E. Both ∂E and the fluid rotate rigidly about some given axis with constant angular velocity Ω. Position vectors r are measured relative to an origin chosen on the axis of rotation. The body force on the fluid in the rotating reference frame is independent of time and consists of self-gravity, externally applied gravity, and centrifugal force. The pressure in the fluid is the hydrostatic pressure required to balance these body forces.
In the disturbed state ∂E is infinitesimally deformed to ∂E t at time t, and the infinitesimal normal velocity of ∂E t is β. An extra infinitesimal time-dependent body force f per unit mass acts on the fluid. In consequence of these forces and its own history, the fluid has an infinitesimal velocity v when viewed from the rotating frame. The hydrostatic pressure suffers an infinitesimal perturbation which it will be convenient to write as 2ρΩq, where Ω is the magnitude of Ω and q is a function of r and the time t. In the rotating reference frame, v and q are governed by equations (1) and (2). Because ρ is constant, (1b) is exact, but (1a) and (2) are correct only to first order in the disturbances β, v, q and f. For simplicity it will be assumed that β and f are known for all t > 0, and that v is known everywhere at t = 0. Using this information to find v and q for all r in E and all t > 0 constitutes the Poincaré forced initial value problem. It will be convenient to eliminate β at the outset. If β ≠ 0, let θ be a solution of the following Neumann problem (Kellogg, 1953, p246) at each time t. The solubility conditions for this Neumann problem are that ∂E be sufficiently smooth (for example, n̂ may vary continuously on ∂E) and that (3c) be satisfied, a condition whose fulfillment is assured by (1). Given (3c), the solution θ of (3a,b) is determined at each t up to an unknown additive function of t, and ∇θ is uniquely determined for all t.
then v satisfies equations (1) and (2) with β replaced by 0 and with q and f replaced by suitably redefined (primed) quantities. Henceforth we drop the primes and take β = 0 in (1a).
Substituting (8) into the equations of the problem leads to (9a) and (9b). Equation (9b) is the classical Poincaré equation for the pressure disturbance q, and (9a) is the boundary condition appropriate to the Poincaré problem, in which ∂E rotates rigidly. Given an eigenfunction q and its eigenvalue λ in (9), the corresponding v is recovered from (8). Greenspan (1965) [28] shows that when v is sufficiently differentiable then λ must be real and between −1 and 1. The resulting hyperbolic character of (9b) for the normal modes has led to the suspicion that there might be pathological elements in the boundary value problem (9) [29].
A. Introduction
Inviscid incompressible fluids admit discontinuous velocity fields provided the discontinuities are parallel to the field, so that mass conservation is still fulfilled. Hence, eigenvalues may be associated with non-differentiable velocity fields. In view of the ill-posed nature of the Poincaré problem, the possibility of such eigenvelocities cannot be excluded. In this section we therefore reformulate the eigenvalue problem (6)-(7) in order to include non-differentiable velocity fields.
Under suitable smoothness assumptions Greenspan (1964, 1965) [28,30] shows that, whatever the shape of the fluid volume E, all eigenvalues λ of (6) and (7) are real and lie in the interval −1 < λ < 1. That author also shows that eigenvelocities v1 and v2 belonging to different eigenvalues λ1 and λ2 are orthogonal in the sense that ⟨v1 | v2⟩ = 0, where the inner product is defined as ⟨v1 | v2⟩ = (1/|E|) ∫_E v1(r)* · v2(r) dV. Here |E| is the volume of the region E, and v1(r)* is the complex conjugate of v1(r).
All this suggests that the eigenvalues λ are the eigenvalues of some bounded, self-adjoint linear operator L on the complex Hilbert space Π consisting of all Lebesgue square-integrable complex vector fields v on E. We recall that square-integrability just means that the total kinetic energy of the flow is finite. For such velocity fields, we can define their norm by ‖v‖ = ⟨v | v⟩^{1/2}. Now, to find the appropriate operator L : Π → Π, we must interpret (6) and (7) when v is merely square-integrable and not differentiable or even continuous.
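As an illustration of the kinetic-energy inner product and norm just introduced, the following sketch approximates them for vector fields sampled on a Cartesian grid. It is not part of the original analysis; the volume-averaged normalization 1/|E| is assumed from the definition above, and the rigid-rotation-like test field is arbitrary.

```python
# Minimal numerical sketch: discrete approximation of <v1|v2> and ||v|| over E.
import numpy as np

def inner_product(v1, v2, inside, cell_volume):
    """v1, v2: arrays of shape (3, nx, ny, nz); inside: boolean mask of the volume E."""
    integrand = np.sum(np.conj(v1) * v2, axis=0)      # pointwise dot product v1* . v2
    volume = cell_volume * np.count_nonzero(inside)   # |E|
    return cell_volume * integrand[inside].sum() / volume

def norm(v, inside, cell_volume):
    return np.sqrt(inner_product(v, v, inside, cell_volume).real)

# Example on the unit ball
n = 64
x = np.linspace(-1, 1, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
inside = X**2 + Y**2 + Z**2 <= 1.0
dV = (x[1] - x[0]) ** 3
v = np.stack([-Y, X, np.zeros_like(X)])               # a square-integrable test field
print(norm(v, inside, dV))                            # finite kinetic-energy norm
```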
B. Mass conservation for L²-velocity fields
For velocity fields v that are merely square-integrable, and not differentiable or even continuous, ∇ · v is not well-defined in E, and n̂ · v is not well-defined 3 on ∂E. We begin by trying to avoid this difficulty.
Our strategy will be to define subspaces of the general Hilbert space Π, which includes all the square-integrable complex vector fields v defined on E. To ease reading, we shall use underlined symbols to denote a space (of functions usually); the symbol is boldface if the space is a space of vectorial functions. Thus, we first introduce Π∞ and Π∞, respectively the spaces of all infinitely differentiable complex scalar and vector fields on E, the closure of E. We then define Γ∞ = ∇Π∞. 3 A square-integrable vector field may indeed not be defined on ∂E, namely on a set of volume measure 0 in E.
That is, Γ∞ consists of all vector fields u which can be written u = ∇φ (12b) for some φ in Π∞. Then clearly Γ∞ ⊆ Π, but Γ∞ is not closed in Π under the norm (11). Indeed, we can easily construct a sequence of infinitely differentiable functions that converges to a discontinuous function. Therefore, we consider its closure Γ. According to this definition, a vector field u on E belongs to Γ if and only if it is square-integrable on E and there is a sequence u1, u2, ... in Γ∞ such that ‖u − un‖ → 0 as n → ∞. In particular, Γ includes all fields u of form (12b) with φ continuously differentiable on E.
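A concrete illustration of why Γ∞ is not closed: the smooth functions tanh(nx) converge in the L² norm to the discontinuous function sign(x). The short sketch below is an illustration only, not taken from the paper.

```python
# Smooth approximants converging in L2 to a discontinuous limit on (-1, 1).
import numpy as np

x = np.linspace(-1, 1, 200001)
dx = x[1] - x[0]
target = np.sign(x)                        # discontinuous limit
for n in (1, 10, 100, 1000):
    smooth = np.tanh(n * x)                # infinitely differentiable approximant
    l2_error = np.sqrt(np.sum((smooth - target) ** 2) * dx)
    print(n, l2_error)                     # the L2 error tends to 0 as n grows
```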
Let us now introduce Λ, the orthogonal complement of Γ in Π. Thus Λ consists of all vector fields w square-integrable on E and such that ⟨u | w⟩ = 0 for every u in Γ. In particular, w ∈ Λ implies that ⟨∇φ | w⟩ = 0 (15) for every φ in Π∞. Conversely, since the orthogonal complement of a set is also the orthogonal complement of its closure, if w is square-integrable on E and (15) is true for every φ in Π∞, then w ∈ Λ. Now suppose w ∈ Λ ∩ Π∞. Then, Gauss's theorem permits (15) to be rewritten as ∮_∂E φ* (n̂ · w) dS − ∫_E φ* (∇ · w) dV = 0 (16). By the Weierstrass approximation theorem (Courant and Hilbert, 1953, p65) every φ continuous on E can be approximated uniformly and with arbitrary accuracy by polynomials. Therefore (16) holds for all φ continuous on E. Then a well-known argument leads to the conclusion that ∇ · w = 0 in E and n̂ · w = 0 on ∂E.
Thus the requirement v ∈ Λ (17) is the appropriate generalization of (6) to square-integrable vector fields which are not differentiable.
C. Λ and piecewise continuously differentiable fields
Before going any further, it is worth viewing (17) from a physical point of view. Λ is indeed a very large space that includes, among other fields, unbounded vector fields that are not physically acceptable.
We know that when v is differentiable the local equation ∇·v = 0 is equivalent to the integral condition ∮_S v · n̂ dS = 0 (18), which says that for any closed surface S contained in E the mass flux across this surface is zero (for a fluid of constant density). We shall now see that, for a piecewise continuous vector field, membership of Λ is equivalent to (18) being satisfied.
Let us first observe that if v is a once-continuously differentiable field that verifies (6), then for any once-continuously differentiable function φ of Π∞ we obtain Eq. (20), which shows that such v-fields are members of Λ. Conversely, Eq. (20) implies (6) by the reasoning following (16). However, we can also be slightly less restrictive on v and just assume a piecewise continuous field. Then we can show that for such fields Λ-membership is equivalent to (18).
If Λ-membership (17) is true, then for any real once-continuously differentiable function φ of Π∞ we have (21). However, ∇φ is a vector that is always orthogonal to any iso-φ surface. Since (21) is true for any φ, for a given surface S we can design a φ that is constant inside S and outside S + δS, where S + δS is the same surface as S but dilated by a small increment δℓ. In between the two surfaces φ is chosen to increase linearly by the same amount, so that ‖∇φ‖ is the same everywhere on the surface. Hence, for this given φ, (21) implies that ∮_S v · n̂ ‖∇φ‖ δℓ dS = 0 (22), where n̂ is the unit vector ∇φ/‖∇φ‖ normal to the surface. Since φ is chosen such that δℓ and ‖∇φ‖ are constant, we can simplify (22) and get (18). We note that since φ is any function of Π∞ we can construct sequences of functions whose limit can fit any closed surface, even one with sharp angles. Hence, all piecewise continuous members of Λ satisfy mass conservation as expressed in (18). Now we would like to know whether Λ contains all the mass-conserving velocity fields. Let us therefore show that a piecewise continuous field verifying (18) is necessarily in Λ. For that we prove that if this is not the case then we get a contradiction. We thus consider a real velocity field that verifies (18) but does not belong to Λ. Hence, there exists a scalar field φ ∈ Π∞ defined over the full volume E such that (23) holds. To make the reasoning easier to follow, we shall assume in addition that φ is a monotonic function over E. If this is not the case then E can be split into sub-volumes where it is monotonic, and the following reasoning applies to each sub-volume. Since φ is defined over E, the equation φ = φ0 defines a surface in E. Since v is a mass-conserving velocity field, (18) is true for any closed surface, in particular for the surface φ = φ0. If this surface is not closed, then it is completed by the needed part of ∂E. Thus we can write the corresponding surface-integral identity. Letting φm and φM be the minimum and maximum values reached by φ in E, we integrate this identity over φ0 from φm to φM. However, dφ0/‖∇φ‖ is the differential length element orthogonal to the surface, hence dS dφ0/‖∇φ‖ is just the volume element. When φ0 scans the interval [φm, φM] the surface φ = φ0 scans the volume E. We thus find a result in contradiction with (23).
To conclude, we see that all piecewise continuous velocity fields of Λ satisfy mass conservation in its integral formulation (18) and reciprocally. However, let us stress again that Λ is a much wider space that includes vector fields for which (18) or ∇ · v may not make sense. Its vector fields are just square-integrable and verify (15), which will be sufficient for our purpose.
D. The momentum equation
We need a similar generalization of the equation of momentum. (7) has no derivative in the velocity field, so the question is just a matter of how to reduce the functional space Π to Λ.
Since Γ is closed, and Λ is its orthogonal complement in Π, we therefore have Π = Γ ⊕ Λ. That is, every v in Π can be written in the form v = u + w with u ∈ Γ and w ∈ Λ, and ⟨u | w⟩ = 0. The foregoing definitions are very similar to the decomposition of the classical vector space into two orthogonal subspaces (like a plane and a line in R³). In the following we just identify the projection operators on the subspaces. The orthogonality of the subspaces Λ and Γ means that u and w are uniquely determined by v, so that it is possible to define two functions, Γ : Π → Γ and Λ : Π → Λ, as follows: Γ(v) = u and Λ(v) = w. From the uniqueness of Γ(v) and Λ(v) it follows that Γ and Λ are linear, and since ⟨Γv | Λv⟩ = 0 it follows that ‖v‖² = ‖Γv‖² + ‖Λv‖². Thus ‖Γv‖ ≤ ‖v‖ and ‖Λv‖ ≤ ‖v‖. The functions Γ and Λ are the orthogonal projectors of Π onto Γ and Λ. They are bounded linear operators on Π with the following properties (see Lorch, 1962, p72): Γ + Λ = I_Π (27a), Γ² = Γ (27b), Λ² = Λ (27c), Γ* = Γ (27d), and Λ* = Λ (27e). Here I_Π is the identity operator on Π, and for any linear operator F on Π, ‖F‖ is its norm, namely ‖F‖ = sup{‖Fv‖ : v ∈ Π, ‖v‖ ≤ 1}, and F* is its adjoint. The three statements u ∈ Γ, Γu = u and Λu = 0 are equivalent, as are the three statements w ∈ Λ, Λw = w and Γw = 0.
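The projector identities listed above can be illustrated in a finite-dimensional setting. The following sketch is only an analogy (it does not build the operators Γ and Λ of the paper, which act on function spaces); it verifies idempotence, self-adjointness, the decomposition v = u + w, and the norm bounds for a projector built from an orthonormal basis of a subspace.

```python
# Finite-dimensional analogue of the orthogonal projectors Gamma and Lambda.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))                 # columns span a 3D subspace of R^6
Q, _ = np.linalg.qr(A)                          # orthonormal basis of that subspace
Gam = Q @ Q.T                                   # projector onto the subspace ("Gamma")
Lam = np.eye(6) - Gam                           # projector onto its complement ("Lambda")

v = rng.standard_normal(6)
assert np.allclose(Gam @ Gam, Gam) and np.allclose(Gam.T, Gam)   # idempotent, self-adjoint
assert np.allclose(Gam @ v + Lam @ v, v)                         # decomposition v = u + w
assert abs(np.dot(Gam @ v, Lam @ v)) < 1e-12                     # u and w are orthogonal
assert np.linalg.norm(Gam @ v) <= np.linalg.norm(v) + 1e-12      # ||Gamma v|| <= ||v||
print("projector identities verified")
```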
When v and Γv belong to Π∞, it is easy to compute Γv and Λv = v − Γv as follows. Let u = Γv and w = Λv. Then w ∈ Λ ∩ Π∞, so w satisfies (6). Also, v can be written v = ∇φ + w (28) for some scalar field φ. Then, because w satisfies (6), φ must obey ∇²φ = ∇ · v in E with n̂ · ∇φ = n̂ · v on ∂E (29). Since v is given, equations (29) constitute an interior Neumann problem for φ (Kellogg, 1953, p246). The solubility condition for this problem is ∫_E ∇ · v dV = ∮_∂E n̂ · v dS, a condition whose validity is guaranteed by Gauss's theorem. Therefore, (29) has a solution φ, unique up to an additive constant. Then Γv = u = ∇φ is uniquely determined by (29), and Λv is the w of (28). We note that (28) is the weak formulation of the classical Helmholtz decomposition of three-dimensional vector fields [see 32, for a mathematical discussion of divergence-free vector fields in three-dimensional domains].
With the foregoing preliminaries we now return to the momentum equation (7). Define the linear operator R : Π → Π by requiring that for any v in Π Then (7) can be written Suppose for the moment that q ∈ Π ∞ . Then ∇q ∈ Γ ∞ , so Λ ∇q = 0. Thus if we apply Λ to (31a) we obtain But this is an equation which makes sense even if v is merely square-integrable, while if v ∈ Π ∞ then (31b) implies (31a) for some q. Thus (31b) generalizes (7) to all square-integrable v. Equation (31b) can be further simplified, since the eigensolution v must also satisfy (17), the generalization of (6). As already noted, (17) is equivalent to v = Λv, and this permits rewriting (31b) as The operator L is defined on the whole space Π, but LΛ ⊆ Λ and LΓ = {0}. Hence the only interesting part of L is actually L| Λ, the restriction of L to Λ. The Poincaré problem (6), (7) is now generalized to square-integrable but possibly nondifferentiable velocity fields v. The pair v, λ solves this generalized Poincaré problem if v is an eigenvector and λ the corresponding eigenvalue of the linear operator L| Λ on the Hilbert space Λ.
Further study of L depends on the observations that and and (33a) follows. To prove (33b), note that for bounded linear operators F, G on Π one has (F G) * = G * F * . Thus, from (32b), L * = Λ * R * Λ * . Then from (27e), L * = Λ R * Λ, and (33b) will follow if we can prove that This last is simply the assertion that for any a fact evident from (10).
In what follows, L|Λ will usually be abbreviated as L when no confusion can result. Properties (33) of L ensure that all its eigenvalues λ are real and lie in the interval −1 ≤ λ ≤ 1. Because L is self-adjoint, a well-known argument (e.g., Lorch, 1962, p112) shows that eigenvectors belonging to different eigenvalues are orthogonal. Thus we have generalized to square-integrable v the results obtained by Greenspan (1964, 1965) and Kudlick (1966) for continuously differentiable v, with one exception: Kudlick (Greenspan, 1968, p61) shows that for continuously differentiable v, λ = ±1 are not eigenvalues. In fact the numbers λ = ±1 can be excluded from the eigenvalue spectrum for any v which is merely square-integrable, and for any volume. We give the complete proof in the appendix. For triaxial ellipsoids, the exclusion of λ = ±1 also follows from Lebovitz's (1989) result that all eigenfunctions in an ellipsoid are polynomials, and thus smooth enough to admit Kudlick's proof.
A. Introduction
Generalizing the Poincaré problem to square-integrable velocity fields is useful not only because such fields are needed to describe flows of inviscid fluids, but also because they make available the spectral theory for bounded, self-adjoint linear operators in Hilbert space.
Let us briefly summarize what spectral theory tells us about L (i.e. L|Λ), which we know to be a linear self-adjoint bounded operator defined over a Hilbert space. First, this operator is normal as it (obviously) commutes with its adjoint: LL* = L*L. Then, for any nonzero bounded linear operator F on a Hilbert space H, the spectrum σ(F) of F is the set of all complex numbers λ such that F − λI fails to have a bounded linear inverse. The spectrum is always a non-empty, closed subset of the complex plane (Lorch, 1962). The spectrum can be divided into three parts known as the point spectrum (the eigenvalues), the continuous spectrum and the residual spectrum. These three sets are disjoint and in our case they are subsets of the real axis interval [−1, 1] since ‖L‖ ≤ 1. For a self-adjoint operator, it may be proved that the residual spectrum is empty [e.g. theorem 9.2-4 in 33]. Hence, for our problem we are just left with the continuous and eigenvalue spectra. In this case, a complex number λ can qualify for membership in σ(F) in two ways: first, there may be a nonzero h in H such that (F − λI)h = 0; that is, λ may be an eigenvalue of F (its eigenvector being h). In other words, when λ is in the point spectrum of F, (F − λI) is not injective. Second, λ may be such that (F − λI)^{-1} exists but (F − λI) is not surjective. In other words, (F − λI)(H) ≠ H but the image of (F − λI) is dense in H. In this case λ belongs to the continuous spectrum.
Interestingly, another subdivision of the spectrum has been introduced by mathematicians (e.g. Halmos 34, p51 or Furuta 35, p81). This other division is between the approximate point spectrum and the compression spectrum. Unlike the preceding subsets of the spectrum, these two subsets are not disjoint. When λ is in the approximate point spectrum, (F − λI)h may be nonzero whenever h ≠ 0, but there may be a sequence of unit vectors hn such that ‖(F − λI)hn‖ → 0. In that case (F − λI)^{-1} is a linear mapping well-defined on the range of F − λI, but it is not a bounded operator and hence has no linear extension to all of H (Lorch, p44). To be complete, the compression spectrum is the set of λ for which the range of F − λI is not dense in H, hence a subset of the continuous spectrum. However, we learn from [35] (§2.4, theorem 12) that for a normal operator the spectrum is identical to the approximate point spectrum. Applied to the Poincaré problem in the spheroid, which admits a set of eigenvalues dense in [−1, 1], we may identify this interval with the approximate point spectrum, and real numbers that are not eigenvalues belong to the continuous spectrum. Of course, no eigenvectors are associated with members of the continuous spectrum.
B. A preliminary step
How do we prove that a bounded, self-adjoint linear operator F : H → H has a complete set of orthonormal eigenvectors, i.e. a collection of orthonormal eigenvectors which constitutes an orthonormal basis for the Hilbert space H? One method is to find an infinite sequence of subspaces of H, say H1, H2, H3, ..., with these properties: each Hn is finite-dimensional (35a); Hn ⊆ Hn+1 (35b); the union of the Hn is dense in H (35c); and F Hn ⊆ Hn (35d). We claim that whenever such a sequence of subspaces exists, F has a complete set of orthonormal eigenvectors in H.
To prove this claim, let K1 = H1 and for n ≥ 2 let Kn be the orthogonal complement of Hn−1 in Hn. Then Hn = Hn−1 ⊕ Kn and Km ⊥ Kn if m ≠ n. Then (35c) implies that for any h ∈ H there is a unique sequence of vectors k1, k2, ... with kn ∈ Kn and such that h = Σn kn (36). The self-adjointness of F implies that F Kn ⊆ Kn for all n, and thus F|Kn is a self-adjoint operator on the finite-dimensional space Kn. Therefore Kn has an orthonormal basis consisting of eigenvectors of F|Kn (Halmos, 1958, p156 [36]). Collecting all these eigenvectors for all the Kn gives an orthonormal set of eigenvectors of F in H, and by (36) they constitute an orthonormal basis for H. The direct application of the construction (35) to the Poincaré problem formulated in section III would be to take H = Λ and F = L|Λ. It turns out to be easier to take H = Π and F = L. Suppose that Π contains a sequence of subspaces Π1, Π2, ... such that (35) is true with H = Π, Hn = Πn, and F = L. We claim that then Λ has a complete orthonormal basis consisting of eigenfunctions of L|Λ.
To see this, note that (35) also holds with H = LΠ, H n = LΠ n , and F = L| LΠ. Therefore LΠ has an orthonormal basis consisting of eigenfunctions of L.
Let Λ0 be the set of all w in Λ such that Lw = 0. Greenspan (1968, p40) calls these the geostrophic motions. Any orthonormal basis for Λ0 consists of eigenvectors of L. Therefore we have an orthonormal basis for Λ consisting of eigenvectors of L if we can prove that Λ is the orthogonal direct sum of Λ0 and the closure of LΠ (37). To prove (37), note first that if w ∈ Λ0 then ⟨Lw | v⟩ = 0 for every v ∈ Π. Hence ⟨w | Lv⟩ = 0 for every v ∈ Π. Hence w ⊥ LΠ, and w is also orthogonal to the closure of LΠ. Thus Λ0 ⊥ LΠ. Next, suppose w ∈ Λ and w ⊥ LΠ. Since L²w ∈ LΠ, therefore ⟨w | L²w⟩ = 0. But ⟨w | L²w⟩ = ⟨Lw | Lw⟩, so Lw = 0 and w ∈ Λ0.
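The decomposition (37) has a transparent finite-dimensional analogue: for a self-adjoint matrix, the kernel and the closure of the range are orthogonal complements, and an eigenbasis can be assembled from both pieces. The sketch below is only such an analogy, not the infinite-dimensional argument itself.

```python
# Kernel (+) closure-of-range decomposition for a self-adjoint operator, in finite dimension.
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 3))
L = B @ B.T                               # self-adjoint, with a 2-dimensional kernel

eigvals, eigvecs = np.linalg.eigh(L)      # complete orthonormal eigenbasis
is_zero = np.isclose(eigvals, 0.0, atol=1e-10)
kernel = eigvecs[:, is_zero]              # analogue of the geostrophic space Lambda_0
range_part = eigvecs[:, ~is_zero]         # analogue of the closure of L(Pi)

v = rng.standard_normal(5)
v0 = kernel @ (kernel.T @ v)              # component in the kernel
vr = range_part @ (range_part.T @ v)      # component in the range closure
assert np.allclose(v0 + vr, v)            # the orthogonal direct sum reproduces v
assert np.allclose(L @ v0, 0.0, atol=1e-8)
print("kernel (+) range decomposition verified")
```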
C. Polynomial subspaces
To apply the foregoing general remarks to the Poincaré problem, we set H = Π and F = L in (35), and we seek appropriate spaces Π n to use as the H n in (35). In the axisymmetric ellipsoid, the Poincaré modes are all polynomial velocity fields (Greenspan, 1968, p64). This suggests that spaces of such fields might serve as the Π n . To describe these spaces requires some notation. The origin of coordinates is fixed somewhere on the axis about which the fluid rotates, and r is the position vector relative to this origin. Let Π [l, l] be the set consisting of 0 and all complex homogeneous polynomials of degree l in r. If l < n, let Π [l, n] be the set consisting of 0 and all polynomials whose monomial terms have degrees from l to n inclusive. Let Π [l, ∞] be the set consisting of 0 and all polynomials whose constituent monomials have degree l or greater. For any pair of integers (l, n) with l ≤ n, including n = ∞, let Π [l, n] denote the set of vector fields whose Cartesian components are members of Π [l, n] .
The arguments to follow will compare the dimensions of various linear spaces, and these dimension counts begin with the spaces just described. By the definition of Π[l, l], it is spanned by the monomials x^a y^b z^c with a + b + c = l. They are linearly independent, and their number is easily seen to be (l + 1)(l + 2)/2, so dim Π[l, l] = (l + 1)(l + 2)/2, and summing over degrees gives dim Π[0, n] = (n + 1)(n + 2)(n + 3)/6. The foregoing formulas hold with the scalar spaces Π replaced by the vector-valued spaces Π if the right sides of (38a,b) are multiplied by 3. In particular dim Π[0, n] = (n + 1)(n + 2)(n + 3)/2 for the vector-valued space.
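These dimension counts are easy to confirm by direct enumeration of monomials; the following short sketch is an independent check, not part of the original text, with the factor 3 accounting for the three Cartesian components of the vector-valued spaces.

```python
# Enumeration check of the polynomial dimension formulas quoted above.
from itertools import combinations_with_replacement

def n_monomials_of_degree(l: int) -> int:
    # monomials x^a y^b z^c with a + b + c = l
    return sum(1 for _ in combinations_with_replacement("xyz", l))

for l in range(8):
    assert n_monomials_of_degree(l) == (l + 1) * (l + 2) // 2

def dim_scalar_up_to(n: int) -> int:
    return sum(n_monomials_of_degree(l) for l in range(n + 1))

for n in range(8):
    assert dim_scalar_up_to(n) == (n + 1) * (n + 2) * (n + 3) // 6
    # vector fields have three Cartesian components, hence three times as many dimensions
    assert 3 * dim_scalar_up_to(n) == (n + 1) * (n + 2) * (n + 3) // 2
print("dimension formulas verified")
```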
It remains only to verify (35d) when F = L and Hn = Π[0, n + 1]. We must show that if n ≥ 2 then LΠ[0, n] ⊆ Π[0, n] (39). From (30), R clearly maps Π[0, n] into itself (40), so (39) will follow from (32b) if it can be shown that ΛΠ[0, n] ⊆ Π[0, n] (41a) and ΓΠ[0, n] ⊆ Π[0, n] (41b). Since Λ = I_Π − Γ, (41a) follows from (41b); thus everything hinges on proving (41b). Lebovitz (1989) proves (41) directly by constructing explicit polynomial bases for ΛΠ[0, n] and ΓΠ[0, n] and showing that their total number is dim Π[0, n]. We give here an alternate proof which avoids some computation.
D. The case of the ellipsoid
We now show that (41b) is true whenever E is an ellipsoid, axisymmetric or not. We take the ellipsoid's principal axes as the coordinate axes, so that the equation of ∂E is for some positive constants A, B, C. Then the outward unit normal to ∂E isn = K/ K where, in an obvious notation, and Let D = K · ∇, so that To prove (41b) we choose any v ∈ Π [ 0, n] and try to show that Γv ∈ Π [ 0, n] when n ≥ 2. We know that Γv = ∇φ where φ solves (29). That is, If we can show that (45) Since aA + bB + cC is positive, we can divide by it and solve (46) for D −1 x a y b z c . Now let ψ ∈ Π [1, n − 1] and consider the function φ defined by Clearly φ ∈ Π [1, n + 1], and φ satisfies (45b). Can ψ be chosen in Π [1, n − 1] so that φ also satisfies (45a)? If so, we have proved (41b). Thus the question is whether, given v ∈ Π [ 0, n], we can find a ψ in Π [1, n − 1] such that Define G n−1 to be the set of all scalar fields α on E such that For any vector field v Gauss's theorem implies (49b) for the α computed from (48c). If also v ∈ Π [0, n] then clearly α also satisfies (49a), so α ∈ G n−1 . Therefore, to show that (48a) has a solution ψ ∈ Π [1, n − 1] it suffices to show that We establish (50) in two stages. First we prove that and then we prove that To prove (51a), note that if ψ ∈ Π [1, n − 1] and α = T ψ then the definition of T , (48b), makes (49a) obvious, while (49b) follows from Gauss's theorem. To prove (51b), we note that dim G n−1 = n(n + 1)(n + 2)/6 − 1 = dim Π [1, n − 1] , so it suffices to prove that T is injective, since in that At this point the chain of argument is complete. We have proved (50) and hence (41) when ∂E is the ellipsoid (42), oriented in any way relative to Ω. In consequence we have (39), so that (35) is verified when H = Π, H n = Π [ 0, n + 1] and F = L. It follows that when ∂E is an ellipsoid then Λ has an orthonormal basis consisting of velocity fields w 1 , w 2 , ... each of which is an eigenvector of L| Λ and is an inhomogeneous polynomial in r. This last fact makes available Kudlick's argument (Greenspan, 1968, p61) that +1 and −1 cannot be eigenvalues of any w n , so all the eigenvalues λ n of L| Λ satisfy −1 < λ n < 1.
The foregoing demonstration essentially hinges on the fact that the ellipsoid is a smooth quadratic surface, so that we can work in the functional spaces of polynomials which are square-integrable and infinitely differentiable. With a polynomial velocity field of Π[0, n], we have proved that the projection on the subspace Γ is an internal operation, i.e. Γ(v) still belongs to Π[0, n]. Since the subspace Λ of the mass conservative velocity field and Γ are orthogonal and complementary, it also means that the projection on Λ is also an internal operation for this polynomial space. However, it is easier to work with vector velocity fields of Γ because these vector fields are irrotational and simply described by a scalar function. With these remarks the operator L is also internal in the polynomial space Π[0, n] and polynomial eigenfunctions are possible.
A. Known properties
For an axisymmetric ellipsoid rotating about its axis of symmetry Bryan (1889) extracted from Poincaré (1885) paper a list of particular polynomial eigenvelocities belonging to the family described in the preceding section, and expressible in closed form in terms of Legendre functions. For the Poincaré problem Greenspan (1968) and [25] give a succinct description of such modes. These Poincaré modes are described in more detail than is usual in the literature in appendix B of the paper, this in order to count them and to make possible a proof in the next section that they are complete if supplemented by some geostrophic modes.
From appendix B, we shall keep in mind that the pressure field associated with the eigenmodes read: with 0 < |γ| < π/2 and where h(γ) is given by (B6). Then, the root γ serves in the relation between ξ, η, s and z (B8) and for the determination of the eigenfrequency through (B6). For m = 0 the polynomial solutions given by (52) have an important peculiarity. In that case, if γ 0 solves (53) so does −γ 0 , and the two coordinate systems (B8) generated from γ = γ 0 and γ = −γ 0 give the same pressure function q via (52). However, they give different eigenvalues λ in (B6), equal except for opposite signs. Hence they generate different velocity fields v in (8).
In ordinary eigenvalue problems, the eigenfunction has a unique eigenvalue, so it is better bookkeeping to regard the velocity field v rather than the pressure field q as the eigenfunction belonging to the eigenvalue λ.
As noted by Cartan (1922), Kudlick (1966) and Greenspan (1968, p65), the pressure functions (52) are inhomogeneous polynomials of degree l in the Cartesian coordinates x, y, z, a fact which can be verified from (B15b). Hence the velocity field v calculated via (8) from the q of (52) and the λ of (B6) has Cartesian components which are inhomogeneous polynomials of degree l − 1 in x, y, z.
One other observation will simplify the bookkeeping: when m ≠ 0, γ = 0 cannot be a root of (53), because the left side of (53) is the sum of two terms, one even and one odd in γ. The odd term must vanish at γ = 0, so the even term cannot; otherwise P_l^m(µ) would have a double zero at µ = 0. Being a nonzero solution of a second order linear ordinary differential equation, P_l^m can have no double zeros. When m = 0, the foregoing argument also shows that γ = 0 cannot be a root of (53) if l is odd. If l is even and m = 0, then γ = 0 must be a root of (53). This produces γ = 0, and thus λ = 0 in (B6). But γ = 0 cannot be used in (B8) to generate a curvilinear coordinate system, so there is no pressure field (52) or velocity field (8) corresponding to the root η = 0 of (53) when m = 0 and l is even. This gap is easily repaired. For λ = 0 the pressure field q = s^l = (x² + y²)^{l/2} (54a) and the velocity field obtained from it via (7), not (8), are solutions of (6) and (7). These are the classical geostrophic solutions. When l is even, (54a) is a polynomial in x, y, z of degree l, and the Cartesian components of (54b) are polynomials of degree l − 1. It seems reasonable to assign the eigenvalue λ = 0 and the pressure and velocity eigenfunctions (54) to the root γ = 0 of (53) when m = 0 and l is even.
These bookkeeping conventions permit a simple enumeration of the Poincaré polynomial solutions of (6) and (7). For each integer l ≥ 1 and each integer m in −l ≤ m ≤ l, let η be a root of [cos η ∂_η − m h(η)] P_l^m(sin η) = 0 (55a) with −π/2 < η < π/2 (55b). Set γ = η and find λ from (B6). Find q and v from (B8), (52) and (8), except when η = 0. The root η = 0 can appear only when m = 0 and l is even; in that case, find q and v from (54). Any q and v obtained in one of these ways will be called an (l, m)-Poincaré pressure polynomial and an (l, m)-Poincaré velocity polynomial. An (l, m) pressure polynomial is an inhomogeneous polynomial of degree l in x, y, z, and the Cartesian components of an (l, m) velocity polynomial are inhomogeneous polynomials of degree l − 1 in x, y, z. The foregoing discussion summarizes very briefly the classical literature on the Poincaré polynomial solutions of (6), (7) when ∂E is an ellipsoid symmetric about the axis of rotation of the fluid. We propose to supplement this classical work with a proof in section VI that the Poincaré velocity polynomials are complete. That proof requires a lower bound for the number N(l, m) of (l, m)-Poincaré velocity polynomials. Our bookkeeping conventions assure that N(l, m) is just the number of roots of (55). To calculate this number, define µ = sin η and g(µ) = h(η); from (B6b), g depends on µ and on a parameter that measures the flatness of the spheroid (56). Then (55) becomes (57). First, suppose m = 0: then l + 1 applications of Rolle's theorem to the expression (58) of the associated Legendre polynomials give the number of roots in this case. Next, suppose m ≠ 0. If µ is a root of (57) for this m, then −µ is a root for −m. As Greenspan (1968, p64) observes, this means that the Poincaré modes with m ≠ 0 are traveling waves. Therefore N(l, m) = N(l, −m), and we need calculate N(l, m) only when m > 0. To this end, define G(µ) by (60), so that (57a) becomes (61). Note (62), so that, using (58), we can now write (57a) as (63). Applying Rolle's theorem l + m times shows that the (l − m)'th degree polynomial ∂_µ^{l+m}(µ² − 1)^l has exactly l − m simple zeros in −1 < µ < 1. Therefore the l'th degree polynomial (1 − µ)^m ∂_µ^{l+m}(µ² − 1)^l has only these zeros and m zeros at µ = 1. Thus the same is true of the function G(µ)^m (1 − µ)^m ∂_µ^{l+m}(µ² − 1)^l. Then Rolle's theorem gives (63) at least l − m roots in −1 < µ < 1. Thus N(l, m) ≥ l − m (64). This inequality will suffice in section VI to prove the completeness of the Poincaré velocity polynomials when ∂E is an ellipsoid symmetric about the axis of rotation of the fluid. That proof will produce, as a byproduct, the conclusion that equality must hold in (64), so N(l, m) = l − m for m > 0 (65). One interesting consequence of (57) is that the eigenvalues λ of the Poincaré problem (6), (7) in an axisymmetric ellipsoid are dense in the interval −1 < λ < 1. Indeed, the eigenvalues belonging to m = 0 are already dense. To see this, observe that for m = 0 (57a) becomes ∂_µ P_l^0(µ) = 0. An integration by parts and an appeal to Legendre's equation show that the polynomials ∂_µ P_l^0(µ) with l = 1, 2, 3, ... are orthogonal on −1 < µ < 1 with weight function (1 − µ²). It follows (Szegö, 1967, p111) that their zeros are dense in that interval.
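The density of the m = 0 eigenvalues rests on the zeros of ∂_µ P_l^0 progressively filling the interval (−1, 1) as l grows. The sketch below is illustrative only; it computes these zeros numerically with numpy's Legendre polynomial class.

```python
# Zeros of dP_l/dmu in (-1, 1): l - 1 simple zeros, progressively filling the interval.
import numpy as np
from numpy.polynomial import legendre as Leg

for l in (4, 8, 16, 32):
    P_l = Leg.Legendre.basis(l)                  # Legendre polynomial of degree l
    zeros = np.real(P_l.deriv().roots())         # roots of its derivative
    zeros = zeros[(zeros > -1.0) & (zeros < 1.0)]
    print(l, len(zeros), float(zeros.min()), float(zeros.max()))
```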
VI. COMPLETENESS OF THE POINCARÉ VELOCITY POLYNOMIALS IN AN AXISYMMETRIC ELLIPSOID
The present section proves the claim made in its title: we wish to verify that the polynomials found by Bryan [1] for the spheroid indeed form the complete base that we expect for the ellipsoid. The proof depends on an appeal to section IV. As noted in that section, Π is the closure of ∪_{n=1}^{∞} Π[0, n]. Since Λ = ΛΠ and Λ is continuous, it follows that Λ is the closure of ∪_{n=1}^{∞} ΛΠ[0, n]. Therefore, to prove the completeness of the Poincaré velocity polynomials it suffices to prove that for each n the Poincaré polynomials of degree ≤ n constitute a basis for ΛΠ[0, n]. In fact, we shall see that they almost constitute an orthogonal basis. The first step in the proof is to show that, whatever the shape of the fluid volume E, dim ΛΠ[0, n] = n(n + 1)(2n + 7)/6 (68).
Second, when E is an axisymmetric ellipsoid rotating about its axis of symmetry, of course all the Poincaré eigenvelocity fields with degrees ≤ n are members of ΛΠ [ 0, n], so we finish the proof by showing that the number of linearly independent Poincaré modes of degree ≤ n is at least (68). Lebovitz (1989) establishes (68) for ellipsoids E by constructing a particular non-orthonormal polynomial basis for ΛΠ [ 0, n]. He says (p231, section 7) that such polynomial bases are available for all shapes E. We have not been able to verify this. Nevertheless, (68) is true for all shapes E. What fails for some non-ellipsoids (for example, the cube) is (41a). This does not rule out the existence of a complete polynomial basis for the Poincaré problem because (35) is not an equivalence.
To prove the ⊆ half of (74), suppose that v ∈ Π [ 0, n] ∩ Γ Π [ 0, n]. Then v = Γ v, so v = ∇ φ for some scalar field φ. We can calculate φ ( r ) as the line integral of v along a polygonal curve starting at 0, ending at r, and consisting of straight line segments parallel to the coordinate axes. This calculation succeeds even if E consists of several disconnected pieces, because a polynomial known in any open set is uniquely determined in all space, so the path of integration need not remain in E.
General idea
Having established (68), now we must count the Poincaré modes. Suppose ∂E is an ellipsoid symmetric about the axis of rotation of the fluid. Choose coordinates as in section V and let N (l, m) be as defined there. That is, for any integers l, m with l ≥ 1 and |m| ≤ l, N (l, m) is the number of (l, m)-Poincaré velocity polynomials, and also the number of roots of (55). Let η l, m,ν be those roots, with 1 ≤ ν ≤ N (l, m). Let λ l, m,ν be the eigenvalues obtained by setting γ = η l, m,ν in (B6). Let v l,mν be the corresponding (l, m)-Poincaré velocity polynomials, obtained either from (54b) or from and v l, m,ν ∈ Π [ 0, l − 1] .
We propose to prove that, after a modest amount of Gram-Schmidt orthogonalization, the v l, m,ν with l ≤ n+1 provide an orthogonal basis for ΛΠ [ 0, n]. We make no attempt to normalize these eigenvelocities by finding v l, m,ν . The proof requires two steps: (i) to show that the number of v l, m,ν with l ≤ n + 1 is at least dim ΛΠ [ 0, n]; (ii) to show that the v l, m,ν are linearly independent.
Step (ii) will be accomplished by showing that most of the v l, m,ν are mutually orthogonal and by dealing with the exceptions.
Poincaré polynomials are numerous enough
Step (i) requires counting the Poincaré velocity polynomials v_{l,m,ν} for which l ≤ n + 1. Their number is obviously the sum (77) of the N(l, m) over all l ≤ n + 1 and |m| ≤ l. Comparing (77) with (68), we see that step (i) is complete. If we can carry out step (ii), then the ≥ in (77) must be an equality. Hence the same must be true in (64), which parenthetically proves (65).
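The comparison of (77) with (68) can be checked numerically under the counts suggested above. The sketch below assumes N(l, m) = l − |m| for m ≠ 0, as in (65), and N(l, 0) = l − 1 (the zeros of ∂_µ P_l^0 in (−1, 1), with the µ = 0 root for even l counted as the geostrophic mode); the value taken for N(l, 0) is an assumption of the sketch, not a statement of the original text.

```python
# Counting check: total number of Poincare velocity polynomials of degree <= n
# equals dim Lambda Pi[0, n] = n (n + 1)(2n + 7)/6, under the assumed N(l, m).

def N(l: int, m: int) -> int:
    return l - 1 if m == 0 else l - abs(m)

for n in range(1, 12):
    total = sum(N(l, m) for l in range(2, n + 2) for m in range(-l, l + 1))
    assert total == n * (n + 1) * (2 * n + 7) // 6
print("mode count matches dim Lambda Pi[0, n]")
```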
This fact is also evident from the observation that each λ_{l,m,ν} and v_{l,m,ν} constitute an eigenvalue-eigenvector pair of the self-adjoint operator L : Π → Π. There remains the possibility that λ_{l,m,ν} = λ_{l′,m′,ν′} even though (l, m, ν) ≠ (l′, m′, ν′).
The case of accidental degeneracy
The foregoing case is called an accidental degeneracy. The question is to check that even in that case the two eigenmodes are still orthogonal, namely (78a) is still verified.
Finally, suppose that m = m and (l, ν) = (l , ν ) but λ l, m,ν = λ l , m,ν . (79a) When this happens we must have because if l = l then (79a) implies ν = ν . If we do have (79) then λ l, m,ν and λ l , m,ν produce the same γ in (B5) and the same coordinate system in (B8). Therefore the roots µ and µ of (57) must be the same for l and l and the given m. But from (56) g is a function of as well as µ. Suppose we ask how µ, µ and hence v l, m,ν and v l , m,ν vary as we change slightly. From (57a), ∂ µ is given by Here the terms in g can be calculated from (56a), can be expressed in terms of P m l by means of Legendre's equation, and when µ is a root of (57) then ∂ µ P m l can be expressed in terms of P m l . These substitutions convert (80) into Expression (58) of Legendre polynomials and the argument before equation (64) establish that P m l (µ) has no multiple zeroes. Therefore, at a root of (57) with m = 0 we must have P m l (µ) = 0. Hence, when m = 0 we can cancel P m l (µ) from (81) and obtain a formula for ∂ µ in which no terms depend on l except for l(l + 1) on the left. Since l = l , it follows that if m = 0 then Therefore, if m = 0 and is slightly altered, the eigenvalues of L belonging to v l, m,ν and v l , m,ν will become different and we will have (78). But from (B8) and (52), v l, m,ν and v l , m,ν depend continuously on , so (78a) remains true even at the original value of where (78b) fails. From (81), this argument will break down if m = 0, and that case must now be considered. All other Poincaré velocity polynomials are orthogonal to each other and to those with m = 0. When m = 0 there are two kinds of Poincaré velocity polynomials v l,0,ν , the proper (non-geostrophic) ones and, for even l, the geostrophic ones. There are proper Poincaré velocity polynomials with m = 0 only for l ≥ 3. By (37), all the proper ones have nonzero eigenvalues λ, while all the geostrophic ones have λ = 0. Therefore, as already noted by Greenspan (1965;1968, p54) and Kudlick (1966), the proper and geostrophic Poincaré polynomials are orthogonal to one another, and we can consider them separately.
First consider the proper Poincaré velocity polynomials with m = 0. The γ's needed to generate their coordinate systems (B8) and pressure fields (52) are obtained from sin γ = µ, where µ is a root of (57) with m = 0, i.e., For each fixed l, all the different roots of (83) generate different eigenvalues λ and hence mutually orthogonal Poincaré velocity polynomials. The only trouble comes when l = l and ∂ µ P 0 l (µ) and ∂ µ P 0 l (µ) have a common zero, µ 0 . We know no proof that rules this out, but if it does happen then all the Poincaré velocity polynomials produced by the different l which make µ 0 a root of (83) will be orthogonal to all other Poincaré velocity polynomials. They are linearly independent, being polynomials of different degrees, so they can always be orthogonalized by the Gram-Schmidt process. Perhaps one could prove them mutually orthogonal by perturbing ∂E into a slightly non-axisymmetric ellipsoid and using another continuity argument on (80). But this would require a discussion of the Lamé functions used to produce the analogue of (52) in a triaxial ellipsoid (Poincaré 1885;Cartan, 1922).
We now consider the geostrophic velocity polynomials (54b). They are obviously not mutually orthogonal, but are clearly linearly independent, being polynomials of different degrees. This finishes the proof that the Poincaré velocity polynomials are linearly independent, and accomplishes step (ii) of the overall argument. Thus the Poincaré velocity polynomials are complete in Λ for an axisymmetric ellipsoid E.
Orthogonalized geostrophic velocity polynomials
Although not necessary for the foregoing argument, it may be interesting to note that the Gram-Schmidt orthogonalization of the geostrophic velocity polynomials can be carried out explicitly. Write (54b) as v_l = C_l s f_n(s²/a²) φ̂, l = 2, 4, 6, ...
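A numerical version of this orthogonalization can be sketched as follows, assuming (as stated above) that the geostrophic velocities are azimuthal fields proportional to s^{l−1} and taking the volume inner product over a spheroid of semi-axes a and c; the chosen values of a and c and the truncation are arbitrary choices of the sketch.

```python
# Gram-Schmidt orthogonalization of azimuthal fields s**(l-1) over the spheroid
# s**2/a**2 + z**2/c**2 <= 1, with the volume inner product.
import numpy as np
from scipy.integrate import quad

a, c = 1.0, 0.5                                   # hypothetical semi-axes

def pair(p: int, q: int) -> float:
    """Volume inner product of s**p phi-hat and s**q phi-hat over the spheroid."""
    integrand = lambda s: s ** (p + q) * 2 * np.pi * s * 2 * c * np.sqrt(1 - (s / a) ** 2)
    return quad(integrand, 0.0, a)[0]

powers = [1, 3, 5, 7]                             # s**(l-1) for l = 2, 4, 6, 8
coeffs = []                                       # rows: coefficients on the powers of s
for k in range(len(powers)):
    new = np.zeros(len(powers)); new[k] = 1.0
    for old in coeffs:                            # subtract projections on earlier modes
        overlap = sum(ci * cj * pair(powers[i], powers[j])
                      for i, ci in enumerate(new) for j, cj in enumerate(old))
        norm2 = sum(ci * cj * pair(powers[i], powers[j])
                    for i, ci in enumerate(old) for j, cj in enumerate(old))
        new -= (overlap / norm2) * old
    coeffs.append(new)
print(np.round(np.array(coeffs), 4))              # orthogonalized combinations
```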
VII. CONCLUSIONS
In this work we first demonstrated that the Poincaré problem, which governs the inertial oscillations of a rotating fluid, can be formulated in the space of square-integrable functions without any hypothesis on the continuity or differentiability of the velocity fields. This formulation makes available many results of functional analysis. First, by restricting the velocity fields to those that verify incompressibility and the boundary conditions, in other words restricting them to a Hilbert subspace of the square-integrable vector fields, we could formulate the Poincaré problem as a simple eigenvalue problem, namely Lv = λv, showing in passing that the velocity field, rather than the pressure, is the appropriate variable for this formulation. It turns out that the operator L is bounded and self-adjoint, of norm less than or equal to unity. Hence, the spectrum of L is real and lies in the interval [−1, +1] of the real axis of the complex frequency plane. A theorem of functional analysis [e.g. 33] states that the residual spectrum of such an operator is empty. Hence, the interval [−1, +1] is shared by the eigenvalues (the point spectrum) and the continuous spectrum, the two sets being disjoint and complementary. This first part gives the general framework that can be used to analyse the Poincaré problem in any type of volume.
From the foregoing background, we could show that the inertial modes of a rotating fluid contained in an ellipsoid are polynomial velocity fields and form a complete base for square-integrable vector fields defined over this volume. We thus confirm in an independent and more direct way a result of Lebovitz [22]. We also show that the inertial modes of a spheroid, first obtained by Bryan [1], form the expected base when they are completed by the geostrophic modes. We here confirm, independently, the same result obtained for the sphere by Ivers et al. [24].
Our work shares many results with those obtained in [24], but these authors restricted their analysis, at the outset, to continuously differentiable velocity fields and exhibited the completeness of the inertial base for the sphere only. In their conclusion they observe that they could have used an extension of their functional space so as to work in a Hilbert space and use the ensuing results of functional analysis. Our work thus follows up on this conclusion, but shows in addition that the plain Hilbert space of square-integrable functions is sufficient for that purpose (instead of the closure of the set of once-continuously differentiable functions). However, both works shed light on the various properties of the Poincaré problem.
Because the Poincaré problem is hyperbolic with boundary conditions, and thus ill-posed, the geometry of the container is crucial to the properties of the eigenspectrum. As shown in [13], the information propagated by characteristics has to be consistent to lead to regular solutions. To give a physical picture, hyperbolic problems are well-posed with initial conditions, while here we impose initial and final conditions, which may not be compatible. Hence, each geometry is a specific case. Except for the ellipsoid and the annular channel [23], it is unknown whether the Poincaré problem has a complete set of eigenvelocities. Two non-ellipsoidal examples have been considered, the cube and the spherical shell, but the proof of (in)completeness remained elusive. In view of the results of Rieutord et al. [13] for the spherical shell and Nurijanyan et al. [41] for the rectangular parallelepiped, it may well be that the eigenvalue spectrum is almost empty for both of these volumes. On the other hand we have known since Kelvin [42] that the cylinder admits eigenmodes, but the completeness of their set remains an open question. The present work may give a route towards the answer.
Historical note: The main body of this work was written by GB in the early 1990's but, when submitted to journals in 1993, did not meet its readership. Twenty years after, the subject of oscillations of rotating fluids has been strongly revived and after the demonstration of [24], the present work sheds new light on the math-ematical questions associated with inertial modes. The contribution of MR to the original work has been in updating the introduction and conclusion, and making the text less difficult when possible.
where we aligned the rotation axis with the z-axis. Thus, we obtain an expression which we now integrate over the fluid volume. We finally obtain (A4), where we used that ∫_V v* · ∇q dV = 0, which trivially follows from mass conservation and boundary conditions when the velocity field is differentiable, but which is also true for merely square-integrable velocity fields thanks to (15), since v* ∈ Λ. Hence, from (A4), we find that ∇q = 0. Now we need to check that the vanishing pressure gradient implies a vanishing velocity field. From the equations of motion, we immediately find that v_z = 0 and v_y = ±i v_x (A5). So the motion, if it exists, is only a planar flow, perpendicular to the rotation axis. Then, mass conservation demands that v ∈ Λ (cf Eq. 17), which means that for every φ ∈ Π∞ we have ∫_V v · ∇φ* dV = 0 (A6). With (A5), setting f = ∂_x φ − i ∂_y φ, it also means that for any f ∈ Π∞ we have ∫_V v_x f* dV = 0. Thus v_x is orthogonal to all infinitely differentiable complex scalar functions defined on the volume V. It can only be zero, and so is the velocity field. Hence, ±1 are not eigenvalues of the Poincaré problem.
Let us now comment on this mathematical result from a more physical viewpoint. The fact that the numbers ±1 are excluded from the eigenvalue spectrum comes from the fact that the fluid's domain is bounded. To see this, it suffices to consider the propagation of the characteristics that are associated with the Poincaré operator. In a meridional section of the fluid's volume, these characteristics are straight lines that bounce on the boundaries [e.g. fig. 8 or 9 in 15]. When the frequency gets close to unity, the characteristics become almost perpendicular to the rotation axis and, as they bounce on the boundaries, they form a very dense web of lines. If we recall that characteristic lines are the trace of equiphase surfaces, we understand that the phase oscillates very rapidly in the z-direction. In other words, the wavenumber k_z tends to infinity. Thus no mode can exist at λ = ±1, while there is no impediment for a wave propagating in the direction parallel to the rotation axis in an unbounded domain. tan γ = (c/a) λ (1 − λ²)^(−1/2). | 2017-05-29T08:01:04.000Z | 2016-06-07T00:00:00.000 | {
"year": 2017,
"sha1": "b9d5bd7e9bb46cb66788fa9373897f5467371e67",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevE.95.053116",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "b9d5bd7e9bb46cb66788fa9373897f5467371e67",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics",
"Medicine"
]
} |
240071157 | pes2o/s2orc | v3-fos-license | Serum miR-181b-5p predicts ascites onset in patients with compensated cirrhosis
Background & Aims Treatment with non-selective beta-blockers (NSBBs) reduces the risk of ascites, which is the most common decompensating event in cirrhosis. This study aimed to assess the ability of a serum microRNA (miRNA) signature to predict ascites formation and the hemodynamic response to NSBBs in compensated cirrhosis. Methods Serum levels of miR-452-5p, miR-429, miR-885-5p, miR-181b-5p, and miR-122-5p were analyzed in patients with compensated cirrhosis (N = 105). Hepatic venous pressure gradient (HVPG) was measured at baseline, after intravenous propranolol, and 1 year after randomization to NSBBs (n = 52) or placebo (n = 53) (PREDESCI trial). miRNAs were analyzed at baseline and at 1 year. Results Nineteen patients (18%) developed ascites, of whom 17 developed ascites after 1 year. miR-181b-5p levels at 1 year, but not at baseline, were higher in patients that developed ascites. The AUC of miR-181b-5p at 1 year to predict ascites was 0.7 (95% CI 0.59–0.78). miR-429 levels were lower at baseline in acute HVPG responders to NSBBs (AUC 0.65; 95% CI, 0.53–0.76), but levels at baseline and at 1 year were not associated with the HVPG response to NSBBs at 1 year. Conclusions Serum miR-181b-5p is a promising non-invasive biomarker to identify patients with compensated cirrhosis at risk of ascites development. Lay summary Ascites marks the transition from the compensated to decompensated stage in cirrhosis and indicates a worsening in prognosis. There are currently no easily accessible tools to identify patients with compensated cirrhosis at risk of developing ascites. We evaluated the levels of novel molecules termed microRNAs in the blood of patients with compensated cirrhosis and observed that miR-181b-5p can predict which patients are going to develop ascites.
Introduction
Ascites develops in 5%-10% of patients with compensated cirrhosis per year and represents the most frequent cause of decompensation. 1,2 Ascites results from an increase in portal pressure - estimated by the hepatic venous pressure gradient (HVPG), which must increase above 10 mmHg, i.e., clinically significant portal hypertension (CSPH) - and from the development of circulatory dysfunction and renal sodium retention. The development of CSPH is a hallmark in the natural history of cirrhosis, since it places patients at risk of ascites and other complications of portal hypertension. 3 We currently lack other predictors of ascites formation and, in this regard, there is a clinical need for non-invasive biomarkers. A recent study has shown that non-selective beta-blockers (NSBBs) reduce the risk of decompensation and death in patients with compensated cirrhosis and CSPH by approximately half. 2 The benefit of NSBBs was predominant in patients with HVPG decreasing >10% from baseline at 1 year. MicroRNAs (miRNAs) are non-coding RNAs that regulate cellular processes by repressing the translation of mRNA. 4 miRNAs are secreted by cells and retrieved in body fluids, which, in conjunction with their high stability, makes them attractive and accessible biomarkers. 5 Our group has recently identified a serum signature of 5 miRNAs - miR-452-5p, miR-429, miR-885-5p, miR-181b-5p, and miR-122-5p - in decompensated cirrhosis that differentiates patients with diuretic-sensitive from those with refractory ascites (through miR-181b-5p) and identifies those achieving an HVPG response to NSBBs (by means of miR-452-5p and miR-429). 6 A recent study has also characterized a set of circulating miRNAs that are altered in patients who progress to acute-on-chronic liver failure. 7 The aim of the present study was to evaluate our recently described serum miRNA signature as a biomarker to predict ascites formation and the HVPG response to NSBBs in a large cohort of patients with compensated cirrhosis and CSPH.
Patients and methods
Patients and study design
Serum samples were obtained from a registered collection (C.0005353, Collections Section, Biobank Registry of Instituto de Salud Carlos III, Madrid, Spain) of patients with compensated cirrhosis included in a randomized trial to evaluate the ability of NSBBs to prevent decompensation (PREDESCI trial; NCT01059396). 2 Cirrhosis was diagnosed on histological or compatible clinical, biochemical, and ultrasonographic findings. All patients' HVPG was measured at baseline and 1 year after being randomized to NSBBs or placebo. Only patients with baseline HVPG ≥10 mmHg were included. Patients were followed up to clinical decompensation (ascites, gastrointestinal bleeding, or overt hepatic encephalopathy) or death.
The novel miRNA signature - miR-452-5p, miR-429, miR-885-5p, miR-181b-5p, and miR-122-5p - was analyzed in serum samples collected and archived at baseline and at 1 year after randomization. 8 The study protocol adhered to the ethical guidelines of the Declaration of Helsinki and was approved by the Ethics Committee for Clinical Research (IRB number: 362/19). All participants granted written informed consent authorizing the storage and research use of their samples.
miRNA signature determination in serum samples
Extraction of total RNA and subsequent analysis of individual miRNAs by quantitative real-time PCR were performed following previously reported technical steps, and miR-103a-3p was used as the reference gene for data normalization. 6 The relative levels of each miRNA were calculated with the comparative Ct (cycle threshold) method: ΔCt, with ΔCt = average Ct of the tested miRNA − average Ct of miR-103a-3p (average Ct = mean of the technical triplicate). ΔCt is inversely correlated with the amount of miRNA in the serum (i.e., the higher the quantity or number of copies of miRNA, the lower the ΔCt). Detailed information is displayed in the supplementary materials and methods.
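As an illustration of the comparative Ct normalization described above, a minimal sketch in Python is shown below; the example triplicate values are invented for illustration, and only the ΔCt arithmetic follows the text.

```python
import statistics

def delta_ct(target_ct_triplicate, reference_ct_triplicate):
    """Comparative Ct method: dCt = mean Ct(target miRNA) - mean Ct(miR-103a-3p).

    Higher miRNA abundance in serum gives a lower dCt (inverse relationship).
    """
    mean_target = statistics.mean(target_ct_triplicate)       # average of the technical triplicate
    mean_reference = statistics.mean(reference_ct_triplicate)
    return mean_target - mean_reference

# Hypothetical triplicates for one serum sample (values are illustrative only)
mir_181b_5p = [27.1, 27.3, 27.0]
mir_103a_3p = [24.8, 24.9, 24.7]   # reference gene used for normalization
print(f"dCt(miR-181b-5p) = {delta_ct(mir_181b_5p, mir_103a_3p):.2f}")
```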
Statistical analysis
Sample size was not prespecified. Instead, we included all patients from the PREDESCI trial with archived serum samples collected at baseline and 1 year. 2 A multivariate Cox proportional hazards regression was used with adjustment for treatment arm and first-order interaction tested with a global likelihood ratio test. We plotted receiver-operating characteristic (ROC) curves and calculated the AUC and corresponding 95% CI to evaluate the diagnostic performances of the significant miRNAs. We performed a sensitivity analysis calculating time-to-event ROC curves in a survival data framework. 9 All analyses were 2-tailed and significance was set at p <0.01 after adjustment for multiple comparisons (supplementary materials and methods).
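The ROC/AUC evaluation can be reproduced in outline with standard tooling; the snippet below is a schematic example only (simulated labels and ΔCt values, and a simple percentile bootstrap for the 95% CI), not the authors' actual analysis code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated data: 1 = developed ascites, 0 = did not (illustrative only).
# Lower dCt means more circulating miRNA, so we score with -dCt.
y = np.array([1] * 19 + [0] * 86)
dct = np.concatenate([rng.normal(2.0, 1.0, 19), rng.normal(2.8, 1.0, 86)])
score = -dct

auc = roc_auc_score(y, score)

# Percentile bootstrap for an approximate 95% CI of the AUC
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y), len(y))
    if len(set(y[idx])) < 2:            # skip resamples containing a single class
        continue
    boot.append(roc_auc_score(y[idx], score[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```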
Patient characteristics
We included 105 out of the 201 patients enrolled in the PREDESCI trial. The clinical characteristics of the study cohort and those of the patients excluded for lacking paired archived samples were similar (Table 1). Fifty-two (49.5%) and 53 (50.5%) patients belonged to the NSBB and placebo groups, respectively. The median follow-up was 36.2 months (IQR 26.3-47.6).
Serum miRNA levels and ascites formation
Of the 105 patients, 19 (18%) developed ascites during follow-up, 6 (11.5%) and 13 (24.5%) in the NSBB and placebo group, respectively. The median time to ascites formation was 26.3 months. The serum levels of all 5 miRNAs were reduced at 1 year (Fig. S1). Baseline miRNA levels did not associate with ascites development (Fig. S2). miR-181b-5p serum levels at 1 year were significantly higher in patients who eventually developed ascites than in those who did not (p = 0.001), with no differences in the levels of the other miRNAs (Fig. 1).
Serum miRNA levels and the hemodynamic response to NSBBs
miRNA levels showed no correlation with the HVPG at baseline (Table S2). An acute hemodynamic response after propranolol was achieved in 72 of the 105 patients (68.5%). Serum miR-429 baseline levels were lower in responders than in non-responders (p = 0.01), with no differences for the other miRNAs (Fig. S3). The AUC of miR-429 to identify an acute response to NSBBs was 0.65 (95% CI 0.53-0.76).
miRNA levels at 1 year were similar in patients in the NSBB and placebo groups (Table S3). HVPG was reduced ≥10% from baseline at 1 year in 53% and 28% of the patients in the NSBB and placebo group, respectively (p = 0.015). In patients on NSBBs, miRNA levels at baseline and 1 year were not associated with the HVPG response at 1 year (Table S4).
Discussion
Our study shows that circulating miR-181b-5p is altered in patients with compensated cirrhosis and that its elevated serum levels are associated with ascites development. These findings extend the results of our previous study to compensated cirrhosis. Thus, miR-181b-5p is emerging as a novel non-invasive biomarker of ascites formation and progression through the stages of cirrhosis.
The goal of therapy in patients with compensated cirrhosis is to prevent decompensation. Accordingly, predictors of disease progression, specifically ascites formation, are hugely important. Circulating miRNAs have been extensively investigated in chronic liver disease 10 and, to a lesser extent, in decompensated cirrhosis 7,11 to determine their association with cirrhosis etiology and prognosis. However, we are unaware of studies addressing the value of circulating miRNAs in the transition from compensated to decompensated cirrhosis.
The serum levels of the 5 miRNAs comprising the signature significantly changed after 1 year of follow-up, illustrating their dynamic levels as cirrhosis evolves. However, only miR-181b-5p was associated with progression from compensated to decompensated cirrhosis. Indeed, miR-181b-5p levels predicted ascites occurrence since they were higher in individuals that developed ascites in subsequent months. As reinforced by the time-to-event analysis at different points, lower serum miR-181b-5p levels were associated with a low risk of ascites formation in the shortterm. Notably, the predictive ability of miR-181b-5p in serum declined in parallel with the time elapsed to ascites onset, which is the most likely explanation of the absence of an association between baseline levels and ascites occurrence. We hypothesize that the greater ability to predict ascites appearance of miR-181b-5p levels at 1 year compared to those at baseline was due to the closer temporal relationship to the event of the former. Furthermore, NSBBs did not appear to influence the predictive ability of miR-181b-5p.
Previous evidence could provide some clues to understand the association between serum miR-181b-5p and ascites onset. Ascites in cirrhosis results from interrelated mechanisms, including progressive portal hypertension, endothelial dysfunction, and functional renal impairment. In this regard, miR-181b-5p seems involved in endothelial dysfunction and cardiovascular remodeling in different settings, such as vascular stiffness signaling in arterial hypertension, 12 diabetic cardiomyopathy development in mice, 13 and progression of abdominal aortic aneurysms. 14 In cirrhosis with ascites, we observed a direct correlation between circulating levels of miR-181b-5p and creatinine. 6 In the same line, miR-181b-5p, along with other circulating miRNAs, could be used to identify progression to acute-on-chronic liver failure in decompensated cirrhosis. 7 The value of circulating miR-181b-5p to predict cirrhosis progression is independent of etiology, since this property of miR-181b-5p has been identified in both alcohol-and virus-related cirrhosis. 6,7,15,16 NSBBs reduce the risk of portal hypertension-related complications in compensated and decompensated cirrhosis. 2,[17][18][19] The hemodynamic response to NSBBs is heterogeneous and we lack optimal non-invasive alternatives to assess it. In the present study, miR-429 levels were lower in patients who achieved an acute HVPG response to NSBBs, a finding in agreement with results in the decompensated setting, where NSBB responders at 1 month also showed lower levels of this miRNA. 6 Interestingly, miR-429 was not associated with the hemodynamic response to NSBBs in the long-term, a finding similar to the low correlation observed between baseline miR-181b-5p levels and ascites development. Taken together, these results suggest that the more distant the occurrence of the event of interest, the lower the predictive ability of the miRNA.
The results of this study represent a step forward in the search for innovative non-invasive biomarkers of cirrhosis progression and the response to NSBBs. In this study, we measured miRNA by the same robust and affordable technique described in decompensated cirrhosis. 6 We identified the same miRNA as predictors of disease progression (i.e., miR-181b-5p) and response to NSBBs (i.e., miR-429) in the compensated and decompensated settings, reinforcing the reproducibility of our results. We acknowledge some limitations. First, the analysis included only those patients from the original trial with archived samples at baseline and at 1 year suitable for analysis as well as with HVPG assessment at 1 year. However, a related bias seems unlikely since we did not find significant differences between the cohorts. Second, the number of patients that developed ascites was low. However, our study cohort was representative of the original trial in terms of treatment groups, ascites development, and hemodynamic response. Finally, our findings require external validation.
In summary, this study supports the usefulness of serum levels of miR-181b-5p as a non-invasive biomarker to anticipate ascites development in patients with compensated cirrhosis and CSPH who could benefit from a personalized follow-up.
Financial support
Supported by grants from the Ministerio de Ciencia e Innovación and Instituto de Salud Carlos III (SAF 2017-86343-R awarded to A.A., PI20/ 01302 to A.A., PI18/01901 to R.B., CIBEREHD-16PI03 and PI20/00220 to J.G.S.). A.G.G.P is the recipient of a grant from Ministerio de Ciencia e Innovación and Instituto de Salud Carlos III (Contrato Rio Hortega CM18/ 00091). Centro de Investigación Biomédica en Red en Enfermedades Hepáticas y Digestivas (CIBEREHD) is funded by the Instituto de Salud Carlos III with grants cofinanced by the European Development Regional Fund "A way to achieve Europe" (EDRF). Supported in part by a grant from Gilead Sciences (GLD19/00045). | 2021-10-19T15:21:16.993Z | 2021-09-01T00:00:00.000 | {
"year": 2021,
"sha1": "a7c540c0f08983e7a99b1b82244ed660804d59a2",
"oa_license": "CCBYNCND",
"oa_url": "http://www.jhep-reports.eu/article/S2589555921001440/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fd58c51655bc0111f98394e25e12bf71f98ed1aa",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
240747768 | pes2o/s2orc | v3-fos-license | Adsorption of Strontium onto Synthetic Iron(III) Oxide up to High Ionic Strength Systems
: In this work, the adsorption behavior of Sr onto a synthetic iron(III) oxide (hematite with traces of goethite) has been studied. This solid, which might be considered a representative of Fe 3+ solid phases (iron corrosion products), was characterized by X-Ray Diffraction (XRD) and X-Ray Photoelectron Spectroscopy (XPS), and its specific surface area was determined. Both XRD and XPS data are consistent with a mixed solid containing more than 90% hematite and 10% goethite. The solid was further characterized by fast acid-base titrations at different NaCl concentrations (from 0.1 to 5 M). Subsequently, for each background NaCl concentration used for the acid-base titrations, Sr-uptake experiments were carried out involving two different levels of Sr concentration (1 × 10 − 5 and 5 × 10 − 5 M, respectively) at constant solid concentration (7.3 g/L) as a function of − log([H + ]/M). A Surface Complexation Model (SCM) was fitted to the experimental data, following a coupled Pitzer/surface complexation approach. The Pitzer model was applied to aqueous species. A Basic Stern Model was used for interfacial electrostatics of the system, which includes ion-specific effects via ion-specific pair-formation constants, whereas the Pitzer-approach involves ion-interaction parameters that enter the model through activity coefficients for aqueous species. A simple 1-p K model was applied (generic surface species, denoted as >XOH − 1/2 ). Parameter fitting was carried out using the general parameter estimation software UCODE, coupled to a modified version of FITEQL2. The combined approach describes the full set of data reasonably well and involves two Sr-surface complexes, one of them including chloride. Monodentate and bidentate models were tested and were found to perform equally well. The SCM is particularly able to account for the incomplete uptake of Sr at higher salt levels, supporting the idea that adsorption models conventionally used in salt concentrations below 1 M are applicable to high salt concentrations if the correct activity corrections for the aqueous species are applied. This generates a self-consistent model framework involving a practical approach for semi-mechanistic SCMs. The model framework of coupling conventional electrostatic double layer models for the surface with a Pitzer approach for the bulk solution earlier tested with strongly adsorbing solutes is here shown to be successful for more weakly adsorbing solutes.
Introduction
The management of Spent Nuclear Fuel (SNF) is of utmost importance for assuring the safety of future generations. Nowadays, two different SNF management strategies can be distinguished. In one, the fuel is reprocessed to extract fissile nuclides (U-235 and Pu-239) in order to fabricate mixed-oxide fuel, and the resulting high-level nuclear waste is vitrified for final disposal. In the other, SNF is simply considered as a waste and stored pending final disposal in deep repository systems, which is envisaged as the best option to handle the hazards that may otherwise be caused by those types of waste [1]. Briefly, in such facilities, the waste is placed in conditioned galleries buried in deep geological repositories in specific host-rock environments. For instance, depending on the country and available options, different options are considered concerning the host-rock, in saline, clay, or granitic formations [2][3][4][5]. This disposal strategy involves an Engineered Barrier System (EBS) designed to mitigate the potential release of radionuclides from the repository to the near- and far-field. The major perturbation scenario starts with the intrusion of the site-specific (pore) waters into the repository, which causes corrosion and alteration reactions in the near-field, resulting in the mobilization and migration of radionuclides. The EBS and natural safety barriers have been and are being studied in the context of reactions with water, and also concerning the retention of radionuclides for the required nuclear waste disposal management settings.
One potential barrier, which typically is not considered in safety assessment concerning the retention of mobilized radionuclides, is the corroded or partially corroded steel canister. While the intact canister in clay formations is expected to isolate radionuclides for about 10,000 years, upon contact with pore-water, steel canisters will corrode, resulting in the formation of, e.g., iron oxides as corrosion products [6]. Such iron oxides are known to be strongly adsorbing minerals. Magnetite, for example, has been shown [7] to adsorb nearly 100% of trivalent cations, even in the presence of 5 M NaCl, i.e., a result comparable to data obtained with much lower salt content. While magnetite is expected to form in the long-term, Fe(III)oxyhydroxides, such as hematite (α-Fe 2 O 3 ) or goethite (α-FeOOH), might form during the early, aerobic/oxic phase of repository evolution [6].
Within the SNF material, the fast/Instant Release Fraction (IRF) of radionuclides is of utmost importance, given that this fraction of radionuclides will be directly mobilized upon exposure to water after a safety barrier failure [8,9]. Sr, and specifically 90 Sr, is part of the elements within the IRF and is released fastest under aerobic/oxic conditions [9]. Typically, 90 Sr, with its short half-life of about 30 years, is not relevant for safety assessment, but Sr is rather taken as an indicator in the context of matrix-corrosion of the nuclear fuel. 90 Sr is more relevant in the context of fallout from nuclear accidents [10] and nuclear weapons [11]. Various comprehensive papers on Sr interaction with mineral surfaces have been recently published [12,13]. In terms of solute mineral surface interaction, Sr adsorbs on oxides at relatively high pH values corresponding to its weak hydrolysis [14]. In this work, Sr as a rather weakly adsorbing cation is studied to better understand adsorption in highly saline conditions in contrast to more strongly binding trivalent actinides, which can be nearly completely bound, as discussed above. In this way, the ability of surface complexation models to deal with highly saline concentrations can also be tested. The salt level dependence for metal ion adsorption onto oxide minerals has frequently been studied and interpreted [15][16][17], but the concentration of the monovalent electrolytes typically has been limited to 1 M. Surface complexation models are known to be able to describe such data [15]. Much less is known regarding the ability of the models to describe uptake data at even higher background salt concentrations.
Some countries discuss settings for nuclear waste disposal that involve high salt concentrations of the relevant waters [18]. As an example, in Germany some options for repositories in areas that would involve highly saline aqueous solutions (up to saturated brines, i.e., for rock-salt about 5 M NaCl at room temperature) have been discussed [19], and experimental data and models for treating such systems are required. While the stateof-the-art in treating solution speciation and solubility in brine solutions using the Pitzer approach is established, and for many relevant systems is quite advanced [20], experimental determination of uptake data and the application of thermodynamic adsorption models to high salt levels are rather scarce.
Early experimental studies on Sr adsorption on clays [21] with high salt levels, as well as the associated modelling attempts [22], may be criticized because the details of pH measurements are not given. The same is true for the titrations of mineral oxides up to high pH, such as silica in up to 4 M NaCl [23] or ferrihydrite in up to 6 M NaCl [24]. For high salt levels, specific precautions must be taken to obtain meaningful data from electrode readings [25][26][27]. From our point of view, self-consistent and comprehensive data covering both titrations and the uptake of solutes in brine solutions have recently gained importance [7,28]. In the iron oxide-hydroxide systems, as well as in the work on clays [27,29,30], the associate potentiometric titrations to design the acid-base models were carried out up to 1 M concentration, while the titrations by Garcia et al. [28] were carried out up to 5 M. The modelling in the above cited papers (involving self-consistent data) coupled non-electrostatic [27,29,30] or electrostatic [7,28] surface complexation models with the Pitzer or Specific ion Interaction Theory (SIT) approaches [31][32][33][34]. Garcia et al. [28] also showed that the acid-base model involved strong shielding of the charge in the Stern layer, particularly at high salt levels, which results in very low diffuse layer potentials. As a consequence, the application of the Gouy-Chapman equation that is included in these models can be justified. The previous electrostatic models that were applied up to the high salt levels [7,28] involved a surface complexation scheme with reaction equations that could, in principle, be linked to the mechanistic level. The non-electrostatic models applied in the high salt concentration context [27,29,30] might also be interpreted, to some extent, in this way. Generic models, which simplify, however, are expected to be favored in many experimental settings where the mechanistic models cannot be applied due to the lack of adsorbent characterization [35]. For such settings, simpler models will be required, and one purpose of the present work is to test a generic 1-site, 1-pK Basic Stern model [35] with a limited, but consistent, set of experimental data up to 5 M NaCl concentrations.
Taken together, to further fill the gaps in the study of adsorption up to high salt level, we present another self-consistent set of data involving Sr adsorption on a synthetic iron(III) oxide including surface titrations. Although the choice of the system is related to the above discussed occurrence of Fe(III) minerals under aerobic/oxic conditions in a potential repository, the solid used here should rather be considered as a model sorbent because other iron minerals are more relevant. Hematite and goethite are highly relevant in soils [36].
Materials
The solid was synthesized by rapidly neutralizing a FeCl3 solution (0.45 M) with KOH (2.5 M) under argon atmosphere. This results in the formation of ferrihydrite, which is subsequently transformed to a crystalline form by heating at 90 °C for two days. The resulting solid was then dialysed with MilliQ (18.2 MΩ·cm, TOC < 5 ppb) water until the pH of the washing water was that of the MilliQ water (Merck-Millipore, Darmstadt, Germany). The resulting suspension was stored with a solid content of 82 g/L at pH 7.5 under argon in a plastic container. No glassware was used during preparation and storage. Aliquots of the suspension were centrifuged (Heraeus Instruments, Hanau, Germany, Megafuge 2.0, 3500 rpm for 15 min) and subsequently washed with MilliQ water. This was repeated three times, whereupon the solid paste was dried under an argon atmosphere. The specific surface area (measured by BET using nitrogen gas, AUTOSORB-1, Quantachrome Corporation, Boynton Beach, FL, USA) of the solid particles was 21 m²/g.
The solid phase was characterized by X-ray powder diffraction using a D8 Advance diffractometer (Bruker AXS, Karlsruhe, Germany) equipped with a Cu anode and a Sol-X detector. Data were recorded from 10° to 75°, with a step size of 0.015° and a counting time of 3 seconds per step. Phases were identified using the DIFFRAC.EVA v5.0 software (Bruker AXS, Karlsruhe, Germany) by comparison with the PDF-2 database, and quantitative results were obtained from fits to the experimental powder diffractogram using the DIFFRAC.TOPAS v6.0 software (Bruker AXS, Karlsruhe, Germany) and reported crystallographic data of hematite [37] and goethite [38].
The powder was pressed onto indium foil and studied by an XPS system PHI 5000 VersaProbe II (ULVAC-PHI Inc., Chigasaki, Japan) equipped with a scanning microprobe X-ray source (monochromatic Al Kα, hν = 1486.7 eV). An electron flood gun generating low energy electrons (1.1 eV) and low energy argon ions (8 eV) by a floating ion gun were applied for charge compensation at isolating samples (dual beam technique), respectively. Survey scans were recorded with a pass energy of the analyzer of 187.85 eV. Narrow scans of the elemental lines were recorded at 11.75 eV pass energy, which yields an energy resolution of 0.59 eV FWHM (full width half maximum) at the Ag 3d 5/2 elemental line of pure silver. Calibration of the binding energy scale of the spectrometer was performed using well-established binding energies of elemental lines of pure metals (monochromatic Al Kα: Cu 2p 3/2 at 932.62 eV, Au 4f 7/2 at 83.96 eV) [39]. The error of binding energies of elemental lines is estimated at ± 0.2 eV. The O 1s elemental line of α-Fe 2 O 3 at 529.6 eV is used as a charge reference [40].
Atomic concentrations were calculated by areas of elemental lines of survey spectra after subtraction of a local Shirley background, taking into account sensitivity factors and asymmetry parameters of elemental lines and the transmission function of the analyzer. Relative error of atomic concentrations were within ±(10-20)%. Curve fits to narrow scans of elemental lines were performed by Gaussian-Lorentzian sum functions after Shirley background subtraction. Data analysis was performed using the ULVAC-PHI MultiPak program, version 9.9 (ULVAC-PHI Inc., Chigasaki, Japan).
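As a rough illustration of how atomic concentrations follow from the peak areas, the sketch below applies the usual sensitivity-factor normalization; the numerical areas and sensitivity factors are placeholders and not values from this study, and transmission/asymmetry corrections are folded into the sensitivity factors for simplicity.

```python
def atomic_concentrations(areas, sensitivity_factors):
    """Atomic fraction of element i: (A_i / S_i) / sum_j (A_j / S_j).

    areas and sensitivity_factors are dicts keyed by elemental line, e.g. 'Fe 2p'.
    """
    normalized = {el: areas[el] / sensitivity_factors[el] for el in areas}
    total = sum(normalized.values())
    return {el: val / total for el, val in normalized.items()}

# Placeholder peak areas (arbitrary units) and relative sensitivity factors
areas = {"Fe 2p": 120_000.0, "O 1s": 66_000.0}
rsf = {"Fe 2p": 2.96, "O 1s": 0.71}
conc = atomic_concentrations(areas, rsf)
print({el: round(c, 2) for el, c in conc.items()})  # the O/Fe ratio follows from these fractions
```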
Surface Titrations and Zeta Potentials Measurements
The solid particles were subject to acid-base titrations (848 Titrino plus, Metrohm, Herisau, Switzerland) at different NaCl concentrations: 0.1, 1.0, 3.0, and 5.0 M. The concentration of the particles was 1.39 g/L, and the titrant was 0.01 M HCl. Titration was initiated from alkaline conditions. The suspension was equilibrated overnight under humidified argon (not in a glove box, no bubbling). This atmosphere was also imposed for all titrations. The solid content was relatively low, and with a starting volume of 20 mL, only a few titration points per experiment were obtained. For the titration, 50 µL drops of titrant (0.01 M HCl) were added, and a data point was accepted when the change in pH was below 0.01 for a time span of 20 s. As a first step, 250 µL of 0.01 M NaOH were added to increase the pH to −log([H + ]/M) ≥ 9 (except for the highest salt concentration, where the addition of the base solution resulted in a lower value after equilibration). With this procedure for each ionic strength, around 5-8 points were obtained. For each titration, a separate suspension was prepared. Calibration of the measurement setup involved standard buffers (at least five commercial buffers from Merck, between pH 2 and 10). To obtain meaningful data, we considered the "A"-factor [25], which allows the correction of operational pH values (which were obtained from the electrode readings) to pH c , where pH c = −log([H + ]/M). The setup was purged by purified and humidified argon to avoid intrusion of carbon dioxide and limit evaporation. The data treatment involved the subtraction of a theoretical blank. All data treatment was done on the concentration (molar) scale. The required values for pK w at the different ionic strengths were calculated using the Pitzer formalism. Due to lack of material, the titrations could not be repeated, and they were verified by comparing the data to published data for hematite and goethite, as shown later.
The raw data from the titrations yielded the relative uptake or release of protons as a function of pH c . The relative number of protons adsorbed was calculated as the difference of the known amounts of protons added and the measured amounts of protons remaining in the solution. To obtain absolute proton related surface charge density or proton uptake data, a reference is needed. In the present work, the isoelectric point of the particles was determined at millimolar salt levels and in the absence of added salt to limit shifts of the isoelectric point due to potentially, specifically adsorbing counter-ions using a Brookhaven PALS (Holtsville, NY, USA) setup to fix the absolute level of the surface charge. The samples for these measurements were prepared in the same way as for the titrations. To verify whether Sr adsorption occurred, preliminary zeta-potential measurements (using the Brookhaven PALS) were carried out in solutions containing, additionally 10, 50, or 100 µM SrCl 2 . These samples were equilibrated for 24 h, i.e., prepared in the same way as described below for the adsorption experiments. A humidified argon atmosphere was used to avoid the interference of carbonate during the zeta-potential measurements. The solid concentration was 100 mg/L. The Hückel approximation was used to transfer measured mobilities to zeta-potentials. The settings of the software involved 10 runs per point with a fixed number of 30 measurements per run. The last measurement was then recorded by the software. The measurements were independently repeated by two persons, and the results agreed. The Brookhaven software reported the standard error for the results of the 10 runs, which was typically below 2 mV. The variation between the two distinct series of measurements was below 5 mV.
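For orientation, the Hückel limit used to convert the measured electrophoretic mobilities to zeta potentials can be written down in a few lines; this is a generic sketch assuming water at 25 °C, not the instrument software.

```python
def zeta_from_mobility_huckel(mobility_m2_per_Vs,
                              viscosity_Pa_s=0.89e-3,
                              rel_permittivity=78.4):
    """Hueckel approximation (kappa*a << 1): mu = 2*eps*zeta/(3*eta)  =>  zeta = 3*eta*mu/(2*eps)."""
    eps0 = 8.854e-12                       # vacuum permittivity, F/m
    eps = rel_permittivity * eps0
    return 3.0 * viscosity_Pa_s * mobility_m2_per_Vs / (2.0 * eps)

# Example: a mobility of 1e-8 m^2/(V s) (illustrative value only)
zeta = zeta_from_mobility_huckel(1.0e-8)
print(f"zeta = {zeta * 1e3:.1f} mV")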
The Pitzer parameters were calculated using Geochemist's workbench [41] from an appropriate database (thermo_phrqpitz). The database is available at https://www.gwb. com/thermo.php (version 13/04/2009) and contains the aqueous components in the system under investigation: Sr 2+ , Na + , Cl − , H + , and OH − . The use of one database ensures selfconsistency throughout the modelling procedure with the treatment of the experimental titration and adsorption data. Fe(III) was excluded from the aqueous speciation scheme due to the low solubility of iron(III) oxide minerals, which would not affect the activity coefficients of the aqueous components considered in the modelling.
Batch Adsorption Studies
Sr adsorption onto the iron(III) oxide particles was studied separately in NaCl solutions of ionic strengths 0.1, 1, 3, and 5 M. Two Sr total concentrations were used, 1 × 10 −5 and 5 × 10 −5 M, and the solid-to-liquid ratio was 7.3 g/L. A humidified argon atmosphere was used to avoid carbon dioxide, as in the titrations and the zeta-potential measurements.
The Sr concentration range and solid-to-liquid ratio were chosen to assure a reasonable range for the analytical determination of Sr, which is complicated by the high salt levels. pH c values were varied in the range from 4.0 to 11.0. The contact time was 24 h. At the end of this period, the pH was measured in the suspension and corrected in the same way as for the titrations. Subsequently, the suspensions were centrifuged (Heraeus Instruments, Hanau, Germany, Megafuge 2.0, 3500 rpm for 15 min). The concentration of Sr in the supernatant was determined by ICP-OES. Uptake of Sr was determined from the difference between added Sr and Sr measured in the supernatant. One measurement per sample was performed. Reproducibility was verified by performing the two series of measurements by two different persons. The results were reproducible. Errors within the ICP-OES measurements were between three and six percent.
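The uptake values discussed later follow directly from the mass balance described here; a minimal sketch with hypothetical ICP-OES numbers is shown below (the 7.3 g/L solid-to-liquid ratio is the one used in this work, the concentrations are invented).

```python
def percent_adsorbed(c_total_M, c_supernatant_M):
    """Fraction of Sr removed from solution: difference between added and remaining Sr."""
    return 100.0 * (c_total_M - c_supernatant_M) / c_total_M

def distribution_coefficient(c_total_M, c_supernatant_M, solid_g_per_L=7.3):
    """Kd in L/g: amount sorbed per gram of solid divided by the equilibrium concentration."""
    sorbed_mol_per_g = (c_total_M - c_supernatant_M) / solid_g_per_L
    return sorbed_mol_per_g / c_supernatant_M

# Hypothetical result for a 5e-5 M total Sr sample
c_tot, c_sup = 5.0e-5, 1.5e-5
print(f"{percent_adsorbed(c_tot, c_sup):.0f}% adsorbed, Kd = {distribution_coefficient(c_tot, c_sup):.3f} L/g")
```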
Speciation Calculations and Surface Complexation Modelling
The surface complexation model requires a proper treatment of the aqueous speciation and the activity coefficients of aqueous species. As a second sub-system, the acid-base properties of the sorbent need to be defined in an appropriate model before attempting to simulate Sr adsorption. This requires a stepwise procedure. We first defined the aqueous system, neglecting, as stated above, any reactions of dissolved Fe(III).
For Sr and the conditions investigated, we assumed that only Sr 2+ (aq) was present. This was verified by aqueous speciation calculations.
Starting from the fact that alkaline-earth elements show a poor tendency to hydrolyze, which increases with increasing atomic weight [42], it was inferred from these calculations that Sr hydrolysis is relevant for pH c > 12 for the conditions studied here. For 0.1 M NaCl, the calculations can be carried out using the Davies equation. Additional calculations involved a SIT database (see https://rdrr.io/cran/phreeqc/man/sit.dat.html, 29 May 2021) with PhreeqC [43], where we had to use SIT parameters for Ni 2+ , the one simple divalent cation for which ion interaction coefficients of the chloride ion with Ni 2+ , NiCl + , and NiOH + , i.e., the three species that could be relevant for Sr in NaCl, were available. The above sketched calculations were carried out to verify to what extent the hydrolysis species (SrOH + ) needed to be considered because the Pitzer database that was finally used does not include any hydrolysis species of Sr. In the Pitzer formalism, the interaction between Sr 2+ and Cl − is described by interaction coefficients only. For the final calculations, the activity coefficients are used to obtain the relevant activities for dissolved species for the different NaCl concentrations using the Pitzer formalism. Because the SIT is usually applied up to 3-4 M, we applied the Pitzer approach due to the 5 M dataset. In summary, for Sr only, the dissolved Sr 2+ ion has been considered in the aqueous phase. Pitzer activity coefficients were used as described above.
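For the 0.1 M case mentioned above, the Davies correction is simple enough to state explicitly. The snippet below is a generic implementation (A = 0.509 for water at 25 °C is a standard value and an assumption here), shown only to illustrate the type of activity correction involved; the final calculations used Pitzer coefficients from Geochemist's Workbench as described.

```python
import math

def log10_gamma_davies(charge, ionic_strength_M, A=0.509):
    """Davies equation: log10(gamma) = -A*z^2*(sqrt(I)/(1+sqrt(I)) - 0.3*I)."""
    sqrt_i = math.sqrt(ionic_strength_M)
    return -A * charge**2 * (sqrt_i / (1.0 + sqrt_i) - 0.3 * ionic_strength_M)

# Activity coefficient of Sr2+ (z = +2) in 0.1 M NaCl, Davies approximation
gamma_sr = 10 ** log10_gamma_davies(2, 0.1)
print(f"gamma(Sr2+) at I = 0.1 M: {gamma_sr:.2f}")
```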
Surface protonation of the solid is simulated with a generic 1-pK a (≡XOH −1/2 , where X stands for Fe, see Table 1). In general, various surface functional groups exist (singly, doubly, triply coordinated groups), even on ideal surfaces. On real surfaces, defect sites add complexity [44]. Surfaces for which the precise morphology (in terms of exposed crystal planes and their contributions to the overall surface area of a particle) is not available or which are expected to show other types of heterogeneity, do not allow a detailed treatment. Instead, one has to resort to a simpler model, ideally to the simplest one possible that allows one to describe the available data [45,46]. A basic Stern model is used for interfacial electrostatics of the system, which includes ion-specific effects via ion-pair formation constants, (as does Pitzer via ion-interaction parameters). The ion-specific effects for the surface complexation model are included in the pair formation constants of the ions of the background electrolyte [47]. Target cation (i.e., Sr) adsorption is modelled by keeping parameters that were obtained from fitting the titration results, i.e., site density, capacitance, surface hydrolysis constants, and ion-pair formation constants. Parameter fitting is done using the general parameter estimation software UCODE [48] coupled to a modified version of FITEQL2 [49]. Note: a Counter ion charge is placed in the d-plane, i.e., chloride and sodium were placed in the 1-plane (d-plane, at the head end of the diffuse layer) as outer-sphere complexes.
No activity corrections, apart from electrostatic factors, are applied to surface species, i.e., in the mass law equations, only activity coefficients for dissolved species and activities of water have to be considered to calculate the ionic strength dependence of stability constants.
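To make the statement about the mass-law equations concrete, a schematic sketch of the single 1-pK protonation step is given below. It is a simplified illustration of how the Boltzmann factor and the aqueous H+ activity enter; it omits the Stern-layer ion pairs and the self-consistent charge-potential coupling that FITEQL2 solves, and the example numbers are illustrative (log K = 7.1 corresponds to the pristine point of zero charge adopted later in this work).

```python
import math

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)

def protonated_fraction(pHc, psi0_volt, log_K=7.1, gamma_H=1.0, T=298.15):
    """Fraction of generic sites present as >XOH2(+1/2) for the 1-pK reaction
       >XOH(-1/2) + H+ = >XOH2(+1/2), with K = {XOH2}/({XOH} * a_H * exp(-F*psi0/(R*T))).
    Surface species carry no activity coefficients; only the aqueous H+ activity is corrected."""
    a_H = gamma_H * 10.0 ** (-pHc)                    # H+ activity from the concentration scale
    boltzmann = math.exp(-F * psi0_volt / (R * T))    # electrostatic factor at the 0-plane
    ratio = (10.0 ** log_K) * a_H * boltzmann         # [XOH2(+1/2)] / [XOH(-1/2)]
    return ratio / (1.0 + ratio)

# Example: pHc = 6 with a small positive 0-plane potential (illustrative numbers only)
print(f"{protonated_fraction(6.0, 0.02):.2f}")
```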
The pre-calculated activity coefficients and water activities from Geochemist's workbench calculations for the respective salt levels were applied in the FITEQL2 input files.
Figure 1 shows the XRD and Figure 2 the XPS data for the synthesized solid particles. A good fit to the powder diffractogram was obtained, considering the presence of 94 ± 2% hematite and 6 ± 2% goethite. No other crystalline phases were detected. As can be inferred from Figure 2A, the XPS measurement yields an atomic O/Fe concentration ratio of about 1.55, which is consistent with hematite plus some goethite. Figure 2B indicates that, within the penetration depth of the XPS measurement, the oxide dominates. Therefore, the results from the XRD and the XPS methods agree with each other concerning the presence of minor amounts of goethite. Figure 2A shows the presence of KCl impurities from the synthesis. KCl was not detected in the XRD measurements. Adventitious hydrocarbon is detected by XPS (Figure 2A) and is due to sample handling (i.e., transferring the sample to the XPS chamber). Finally, Figure 2C shows that no ferrous iron is detectable in the sample by XPS based on the absence of a shoulder in the Fe 2p3/2 peak [50].
Electrokinetics and Surface Titrations
The zeta potential of the synthetic iron(III) oxide sample in the absence and presence of Sr 2+ (aq) is shown in Figure 3 as a function of pH. The isoelectric point of the solid in the absence of added salt and at about 1 mM NaCl can be between pH 7.1 and 7.6. For the modelling, it is taken to be at pH 7.1 ( Figure 3A). Kosmulski [51] reports a range for points of zero charge for synthetic goethite between pH 5.6 and 10.2, and for synthetic hematite between pH 3.2 and 9.5. Therefore, the obtained results are in the range of previously reported values. The two separate series of experiments in Figure 3A show that the data were well reproducible. Figure 3B shows, for systems containing about 1 mM NaCl, that the addition of Sr leads to small but visible shifts of the isoelectric point to higher pH, indicating that Sr (specifically) adsorbs to the solid [52]. We emphasize that an accurate measurement of the isoelectric point (IEP) is difficult. Thus, the data in Figure 3A show significant scatter in the absence of Sr around the IEP. The reason for this is that, close to the IEP, the particles may aggregate more quickly and sediment, causing enhanced errors. We expect that the data points in the presence of Sr with |ζ| ≤ 10 mV are particularly affected by this effect and refrain from interpreting potential differences in the shifts of the IEP with Sr concentration. Rather, we believe the fact that for pH > 7, on average, all measurements in the presence of Sr yield results on the right side of the data in the absence of Sr indicate that there is a shift.
Next, we discuss the results of the titrations. We first show to what extent our results compare to previously published data, which were obtained on the proton concentration scale with the same background electrolyte. Figure 4A compares our experimental data for 100 mM NaCl to data available in the literature for goethite and hematite [53,54]. These sets were chosen because the data were obtained on the proton concentration scale and because high-precision equipment was used. The comparison is done by relating the absolute proton related surface charge density, i.e., the x-axis refers to the respective points of zero charge in the two papers [53,54] and to the IEP measured for the solid studied here, Figure 3A, i.e., −log ([H + ] o /M. The data agree very well, and we are therefore confident that our titration data are reliable, even if the amount of surface area in the titration vessel was relatively low, which also explains why relatively few data points are obtained. Figure 4B shows the comparison for data for the titration of the suspension in 5 M NaCl and the blank used for correction. It is clear that, despite the low amount of surface area used in the suspension titration, there is a clear difference between suspension and blank titration for the chosen conditions (as described in the experimental section). Even though for typical conditions (i.e. titrants with relatively high concentrations, such as 0.1 M) and typical titrant volumes (say, 0.2 mL), the surface area in the titration vessel should be as high as possible, it can be decreased if titrant concentration and volume added are accordingly adapted. The results shown in Figure 4 show that this was appropriately done in the present case, because literature data were reproduced for the low salt concentration ( Figure 4A), and blank and suspension titrations showed significant and consistent differences as shown, for example, for the highest salt content ( Figure 4B).
Figure 5 shows the results of the surface titrations in terms of protons adsorbed in molar concentrations as a function of pHc. The lines are calculated with the combination of an electrostatic surface complexation model (a 1-pK, 1-site, Basic Stern model) using the Pitzer approach. The site density was fixed at 6.36 sites/nm², a value typical of iron oxides for this type of simplifying model [55]. Thus, the site density is in the range given by Barron and Torrent for hematite and goethite [36]. The pristine point of zero charge was fixed at pH 7.1, i.e., the isoelectric point in the absence of Sr (Figure 3A). This value is also used directly in the 1-pK protonation/deprotonation reaction. The remaining parameters that need to be determined from the experimental data are the electrolyte binding constants and the capacitance value. We obtained a good fit for the four salt concentrations investigated using the parameters given in Table 1.
The fitted parameters for electrolyte binding indicate a weaker affinity of chloride for the surface compared to sodium (Table 1). This deviates from results reported for a comparable model on goethite [55]. The fitted capacitance is somewhat higher than usually obtained (i.e., for the above cited work [55], the reported capacitance was 1.07 F/m²). Overall, with only three fitted parameters, we obtained an excellent fit to the measured proton uptake.
Figure 6 shows the pH dependence of Sr adsorption on the synthetic iron(III) oxide at two slightly different Sr concentrations. While, at low NaCl content, the uptake tends towards 100%, with increasing salt content, the uptake is clearly suppressed, indicating a decreasing affinity to the surface. Literature data for Sr adsorption on hematite (with a point of zero net proton charge of about 8.5) in 0.1 M NaCl [54] or on goethite (point of zero net proton charge of about 9.2) in 0.01 M NaCl [56] show (i) that the onset of Sr adsorption occurs in similarly high pH ranges as in our case and (ii) that, when increasing the Sr concentration under otherwise identical conditions, the fraction of adsorbed Sr decreases significantly. This indicates limited affinity to the surface relative to other metal ions like Eu in the case of magnetite [7]. The Sr uptake data on montmorillonite at pH 5 reported in the literature [21] also indicate a substantial decrease in the distribution coefficient with increasing salt content. However, the decrease of Sr uptake by montmorillonite is clearly explained by cation exchange, which does not occur on oxide minerals. A more appropriate comparison is the data for Ba adsorption on goethite [15], where increasing NaNO3 concentration shifts the adsorption edge of Ba to higher pH values. A rather simple model is able to describe the complete set of uptake data. With only one surface complex plus chloride co-adsorption, we are able to simulate the effect of salt content over a factor of 50, going from 0.1 M to 5 M NaCl, and the effect of total Sr concentration in a small concentration range. Two options were tested in the modelling, one involving monodentate (A) and the other bidentate (B) surface complex formation. Both were seen to be equally successful. It was found that charge distribution significantly improved the fits in both cases. All parameters are given in Table 2. Thus, overall, two surface species, ≡XOHSr+3/2 and ≡XOHSr+3/2Cl- (where the strontium charge is distributed over the two planes, while the chloride charge is placed in the d-plane, see Table 2), are sufficient to obtain excellent simulations, whereas a more sophisticated model was used for trivalent actinide adsorption on magnetite [7]. This latter model includes molecular level information in the definition of surface sites and the adsorption mechanism, whereas the present model is a generic one-site model with a limited number of adjustable parameters.
Note: a Strontium charge is placed in the 0-plane and 1-plane (for the bare complex one charge unit in each, for the ternary complex 0.38 charge units in the 0-plane), b chloride charge is placed in the 1-plane (d-plane, head end of the diffuse layer). c Strontium charge is placed in the 0-plane and 1-plane (for the bare complex one charge unit in each, for the ternary complex 0.38 charge units in the 0-plane); d bidentate formulation involves a stoichiometric factor of 1 for the A-matrix and consumes two generic sites by setting the coefficient to 2 in the B-matrix [57,58]; e chloride charge is placed in the 1-plane (d-plane, head end of the diffuse layer).
Figure 7A,B shows the surface speciation within the proposed model (Tables 1 and 2). For the low chloride concentration (Figure 7A), the surface ion-pair plays no role. At the highest NaCl concentration, in turn, it dominates the surface speciation of Sr surface complexes (Figure 7B). The model indicates a trend with NaCl concentration concerning the difference between the two Sr concentrations at the high pH. For these two conditions, the surface speciation is governed by one complex. In the two other cases (speciation not shown) there is a more balanced contribution from the two surface complexes. The model-inherent effect of Sr concentration in all cases involves a slight decrease of fractional adsorption for the ≡XOHSr+3/2 species with increasing Sr concentration, whereas the ion-pair is not much affected. For the intermediate NaCl concentrations, the observed decrease due to electrostatics becomes relevant and visible.
Interestingly, the monodentate and bidentate options turned out to be equally successful in terms of goodness of fit. As an example, we plot the comparison for the data at 0.1 M NaCl and 10 µM Sr in Figure 7C. The Charge Distribution (CD) values suggest that, in the bare complexes as given in the tables, the strontium charge is equally distributed between the two planes, while the co-adsorption of chloride in both cases draws more strontium charge towards the solution side of the interface.
In both cases, equilibrium constants and CD-factors are nearly identical, the former because the A-matrix involves, in both cases, a stoichiometric factor of unity for the surface site component [57,58]. Overall, the combination of the conventional electrostatic SCMs with Pitzer aqueous solution approaches is successful in describing the available data. The increased salt levels do not seem to affect the performances of the adsorption models. Finally, Figure 7D compares our Sr uptake data in the 0.1 mM NaCl solution with data on hematite from the literature [54]. To justify the comparison, we have calculated the ratio of the amount of total Sr in the system and the surface area exposed by the respective sorbents. The ratios for the 50 µ M Sr data from the present work (0.33 µ mol/m 2 ) and one series from Karasyova et al. [54] (0.42 µ mol/m 2 ) are close. Figure 7D shows that the two datasets overlap. With the five-fold lower value, the experimental results from the present work do not differ. A ten-fold increase in Sr concentration in the literature data ( Figure 7D) causes a shift in the experimental data similar to the shift caused by an increase of the NaCl concentration by a factor 50. For goethite, we did not find data obtained on the proton concentration scale. Data on the pH scale from Carrol et al. [59] in 0.1 M NaCl, when corrected to the concentration scale, coincide at the lower pH range with our data for comparable Sr to surface area ratios, but at higher pH values our data show lower adsorption.
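The loading ratio quoted for the present data can be checked with a one-line calculation from the experimental conditions given earlier (5 × 10⁻⁵ M total Sr, 7.3 g/L solid, 21 m²/g specific surface area):

```python
sr_total_mol_per_L = 5.0e-5
solid_g_per_L = 7.3
specific_surface_m2_per_g = 21.0

surface_m2_per_L = solid_g_per_L * specific_surface_m2_per_g        # ~153 m2 of surface per litre
loading_umol_per_m2 = sr_total_mol_per_L / surface_m2_per_L * 1e6   # micromol Sr per m2
print(f"{loading_umol_per_m2:.2f} umol/m2")   # ~0.33, matching the value quoted above
```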
Sr Adsorption Experiments on the Synthetic Iron(III) Oxide
One important aspect is that the adsorption edge in our dataset does not go to zero as in many other datasets. We have previously observed similar behavior with goethite in 0.01 M NaCl solutions [56]. As shown in Figure 6, for the 50 µM total strontium concentration, the "residual" uptake is significant. It is, therefore, of interest to discuss the onset of Sr adsorption on oxide minerals. The present data suggest that Sr uptake is not zero at the lower end of the pH range investigated, particularly for the lower salt level. However, the occurrence of broad pH-edges is not uncommon. For example, in the book by Dzombak and Morel [14], in nearly all Sr adsorption edges, even a two-site model is not able to capture the data at the lower pH range. One potential reason for this observation could be that pH changes during the equilibration time towards a lower pH, such that fast initial adsorption at the higher pH would require desorption with the lowering pH. Since desorption is sluggish [60], this would result in an apparent uptake that is higher than if the pH had always been at the lower value. With respect to our uptake data, the zeta-potential results (Figure 3B) corroborate the batch experiment observations, because Sr causes a shift of the IEP at the low pH range, where adsorption is not decreasing towards zero (Figure 7). Pronounced uptake of Sr below the typical S-shape of the adsorption edge for cations (or very broad adsorption edges or distribution coefficient curves as a function of pH) was reported, for example, on ZrO2 [61], hydrated TiO2 [62], various other TiO2 sorbents [63], hydrous Al and Fe oxides [64], or γ-alumina [65]. While the pH range of the data on hematite by Karasyova et al. [54] shown in Figure 7D does not allow a conclusion concerning this issue, Carrol et al. report steeper pH-edges on goethite and amorphous silica [59]. Their data on goethite might suggest that the presence of carbonate could enhance Sr uptake at the lower pH range [59], which also agrees with the promotion of Sr uptake on goethite in the presence of selenite at low pH [56]. Mendez and Hiemstra [66] recently modeled the adsorption of alkaline earth ions on ferrihydrite and involved a high affinity site to fit the uptake data at the lower pH range in the absence of carbonate. They also showed that the metal-ion to ferrihydrite ratio determines the steepness of the adsorption edges in such scenarios. A similar approach was reported for γ-alumina [65], where the non-zero uptake of Sr covered a very broad pH range. From the available information, a definitive conclusion concerning the broad uptake curves as a function of pH, which can be considered as established according to Trivedi et al. [64], cannot be drawn. This could be a subject for further research.
Conclusions
In this work, we presented experimental data for Sr adsorption to an iron(III) oxide mineral. It was found that the synthesized solid involved crystalline hematite (>90%) and goethite. Sr adsorption on this solid was found to increase with pH, and at high pH it decreased when increasing NaCl concentration.
The present work is the first that reports experimental data and a successful model for the adsorption of Sr on an oxide surface up to very high salt levels. The model also describes charging curves in the same range of salt concentrations.
The combination of self-consistent experiments with a simple model that merges a conventional electrostatic SCM with a Pitzer approach to calculate the activities of ions in solution describes the uptake of Sr on the synthetic iron(III) oxide up to 5 M NaCl background electrolyte concentration; the model successfully simulates the strong effect of the increasing NaCl concentration on Sr uptake. Unlike Sr in the present case, for previously studied systems involving trivalent cations up to such high salt levels, nearly 100% uptake was observed even at the highest salt concentrations. The adsorption model involves few adjustable parameters, including the simulation of charging curves up to the high salt level. We conclude that using such a combination is a robust approach to simulate contaminant uptake from highly saline solutions, given that the model is calibrated self-consistently on a consistent set of experimental and solution speciation data. Previous models that used high salt levels [7,[27][28][29][30] were more complex (involving more sites and more adjustable parameters). The fact that the very simple adsorption model framework (1-site, 1-pK, Basic Stern model) with fewer adjustable parameters can also be combined with Pitzer approaches in aqueous solution lends credibility to the approach on a general level and, in particular, for nuclear waste management in geochemical settings that involve high salt concentrations. | 2021-10-15T16:01:06.284Z | 2021-10-05T00:00:00.000 | {
"year": 2021,
"sha1": "c4fbaf407f5e0b88bb253dce71f9100670089622",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-163X/11/10/1093/pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "5367bb2d8c30336c6e263d8c44dc4da07ea7c8ff",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": []
} |
253443078 | pes2o/s2orc | v3-fos-license | Structural Basis of Sequential and Concerted Cooperativity
Allostery is a property of biological macromolecules featuring cooperative ligand binding and regulation of ligand affinity by effectors. The definition was introduced by Monod and Jacob in 1963, and formally developed as the “concerted model” by Monod, Wyman, and Changeux in 1965. Since its inception, this model of cooperativity was seen as distinct from and not reducible to the “sequential model” originally formulated by Pauling in 1935, which was developed further by Koshland, Nemethy, and Filmer in 1966. However, it is difficult to decide which model is more appropriate from equilibrium or kinetics measurements alone. In this paper, we examine several cooperative proteins whose functional behavior, whether sequential or concerted, is established, and offer a combined approach based on functional and structural analysis. We find that isologous, mostly helical interfaces are common in cooperative proteins regardless of their mechanism. On the other hand, the relative contribution of tertiary and quaternary structural changes, as well as the asymmetry in the liganded state, may help distinguish between the two mechanisms.
Introduction
Cooperative ligand or substrate binding and regulation of affinity by effectors are important functional properties of some enzymes, receptors, and transporters that allow these macromolecules to respond differently to different environments, a property that J. Wyman called the "cybernetics of biological macromolecules" [1], and J. Monod "the second secret of life". Positive or negative homotropic cooperativity requires that the protein binds its ligand with stoichiometry higher than 1:1 and this is usually achieved because the protein is an oligomer made up of identical or similar subunits; the ligand affinity of the oligomer is then regulated by the number of bound ligands. Moreover, the ligand affinity of the oligomer may also be regulated by effectors binding to sites different from that of the principal ligand (heterotropic regulation). In 1963, Monod called "allosteric" the functional regulation achieved in enzymes by the binding of effectors to sites different from that of the substrate [2]; in a later work, Monod, Wyman, and Changeux gave a much more precise definition of allostery [3], which we adopt in this work (see below).
Homotropic cooperativity and heterotropic control can be achieved by several molecular mechanisms or types of mechanisms: sequential, concerted (as in Monod's allosteric model [3]), or based on ligand-linked association/dissociation (these will not be considered in the present work). Sequential mechanisms hypothesize that ligation of one site causes a tertiary structure change in the liganded subunit that is transmitted to its neighbors and alters their ligand affinity. For a four-site oligomer, the ligand partition function of a sequential model can be written as:

P(x) = 1 + 4Kx + 6αK²x² + 4βK³x³ + γK⁴x⁴ (1)

where x is the ligand activity, P(x) the sum of all ligation intermediates expressed as multiples of the unliganded one, K the equilibrium association constant for the (non-cooperative) binding step, and α, β, and γ are unit-less interaction factors that define positive (>1) or negative (<1) homotropic cooperativity at each step of ligation. Sequential models make structural predictions on the interaction factors; e.g., in the original Pauling "tetrahedral" model of Hb, which is the simplest possible model of this type, one has: β = α³, γ = α⁶.
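As an illustration of how Equation (1) is used, the short sketch below (not part of the original analysis; K and α are arbitrary illustrative values) evaluates the tetrameric sequential partition function with Pauling's tetrahedral relations and returns the fractional saturation Y(x) = (x/4)·dlnP(x)/dx.

```python
# Minimal sketch (illustrative parameters, not fitted values from the paper):
# fractional saturation of a tetramer from the sequential partition function,
# Equation (1), with Pauling's "tetrahedral" relations beta = alpha^3, gamma = alpha^6.

def saturation_sequential(x, K=1.0, alpha=3.0):
    """Return the fractional saturation Y at ligand activity x."""
    beta, gamma = alpha**3, alpha**6          # Pauling tetrahedral model
    P = 1 + 4*K*x + 6*alpha*K**2*x**2 + 4*beta*K**3*x**3 + gamma*K**4*x**4
    dP = 4*K + 12*alpha*K**2*x + 12*beta*K**3*x**2 + 4*gamma*K**4*x**3
    return x * dP / (4 * P)                    # Y = (x/4) dlnP/dx

if __name__ == "__main__":
    for x in (0.05, 0.2, 1.0, 5.0):
        print(f"x = {x:5.2f}  Y = {saturation_sequential(x):.3f}")
```

With α > 1 the curve is sigmoidal (positive cooperativity); with α < 1 the same expression describes negative cooperativity.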
The ligand partition function of the concerted model in Monod's original formulation for a n-sites oligomer [3] is:

P(x) = (1 + KR x)ⁿ + L0 (1 + KT x)ⁿ (2)

where we recognize two independent states (called R and T) that bind the ligand independently, non-cooperatively, and with different affinities (determined by the association constants KR and KT, respectively), whose equilibrium (in the absence of ligand) is governed by the allosteric constant L0 (notice that for consistency with Pauling [4] and Koshland [5], we converted the dissociation equilibrium constants originally preferred by Monod et al. [3] to association constants). Several variants of this reaction scheme are possible: e.g., in some cases the ligand affinity of the T state may be negligible, or the T state may present functional asymmetry and reduced binding stoichiometry, or the T state may be populated only in the presence of heterotropic effectors, in which case L0 is replaced by the function [Y]·TKY, where TKY represents the equilibrium association constant of the effector Y to the unliganded T state protein.
The crucial differences between the two models are: (i) the concerted model imposes functional and structural symmetry (all subunits of the oligomer share the same structure/state), whereas the sequential model postulates asymmetry in the partially liganded derivative(s); and (ii) the concerted model postulates conformational equilibrium and population selection by the ligand, whereas the sequential model forbids this.
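For comparison, a minimal sketch of the concerted model of Equation (2) is given below (again with arbitrary illustrative constants); it returns the fractional saturation of an n-site oligomer together with the ligand-dependent ratio of T to R populations, which shows how the ligand progressively selects the R state.

```python
# Minimal sketch (illustrative parameters): fractional saturation of an
# n-site concerted (MWC) oligomer, Equation (2), written with association
# constants KR, KT and the allosteric constant L0 = [T0]/[R0].

def saturation_mwc(x, n=4, KR=10.0, KT=0.1, L0=1.0e4):
    """Return (Y, [T]/[R]) at ligand activity x."""
    r = (1 + KR * x) ** n
    t = (1 + KT * x) ** n
    Y = (KR * x * (1 + KR * x) ** (n - 1) + L0 * KT * x * (1 + KT * x) ** (n - 1)) / (r + L0 * t)
    L_of_x = L0 * t / r            # ratio of T to R populations at this x
    return Y, L_of_x

if __name__ == "__main__":
    for x in (0.01, 0.1, 1.0, 10.0):
        Y, L = saturation_mwc(x)
        print(f"x = {x:6.2f}  Y = {Y:.3f}  [T]/[R] = {L:.3g}")
```

With suitable parameter choices the two sketches produce binding curves that are practically superimposable, which is the point made below about the difficulty of discriminating the two mechanisms from equilibrium data alone.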
The main differences between the two models of cooperativity are summarized in Table 1. We remark that in Monod's original paper [3], the term "allostery" is applied to the concerted reaction mechanism, considered alternative and incompatible with sequential binding, and that Koshland never used the term "allostery" to refer to the sequential reaction mechanism. Intermediate reaction schemes are impossible, but it is possible that the ligand binding reaction of one and the same protein sums up both mechanisms, e.g., because of an allosteric T-R equilibrium, where either state presents sequential cooperativity [6][7][8][9].

Table 1. Similarities and differences in the models of cooperativity.
 | Concerted Models | Sequential Models
tertiary structure symmetry (all subunits share or do not share the same tertiary structure in the various ligation states) | necessary for unliganded, partially liganded and fully liganded states (asymmetry within T may be tolerated) | necessary asymmetry of ligation intermediates; asymmetry in the liganded state is tolerated
quaternary conformation equilibrium in the absence of the ligand | necessary, for all ligation intermediates, described by the allosteric constant L (see Equation (2)) | absent (see Equation (1))
cooperativity in the absence of quaternary structure change | impossible | possible
positive homotropic cooperativity | possible | possible
negative homotropic cooperativity | usually impossible (some exceptions may be considered) | possible
heterotropic regulation | almost always present | possible but not common
quaternary enhancement/constraint | usually quaternary constraint (but enhancement possible) | both possible
structural differences between the fully liganded and fully unliganded states | present (the main difference occurs between the R and T state, irrespective of ligation) | present

It is easy to recognize that readily accessible measurements are unable to determine whether a given cooperative protein obeys a concerted or a sequential mechanism; in particular, a structural difference between the fully liganded and fully unliganded state, in the absence of further analysis, is not indicative. Moreover, it should be considered that both concerted and sequential phenomena may coexist in the same protein.
In this work, we explore two related hypotheses, namely (i) that a protein obeying the concerted mechanism may present a major quaternary structure change, coupled with smallish tertiary structure changes, whereas the opposite may occur in a protein obeying a sequential mechanism; and (ii) that a protein obeying a sequential mechanism, but not one obeying an allosteric mechanism, may present significant internal asymmetry in the partially, and possibly also in the fully liganded state. To explore these hypotheses, which represent the basic premises of the concerted and sequential models, it is necessary to assign the reaction mechanism independently of structural information; this can be done only in a handful of cases.
We analyzed the structural information available for a set of proteins whose reaction mechanism is known, albeit with different degrees of certainty, to identify possible structural features that may be common to proteins presenting concerted structural transitions and may differentiate them from proteins obeying a sequential reaction mechanism. The set of proteins we analyzed is summarized in Table 2; all of them present homotropic cooperativity, either of the positive or negative type; moreover, all of them are symmetric or quasi-symmetric at least in either the fully unliganded or fully liganded state. Since there are very few proteins presenting positive homotropic cooperativity for which the sequential mechanism is proven beyond doubt, we mainly selected as examples of this mechanism only proteins presenting negative homotropic cooperativity (see Table 1).
Materials and Methods
Structure analyses and computations were carried out using the atomic coordinates from the PDB [10] and the program Insight. For each protein, we tested several superpositions and measured the relative RMSDs of the α-carbons: (i) of the isolated subunits of the unliganded oligomer with each other to assess the intramolecular asymmetry of the unliganded state; (ii) of the isolated subunits of the fully liganded oligomer with each other to assess the intramolecular asymmetry of the liganded state; (iii) of the isolated subunits of the unliganded oligomer vs. the isolated subunits of the fully liganded oligomer, to assess the extent of the ligand-linked tertiary structure change; (iv) of the whole unliganded oligomer vs. the whole fully liganded oligomer, to assess the extent of the ligand-linked overall structure change. The RMSDs of the tertiary and quaternary structure changes are not additive, i.e., one cannot extract from these measurements the fraction of the RMSD due to the quaternary structure change; yet one can express the tertiary RMSDs relative to the overall RMSDs.
Where the structures of partially liganded derivatives were available, these were analyzed in the same way as those of the unliganded and fully liganded derivatives (see Table 3).
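The pairwise comparisons listed above amount to superposing two sets of equivalent α-carbon coordinates and computing the RMSD of the best fit. The sketch below illustrates such a computation with the standard Kabsch algorithm; it is only a schematic stand-in for the calculations actually performed with the program Insight.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD of two (N, 3) alpha-carbon coordinate arrays after optimal
    rigid-body superposition (Kabsch algorithm). P and Q must list
    equivalent atoms in the same order."""
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    # remove the centroids
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)
    # optimal rotation from the SVD of the covariance matrix
    H = Pc.T @ Qc
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against improper rotations
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    diff = Pc @ R.T - Qc
    return np.sqrt((diff ** 2).sum() / len(P))
```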
Human Hemoglobin
Vertebrate Hbs, among which human Hb is the best studied example, are tetramers formed by two identical αβ heterodimers, called α1β1 and α2β2. Each subunit has a characteristic all-helical structure (the globin fold), made up of 8 helices named A through H [11]. The monomer-monomer interface of the heterodimer is isologous (i.e., symmetric [3]) and is contributed by helices B, G and H from each subunit, with contacts α1B-β1H, α1G-β1G, α1H-β1G. The interdimeric interface is again isologous, with contacts α1C-β2FG and α1FG-β2C, where FG indicates the interhelical segment interposed between helices F and G [12]. Several quaternary structures are known, only one of which has the low affinity characteristic of the T state [13]. The structural transition that converts the T into the R state can be described as a sliding and rotation of one dimer with respect to the other, with changes in the contacts at the α1β2 (and the symmetric α2β1) interface [6,12]. The isolated subunits and the αβ dimers have the same high affinity of the R state tetramer, thus Hb presents the quaternary constraint [14,15].
There is extremely convincing proof that human Hb behaves as a concerted system: (i) binding of heme ligands is cooperative, and heterotropic regulation is exerted by several effectors (protons, chloride, CO2, and 2,3-DPG); (ii) O2 binding in crystals or in silica gels, conditions that prevent the quaternary structure change, is non-cooperative and occurs with the low affinity characteristic of the T state [16,17]; (iii) the ligand-independent allosteric structure change has been recorded by a variety of means [18]; (iv) the structure of doubly liganded intermediates could be determined and is symmetric and T state (i.e., there is no intermediate quaternary state nor any significantly asymmetric structure) [19][20][21][22].
Superposition of the subunits in the liganded and unliganded T and R state reveals that tertiary changes are very modest, and essentially limited to a small region of each subunit, composed by the F helix and the FG corner (the so-called allosteric core of each subunit) [23,24]; by contrast, quaternary structure changes are substantial (see Table 3 and Figure 2). Only approximately one fourth of the total RMSD of the α-carbons between the liganded R structure and the unliganded T structure is accounted for by the tertiary structure change within each subunit, the remaining three fourths being accounted for by the quaternary, allosteric, structure change (see Table 4). Moreover, asymmetries in the structure of subunits within each quaternary state are very minor (Table 3). Indeed, the global (tertiary + quaternary) ligand-linked structure changes cause a RMSD (on the α-carbons) of 2.43 Å (T0 vs. R4), reduced to only 0.51 Å under conditions in which the quaternary structure change is prevented (T0 vs. T4) [25]. Ligand-linked tertiary structure changes, in the presence of quaternary transition, cause an average RMSD of 0.67 Å, corresponding to 28% of the overall structural change (Table 4). Ligand-linked tertiary structure changes, in the absence of the quaternary structure transition, cause an average RMSD = 0.28-0.41 Å in the T state and RMSD = 0.4 Å in the R state (measured for the β4 homotetramer, not reported in Table 3). Ligand-independent structure differences due to asymmetry are minor. All these observations are fully consistent with the expectation that in an allosteric, MWC-like macromolecule, the quaternary structure change dominates in amplitude over the tertiary structure changes, be they ligand-dependent or ligand-independent.

Figure 2. Superposition of the α1β1 dimers from R state oxy Hb (PDB code 2DN1; pale red) and T state deoxy Hb (PDB code 2DN2; pale gray) shows that the tertiary structure changes are small; the large quaternary structure change is evident from the poor superposition of the α2β2 dimers from the same structures (vivid red and vivid gray).
The case of Hb is also relevant for the important information made available by rapid kinetic experiments and molecular dynamic simulations, which consistently point out that quaternary (allosteric) structural changes occur over a time window at least 100-fold faster than tertiary structural relaxations. We comment on this point more extensively in Section 4.

Table 4. Relative importance of tertiary structure changes with respect to overall ligand-induced structure changes. Structure changes are expressed as average RMSDs. Tertiary changes are those occurring between single subunits. Overall structure changes comprise the sum of tertiary and quaternary changes induced by ligation in the oligomers. R1: relative importance of the ligand-induced tertiary structure changes, measured as the averaged tertiary RMSDs in the liganded vs. unliganded state divided by the overall RMSD in the liganded vs. unliganded state. R2: asymmetry of subunits in the unliganded state, measured as the averaged tertiary RMSDs in the unliganded state divided by the overall RMSD in the liganded vs. unliganded state. R3: asymmetry of subunits in the liganded state, measured as the averaged tertiary RMSDs in the liganded state divided by the overall RMSD for liganded vs. unliganded state. *: measured for the catalytic subunits only.
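In terms of computation, the ratios defined in the caption of Table 4 reduce to simple divisions of averaged RMSDs by the overall liganded-vs-unliganded RMSD; a minimal sketch follows (the Hb numbers quoted in the text are used for R1, while the intra-state asymmetry values below are placeholders, since Table 3 is not reproduced here).

```python
def tertiary_to_overall_ratios(rmsd_tert_ligand_vs_apo,
                               rmsd_tert_within_apo,
                               rmsd_tert_within_ligand,
                               rmsd_overall):
    """R1, R2, R3 as defined in the caption of Table 4: averaged tertiary
    RMSDs divided by the overall (tertiary + quaternary) RMSD between the
    fully liganded and fully unliganded oligomers."""
    R1 = rmsd_tert_ligand_vs_apo / rmsd_overall
    R2 = rmsd_tert_within_apo / rmsd_overall
    R3 = rmsd_tert_within_ligand / rmsd_overall
    return R1, R2, R3

# Hb example from the text: average tertiary RMSD 0.67 A, overall RMSD 2.43 A,
# giving R1 of about 0.28; the 0.2 values for R2 and R3 are placeholders.
print(tertiary_to_overall_ratios(0.67, 0.2, 0.2, 2.43))
```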
Aspartate Transcarbamylase
Asp transcarbamylase (ATC) from E. coli presents sigmoidal activity curves, indicative of positive homotropic cooperativity for aspartate, heterotropic inhibition by CTP, and heterotropic activation by ATP; moreover, it presents large tertiary and quaternary structure changes originally detected because of a ligand-induced change in the sedimentation coefficient [26], and successively confirmed by X-ray crystallography. The presence of a ligand independent equilibrium between the T and R quaternary structure was demonstrated by fluorescence spectroscopy on a site-directed mutant enzyme having a small L 0 [27]. Thus, ATC is a proven case of an allosteric enzyme obeying the concerted reaction mechanism.
The three-dimensional structure of the enzyme is peculiar and complex: it has a central core formed by two trimers of identical catalytic subunits, named C1 through C6. The intersubunit interfaces in each trimer are heterologous. Each trimer has rotational symmetry of 120° around an axis perpendicular to the plane of the subunits. The two trimers are arranged as a symmetric homodimer, by rotation of one trimer by 180° with respect to the other; and the trimer-trimer interface is isologous [28]. Due to this peculiar and highly symmetric arrangement, which we called a dimer of rings [29], a structural unit can be identified, made up of two subunits, one from each trimer, forming a symmetric homodimer; the interfaces between these homodimers are all isologous (we called them "extended isologous" to stress the fact that the unit required by the isologous interface is not a single subunit, but an oligomer, in this case a dimer). Three homodimers of regulatory subunits named R1 through R6 are located at the corners of the dimer of trimers. Surprisingly, the homodimer of regulatory subunits, in spite of having isologous interfaces, is asymmetric (see Table 3).
The quaternary structure change could be studied thanks to the competitive inhibitor PALA (N-phosphonacetyl-L-aspartate), which mimics Asp. In the T to R transition, the ring-ring interface undergoes a rotation of 12°; this is associated with a significant tertiary structure change in the catalytic subunits, whose two domains close on each other by 6.8° [28].
The structural mechanism of ATC cooperativity is more complex than that of Hb, as demonstrated by studies carried out on the substructure composed of the isolated trimer of catalytic subunits. The trimer has high ligand affinity, as characteristic of quaternary constraint, but the tertiary structure of the unliganded subunits in the trimer more closely resembles that of the T rather than the R state ATC, whereas the structure of the PALA-liganded trimer resembles that of the PALA-liganded R state ATC [30,31]. Thus, the ligand-linked tertiary structure change of ATC freely occurs in the non-cooperative trimer, and the role of the full dodecameric assembly would be that of modulating the flexibility of the catalytic trimers.
The essential results of our analysis of ATC are as follows: global (tertiary + quaternary) ligand-induced structure change causes a RMSD of 6.20 Å; tertiary changes in the presence of quaternary transition (catalytic and regulatory subunits) cause average RMSDs of 1.7-1.85 Å. The ratio between these two values (i.e., the fraction of the total RMSD that may be ascribed to ligand-dependent tertiary structure changes) is 0.28, identical, within errors, to the value obtained for Hb (see Tables 3 and 4). Tertiary structure asymmetry of the catalytic subunits, in both the T and R states, is minor if compared to the overall ligand-dependent structure change. The regulatory subunits form a strongly asymmetric dimer in all states of the enzyme; thus, they should probably not be considered here. Thus ATC, like Hb, presents all the structural features that Monod and co-workers [3] postulated for a concerted, allosteric system.
Phosphofructokinase
Phosphofructokinase (PFK), the key regulatory enzyme of glycolysis, presents sigmoidal activity curves with respect to fructose-6P (but not with respect to ATP), and heterotropic regulation by ADP and P-enol pyruvate (PEP) [32]. Moreover, the enzyme from B. stearothermophilus, whose structure has been determined, presents a large quaternary structure change on substrate ligation that has been interpreted as a T to R transition; hence the enzyme is considered to obey the concerted reaction mechanism. Cooperativity in the binding of fructose-6P is only observed in the presence of inhibitors (PEP or its analogue P-glycolate, PGC). In the absence of substrates and inhibitors, the enzyme crystallizes in the putative R state; thus, PFK appears to be a case of inhibitor-induced allostery, and in Equation (2) one should replace L0 with [PGC]·TKPGC (see Section 1). Unfortunately, no structure of any ligation intermediate is available.
PFK is a homotetramer whose four monomers (named A, B, C, and D) assemble in two dimers (AB and CD). All interfaces are isologous [33]. The putative T-R structural transition is described as a rigid body rotation of one dimer with respect to the other by approximately 7° [32]. A very interesting feature of the enzyme is that the binding sites of fructose-6P and PEP or its analogue PGC are located at the interfaces: those of F6P at the interdimeric interfaces, at the contact regions between monomers A-D (and the symmetry equivalents B-C); those of PEP/PGC at the intradimeric contact regions between monomers A-B (and the symmetry equivalents C-D). As is characteristic of isologous interfaces, each binding site occurs twice, thus the tetramer has four F6P and four PEP/PGC binding sites. The F6P and PEP/PGC binding sites are relatively close, and binding of each respective ligand alters both, thus explaining the inhibition by PEP; indeed, PFK presents the phenomenon of R-binding only, and apparently binding of PEP/PGC and F6P is mutually exclusive. An interesting structural feature of T state, PGC-inhibited, PFK is that, contrary to the PGC-free R state, it is asymmetric, both within the dimer and across the dimers (Table 3).
As typically found for multimeric proteins presenting ligand binding sites at the inter-monomeric (or inter-dimeric) interfaces, PFK presents quaternary enhancement for F6P, instead of quaternary constraint [34,35], and the movement of one dimer against the other caused by the quaternary structure change disrupts the F6P binding site. Quaternary enhancement is usually considered incompatible with the allosteric model [15,36]; however, it seems to us that the analysis by Edelstein and Edsall [37] may be extended to make quaternary enhancement compatible with the allosteric model. We give a simplified demonstration of this point in Figure 3 and its legend, which represents the relationships between ligation, conformational equilibrium and dissociation into dimers of an allosteric tetramer like PFK.
Our analysis of the ligand-induced structure changes gives the following results: global (tertiary + quaternary) changes cause a RMSD of 1.55 Å; tertiary structure changes, in the presence of quaternary transition, cause an averaged RMSD of 0.94 Å. Ligand-independent tertiary structure differences due to asymmetry cause a RMSD of 0.28 Å in the T state, whereas the R state is symmetric (Table 3). It is important to remark that ligand-free PFK crystallizes in the R state, and the T state is induced by the binding of the inhibitor PEP or PGC. Thus, inhibitors convert the fully unliganded, symmetric R state tetramer into an asymmetric T state one. We suspect that in PFK, sequential and concerted phenomena may both be present, at different levels: sequential cooperativity either of the positive or negative type may affect the binding of PGC, as suggested by the T state asymmetry, whereas concerted cooperativity may be the major determinant of homotropic cooperativity for F6P in the tetramer. Consistent with the hypothesized coexistence of sequential and allosteric cooperativity, the extent of the ligand-induced quaternary changes is of the same order of magnitude as the tertiary ones, instead of much larger as in the purely concerted systems of Hb and ATC.
A tetramer like PFK or Hb, which reversibly dissociates into dimers, presents the equilibria depicted in Figure 3. In this Figure, R0 indicates the unliganded R state tetrameric oligomer; R4 the fully liganded R state tetrameric oligomer; similar conventions are applied to the T state (T0 and T4) and to the dimer (D0 and D2). KR represents the ligand association constant for the R state tetramer, KT for the T state tetramer, and KD for the dimer. TKD,0 represents the equilibrium constant for the dissociation of the unliganded T state tetramer into dimers; similar conventions are applied to RKD,0, TKD,4, and RKD,4. The allosteric constants for the unliganded and fully liganded states are labelled as L0 and L4, respectively.
The scheme, which for simplicity only includes the unliganded and fully liganded states, makes it obvious that: (i) positive homotropic cooperativity demands that KT << KR, L0 >> 1, and L4 << 1.
(ii) L4 = L0·KT⁴/KR⁴; (iii) L0 = RKD,0/TKD,0 and L4 = RKD,4/TKD,4. From these relationships, we obtain: RKD,0 >> TKD,0; RKD,4 << TKD,4; and RKD,4/TKD,4 = (RKD,0/TKD,0)·KT⁴/KR⁴. Quaternary enhancement implies that the isolated dimers/subunits have lower ligand affinity than the oligomer, whereas the opposite applies to quaternary constraint; given the above relationships, quaternary enhancement also implies that the liganded state is more tightly associated than the unliganded state (has lower tendency to dissociate into dimers), whereas the opposite occurs for quaternary constraint.
The allosteric model was originally conceived under the assumption of quaternary constraint (that at the time had been demonstrated only for human HbA). In the case of "perfect" quaternary constraint we have: KD = KR; and TKD,4 = TKD,0·(KR⁴/KT⁴). Thus, as pointed out by S.J. Edelstein and J.T. Edsall [37], the tetramer-dimer dissociation constant of at least one of the two states can be ligand independent, whereas the other must vary with ligation. In perfect quaternary constraint, the tendency of the T state protein to dissociate increases with ligation until it exceeds that of the R state, causing L to decrease with ligation.
Although less straightforward, the opposite condition of "perfect" quaternary enhancement is not incompatible with allostery: indeed, in this case, we would have KD = KT and RKD,4 = RKD,0·(KT⁴/KR⁴), and the tendency of the R state protein to dissociate decreases with ligation, causing the fully liganded R state to become more stable than the fully liganded T state, as it occurs in PFK. For the above system, the law of mass conservation dictates a closure relation in which x, KR, and KT have their usual meaning, KD is the equilibrium association constant of the ligand with the dimer, Kdiss. is the equilibrium dissociation constant of the tetramer into dimers in the absence of the ligand, Dtot is the total protein concentration (activity) expressed on a dimer basis and D0 is the absolute molar concentration (activity) of the unliganded dimers. Solving this equation for D0 allows one to calculate the absolute molar concentration of each species in the reaction scheme, and to verify that, under appropriate experimental conditions, the model predicts allosteric homotropic cooperativity no matter whether KD = KR (pure quaternary constraint) or KD = KT (pure quaternary enhancement).
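A numerical sketch of this calculation is given below. The explicit mass balance used in the sketch is an assumption on our part (dimers counted directly, each tetramer counted as two dimers, with Kdiss. referred to the unliganded R-state tetramer so that T-state species enter through L0); the constants are purely illustrative.

```python
# Numerical sketch of the dimer <-> tetramer MWC scheme of Figure 3.
# The mass balance below is one plausible concrete form (an assumption of this
# sketch, not taken verbatim from the text): dimers are counted directly and
# each tetramer counts as two dimers, with Kdiss referred to the unliganded
# R-state tetramer so that T-state species enter through L0.
from scipy.optimize import brentq

def species(x, Dtot, KR=10.0, KT=0.1, KD=10.0, L0=1.0e4, Kdiss=1.0e-6):
    def balance(D0):
        dimers = D0 * (1 + KD * x) ** 2
        tetramers = (D0 ** 2 / Kdiss) * ((1 + KR * x) ** 4 + L0 * (1 + KT * x) ** 4)
        return dimers + 2 * tetramers - Dtot
    D0 = brentq(balance, 0.0, Dtot)   # free unliganded dimer concentration
    R0 = D0 ** 2 / Kdiss              # unliganded R tetramer
    T0 = L0 * R0                      # unliganded T tetramer
    return D0, R0, T0

# Fractional saturation can then be assembled from D0, R0, T0 and the binding
# polynomials of the dimer, R and T states; KD = KR mimics pure quaternary
# constraint, KD = KT pure quaternary enhancement.
print(species(x=1.0, Dtot=1e-5))
```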
Glycogen Phosphorylase
Glycogen phosphorylase is a very interesting example of a putative concerted enzyme. The non-phosphorylated enzyme (GPb) is activated by phosphorylation (GPa). The quaternary structures of its functionally most important states are the homodimer and the less active homotetramer. Glycogen or oligosaccharides promote dissociation of tetramers to more active dimers. The putative T-state and R-state structures have been characterized; the former is a symmetric homodimer, the latter a homotetramer, described as a dimer of homodimers, whose monomers present minor asymmetries. Thus, the structural comparison is usually carried out between the T and R state homodimers. The 850-residue subunit of mammalian GP has an α/β structure; it contains a PLP cofactor, required for catalysis. The structure includes two domains, the larger N-terminal being responsible for the intersubunit interface. The intersubunit interface is isologous and is provided by two contact regions formed by α-helices 7 (the "tower" helices; residues 262-276 in rabbit muscle GP; see Figure 4) of each subunit, which contact each other in antiparallel directions, and the cap region of each subunit (residues 35-46) contacting the α-helix 2 (residues 47-78) of the opposite subunit [38][39][40].
The quaternary structure change may be described as a rotation of one monomer with respect to the other by 10°; this movement causes the cap'-α-helix2 contact region to become tighter by 1 Å and that between the tower helices to move apart by 3 Å [38]. Moreover, in the T quaternary structure the two tower helices are oriented in an almost (anti)parallel direction, forming an angle of −20°, whereas in the R quaternary structure they are almost perpendicular to each other (angle = −80°). The catalytic site is quite far from the interface, but the C-terminus of the long tower helix is contiguous with residues that gate the access to the catalytic site (Pro 281 and Asp 283).
The binding sites of some heterotropic effectors (Ser 14, which can be phosphorylated; inorganic phosphate) lie in close proximity of the cap'-α-helix2 contact region, and this explains their effect on the quaternary structure change.
Unfortunately, there are no available structures of ligation intermediates of GPb, and the ligand-independent allosteric interconversion of the two quaternary structures has not been demonstrated, thus it cannot be unequivocally confirmed that GPb obeys a pure concerted reaction mechanism. Moreover, as demonstrated by the structural parameters reported in Tables 3 and 4, GPb presents R state asymmetry, a feature that is unexpected in an allosteric protein obeying the concerted mechanism, and characteristic instead of proteins obeying a sequential reaction mechanism. The global (tertiary + quaternary) ligand-induced structure change has RMSD = 2.61 Å; tertiary structure change, in the presence of quaternary transition, has an average RMSD = 1.33 Å; the ratio of the latter to the former yields ≃50%, a value higher than that we measured for pure allosteric macromolecules like Hb or ATCase, but similar to that of PFK (see Table 4). This finding, and the liganded state asymmetry, may suggest that the homotropic cooperativity of GP could result from the addition of a sequential component to the concerted mechanism, possibly to be envisaged as sequential cooperativity within the R state.
The Chemotactic Asp Receptor from S. typhimurium and E. coli
In many bacteria, the chemotactic aspartate receptor is a homodimeric protein exposed on the cell surface that presents negative homotropic cooperativity for Asp. Koshland and co-workers described the cases of the Asp receptor from E. coli and S. typhimurium [41][42][43]; both are homodimers presenting negative cooperativity, which in the case of the E. coli receptor is so marked as to induce half-of-the-sites reactivity. The extracellular domain of S. typhimurium Asp receptor is of special interest since the unliganded and (singly) liganded structures have been solved (PDB entries 1LIH and 2LIG). The two monomers of the homodimer have a four-helix bundle structure, and the dimer presents isologous contacts mainly between helices 1 and 4 from each monomer. In the absence of Asp, the homodimer forms a perfectly symmetric structure, but the binding of Asp causes the homodimer to become asymmetric, and the affinity of the two binding sites, which reside in close proximity of the monomer-monomer interface, diverges by 20-fold [42]; moreover, the soluble domain of Asp receptor presents quaternary enhancement, its tendency to dissociate into monomers being diminished in the presence of saturating Asp [44]. In the presence of Asp, the extracellular domain of the S. typhimurium receptor presents complete occupancy of the high affinity site and partial occupancy of the low affinity site. Asymmetry is present both in the tertiary and quaternary structure of the liganded macromolecule: indeed, the two monomers of the liganded homodimer are no longer rotated by 180° with respect to each other but only by 178°. The ligand-induced quaternary structure change causes the two monomers to become more parallel and closer to each other, more markedly at the extremity that is farther from the membrane crossing domain. Interestingly, in this case, the interfaces are isologous but non-symmetric, because the interatomic distances differ even though the interacting residues are the same. This effect is amplified if one measures inter-residue distances for residues that are farther from the monomer-monomer interface; e.g., the distance between CαAla85 A-CαThr154 B is 23.2 Å in the unliganded state and 19.9 Å in the liganded state; the values for the distance between CαAla85 B-CαThr154 A are 23.2 Å and 13.9 Å, respectively (A being the high affinity subunit, B the low affinity one).
The Asp binding site lies across the monomer-monomer interface and both subunits contribute to it [43,45]; the relevant residues are Arg64, Ser68, and the segment 149-154 from one subunit and Arg69 and Arg73 from the partner subunit. Ligand binding to one site causes the other site to shrink, making a poorer fit for the second Asp ligand (see Figure 5).
Our analysis yields the following results: global (tertiary + quaternary) ligand-induced structure change has RMSD = 2.85 Å. The unliganded state of the homodimer is symmetric, whereas the liganded state is asymmetric, and one of the two subunits presents incomplete ligand saturation, nearing half-of-the-sites reactivity. Ligation of the first molecule of Asp causes a tertiary structure change with RMSD in the order of 2.5 Å in both subunits (Table 3); the structure difference of the two subunits in the half-liganded state is assigned a RMSD = 2.44 Å. Consistent with the expectations for a protein presenting sequential (negative) homotropic cooperativity, the quaternary structure change in the Asp receptor, though evident, is responsible for a small fraction of the total RMSD between the liganded and unliganded structures, the largest structure changes occurring at the tertiary level; moreover, the asymmetry (only present in the liganded state) causes a large tertiary structure difference between the two subunits within the structure of the half-liganded state. These features may be considered typical of the sequential reaction mechanism.
Asp Semialdehyde Dehydrogenase
Asp semialdehyde dehydrogenase (ASADH) is a homodimeric bacterial enzyme presenting negative cooperativity. The structures of ASADH from E. coli in the substrate bound and free state are available, with pdb codes 1T4B or 1T4D, 1GL3 [46]. The enzyme presents several very interesting features. The monomers have a predominantly β-sheet structure, and the isologous monomer-monomer interface is provided by two extended β-sheets; this is at variance with the other proteins considered in this work, which have predominantly helical interfaces. Each monomer is composed of two domains, each of them further split into two sub-domains (labelled N1, N2 and C1, C2). Each monomer presents one binding site for NADPH and one for ASA; however, while two molecules of the former can simultaneously bind to the dimer, only one of ASA can bind, due to the half-of-the-sites reactivity of the enzyme. The authors suggest that this property is important for the release of NADP + , and that asymmetry occurs in an alternate fashion, one site being charged with the substrates, while products are released from the other site.
The substrate(s)-free enzyme is symmetric (closed form; pdb code 1GL3); the enzyme containing two molecules of NADPH and one of the ASA analogue S-methyl-L-Cys sulfoxide (SMCS) is asymmetric [46]. NADPH binding causes a set of symmetric concerted tertiary structure changes that can be described as reciprocal rotations and torsions of the sub-domains and domains of each subunit. SMCS binding to one subunit causes the movement of a loop towards the substrate which forbids the same movement in the opposite subunit (open form; pdb codes 1T4B and/or 1T4D).
Our analysis yields RMSD = 1.49 Å for the global (tertiary + quaternary) ligandinduced structure change, to be compared with RMSD = 0.94-1.3 Å for the tertiary structure change. Ligand-independent tertiary structure differences due to asymmetry are described by RMSD = 0.27 Å, in the unliganded state, and RMSD = 0.62 Å in the half-liganded state (structure 1T4B presents SMCS on subunit A only because of half-of-the-sites-reactivity). These values are reminiscent of those of the Asp receptor and should be interpreted along similar lines.
Bacterial D-Lactate Dehydrogenases
Bacterial lactate dehydrogenases (LDHs) are extremely interesting enzymes that would deserve a dedicated analysis. They belong to two distinct groups, with respect to their product: L-lactate or D-lactate. Bacterial L-lactate dehydrogenases are usually cooperative homotetramers that seem to obey an allosteric, MWC-like reaction mechanism. D-lactate dehydrogenases may be homodimeric or homotetrameric, may or may not present homotropic cooperativity and in the best characterized examples, they obey a sequential reaction mechanism.
Here we analyze only two examples of D-lactate dehydrogenases from Gram-negative bacteria, whose three-dimensional structure is available: those from Pseudomonas aeruginosa (PaLDH) and from Fusobacterium nucleatum (FnLDH) [47]. Both are homotetrameric enzymes that present homotropic cooperativity, positive in the case of FnLDH (Hill coefficient for pyruvate, at constant NADH and pH = 8: n = 2), negative in the case of PaLDH (Hill coefficient for pyruvate, at constant NADH and pH = 8: n = 0.77). FnLDH is of special interest in this analysis because it is the only protein considered in this work that presents positive homotropic cooperativity and can be confidently assigned a sequential reaction mechanism [47].
Our analysis yields the following results: global (tertiary + quaternary) ligand-induced structure change has RMSDs = 6.44 Å for FnLDH and 3.30 Å for PaLDH; tertiary structure change has RMSDs = 4.7 Å for FnLDH and 2.35 Å for PaLDH. Both enzymes present significant ligand-independent tertiary structure differences due to asymmetry, but the description of this phenomenon is complex, and different in the two cases. Ligand-free (apo) PaLDH (PDB code 6ABJ) is a symmetric dimer of slightly asymmetric homodimers; the two subunits of the same dimer present RMSD = 0.31 Å. Ligand-bound PaLDH (PDB code 5Z20; the molecule contains NADH and oxamate, an inert analog of pyruvate) is an asymmetric tetramer in which no two subunits are identical, with an average RMSD = 0.5 Å. Ligand-free FnLDH (PDB code 6ABI) is a symmetric dimer of strongly asymmetric homodimers; the two subunits of the same dimer present RMSD = 1.9 Å. Contrary to PaLDH, the asymmetry is significantly diminished by ligation of substrates, and the four subunits of the liganded homotetramer (PDB code 5Z21) present an average RMSD = 0.25 Å. Thus, it seems that, contrary to the expectations, asymmetry in FnLDH is induced by de-ligation, rather than by ligation.
Bacterial Purine Nucleoside Phosphorylase
Purine nucleoside phosphorylase (PNPase) is an important enzyme participating in the purine salvage pathway. The enzyme activity is ubiquitous, but the bacterial and eukaryotic variants are unrelated. Bacterial (E. coli) PNPases are homohexamers, assembled as trimers of homodimers [48][49][50], whose structure corresponds to the one we previously characterized as rings of dimers [29]; all monomer-monomer interfaces are isologous, and two types of interfaces exist: one intra-dimeric, the other inter-dimeric. The "functional" dimer is clearly identified by the fact that the substrate binding sites are located at the intradimeric interface. The enzyme binds two substrates, namely phosphate and the nucleoside; the binding sites are adjacent to each other. The phosphate binding site is provided by residues Arg24, Arg87, Ser90 and Gly20, plus Arg43 from the neighbouring subunit. The ribose binding site involves Ser90, Met180, and Glu181, plus His4 from the neighboring subunit [49]. The purine base is exposed to the solvent.
The structure of the subunit is composed by a central 8-stranded β-sheet core surrounded by 8 α-helices; the last helix (helix 8) can assume three conformations: it can be continuous, opening access to the substrates binding site, or present an interruption and a bend, closing over the substrates binding site; in some derivatives, it can be partly destructured ( Figure 6).
The unliganded enzyme (PDB code 1ECP) [48] presents a mild intramolecular asymmetry, and pairwise superposition of the six subunits yields RMSDs in the order of 0.32 Å. In all subunits, the C-terminal helix (helix 8) is continuous, and the substrates' binding site is open. Upon binding of substrates, a very interesting structure change occurs, both at the tertiary and quaternary level. The homohexamer acquires a binary symmetry axis and is formed by two symmetric units containing one dimer and one half-dimer each (PDB code 1A69) [49]. The three subunits in the asymmetric unit differ largely in tertiary structure: one (subunit A) presents an interruption in helix 8, closing the access to the substrate binding site; the other two (B and C), which form the functional dimer, are both in the open conformation, in spite of the presence of the substrates in all three subunits. Accordingly, the three-subunit assembly is strongly asymmetric, the subunit presenting the split helix 8 differing from the other two by RMSD = 1.32 and 1.76 Å; and the two subunits presenting the continuous helix 8 differing from each other by RMSD = 0.98 Å (Table 3). The tertiary structure differences among the subunits in the liganded PNPase are greater than those between the liganded and unliganded PNPase (Table 3). More recently, a structure of partially substrate-liganded PNPase was obtained, in which the hexamer contains 6 molecules of phosphate (or sulfate) and only 2 molecules of the nucleoside analogue Formycin A (PDB code 4TTA) [51]. The structure of this derivative is intermediate between those of the fully unliganded and fully liganded PNPase, because: (i) the homohexamer does not present symmetry axes, as it occurs in the fully unliganded PNPase; (ii) the functional dimers are AD, BE, and CF; interdimeric contacts occur between subunits A-F, B-D, and C-E; two subunits (A and F) present the closed conformation, two (B and D) the open conformation, as it occurs in the fully liganded PNPase, and the last two (C and E) present a partially broken/destructured helix 8; thus, the inter-dimeric contacts impose greater symmetry than the intra-dimeric ones; (iii) the destructured part of helix 8 is not resolved in subunit C, thus this subunit was not included in the analysis of RMSDs; and (iv) the two molecules of the purine nucleoside analogue are bound to the closed subunits, confirming the expectation that the closed structure has higher affinity for the substrate.
The presence of open and closed subunits in the fully and partially liganded states of the enzyme is consistent with the observed negative cooperativity. The destructured helix 8 causes a tertiary conformation that differs strikingly from both the open and closed ones, with RMSDs of 2.8 and 3.5 Å, respectively (Table 3).
The distribution of the open and closed (and destructured) subunits in fully and partially liganded E. coli PNPase obeys a remarkable rule: closed subunits belong to different dimers, but are adjacent, i.e., they contact each other via the inter-dimeric interface. This distribution is achieved because of symmetry or quasi-symmetry of the subunits connected by inter-dimeric interfaces, in spite of the strong asymmetry between subunits connected by the intra-dimeric interface (i.e., belonging to the same dimer). This distribution suggests a hitherto unrecognized functional relevance of the hexameric assembly. Moreover, the ordered distribution of the asymmetric subunits in liganded PNPase explains the otherwise perplexing fact that the average RMSD for the superposition of subunits is larger than the overall RMSD for the superposition of the whole assemblies: indeed, some subunits never superimpose when superpositions are explored for the whole hexamers.
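The pairwise RMSD values discussed above and collected in Table 3 come from standard least-squares superpositions of equivalent Cα atoms. The authors do not state which software was used; the snippet below is a minimal sketch of how such per-subunit comparisons can be reproduced with Biopython's Bio.PDB module. The file names and chain identifiers are illustrative, and a careful analysis would rely on an explicit residue mapping between models rather than the simple intersection of residue numbers used here.

```python
# Minimal sketch: Calpha RMSD between one subunit of unliganded and liganded PNPase.
# Assumes Biopython is installed and the PDB files have been downloaded locally;
# file names and chain IDs are illustrative.
from Bio.PDB import PDBParser, Superimposer

parser = PDBParser(QUIET=True)
unliganded = parser.get_structure("unliganded", "1ecp.pdb")[0]  # first model
liganded = parser.get_structure("liganded", "1a69.pdb")[0]

chain_u, chain_l = unliganded["A"], liganded["A"]

# Use residues (by number) present in both chains and carrying a Calpha atom;
# a real analysis would use a sequence/structure alignment instead.
ids_u = {r.id for r in chain_u if r.id[0] == " " and "CA" in r}
ids_l = {r.id for r in chain_l if r.id[0] == " " and "CA" in r}
common = sorted(ids_u & ids_l, key=lambda rid: rid[1])

fixed = [chain_u[rid]["CA"] for rid in common]
moving = [chain_l[rid]["CA"] for rid in common]

sup = Superimposer()
sup.set_atoms(fixed, moving)  # least-squares fit of 'moving' onto 'fixed'
print(f"RMSD = {sup.rms:.2f} Å over {len(common)} Cα atoms")
```

Comparing two subunits within the same hexamer (e.g., the open and closed subunits of 1A69) proceeds in the same way, simply by selecting two chains of the same structure; looping over all chain pairs yields the kind of pairwise RMSD matrix summarized in Table 3.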
Other Enzymes Obeying a Sequential Reaction Mechanism
dTMP synthase is the homodimeric, negatively cooperative enzyme that catalyzes the final step of dTMP biosynthesis; its substrates are dUMP and methyl-tetrahydrofolate. The enzyme presents strong negative homotropic cooperativity, with half-of-the-sites reactivity: in the presence of excess dUMP and the THF analogue CB3717, the homodimer binds two molecules of dUMP but only one of CB3717. The binding site(s) of substrates are located at the monomer-monomer interface [52]. The (half-) liganded enzyme is asymmetric, the two monomers differing with RMSD = 0.43 Å. Unfortunately, to our knowledge, the structure of the unliganded enzyme is not available, thus we could not carry out a complete analysis.
Glyceraldehyde-3-phosphate dehydrogenase (G3PDH) is a homotetramer presenting negative cooperativity for NAD+, which binds to only two subunits, confirming the half-of-the-sites reactivity of the enzyme [53]. Indeed, G3PDH is noteworthy because, at least in the enzyme from yeast, the first two molecules of NAD might bind with positive cooperativity, whereas the last two exhibit negative cooperativity [54]. The subunits of the tetramer bearing 2 molecules of NAD+ (PDB code 1J0X) present mild asymmetry, with an average RMSD of 0.31 Å. Quite surprisingly, the asymmetry extends to all four subunits, i.e., the tetramer cannot be described as made up of two identical but asymmetric dimers, each bearing one molecule of NAD+. Here again, we did not find any structure for the fully unliganded enzyme.
Vivoli et al. [55] make an interesting case for bacterial heptose isomerase, a tetrameric enzyme that may present both positive and negative homotropic cooperativity, in the absence of a clear-cut quaternary structure change; however, we feel that their study, though promising, does not include enough information to be included in the present analysis.
Conclusions
The aim of this work is to compare examples of the structures of cooperative proteins obeying either the MWC or the KNF reaction mechanism and to look for structural features that are common to either group or which may discriminate between the two groups. We resorted to a model-free analysis based on the RMSDs, because more refined analyses based on the measurement of "movements" of subunits or domains cannot be easily generalized to the case of structurally unrelated proteins. Our work has several limitations, essentially because of the limited number of proteins for which sufficient information is available. Indeed, we required: (i) a reasonable certainty about the ligand binding reaction, whether sequential or concerted; and (ii) the 3D structures at atomic resolution of at least the unliganded and fully liganded states, possibly also of partially liganded intermediates. The first requirement is particularly demanding because sequential and concerted reaction mechanisms are both compatible with positive cooperativity, and distinguishing between them is based on assumptions that, despite being very clearly defined, may be difficult to demonstrate empirically. The essential features that distinguish the MWC from the KNF model are listed in Table 1. Assessing these features is less straightforward than it may appear at first sight because the KNF reaction scheme requires that the incompletely saturated reaction intermediates break the structural symmetry of the oligomer, but makes no assumption on the fully unliganded and fully liganded derivatives. In the majority of cases, only the structures of the fully unliganded and fully liganded protein are available, while those of ligation intermediates, which would be more informative, are usually not available, except for proteins presenting negative homotropic cooperativity. Two distinguishing criteria are: (i) the demonstration of a ligand-independent structure change, promoted by heterotropic effectors or other experimental conditions, strongly suggests the concerted mechanism; (ii) negative homotropic cooperativity is consistent with the concerted mechanism only if subunit inequivalence or a specific (and uncommon) breakage of symmetry is present in the R state; thus, as a general rule, it is an indicator that some variant of the KNF reaction scheme is obeyed. Quaternary structure differences between the fully liganded and fully unliganded states, per se, do not imply either the MWC or KNF reaction scheme, but are compatible with both. Moreover, the possibility should be considered that the same protein might present both types of phenomena; we consider this possibility as the sum of the two mechanisms, rather than some intermediate between them.
This study represents a first attempt to quantify some aspects of structure-function relationships of cooperative proteins at a group level, rather than going into detail on single cases; in particular, since only a few examples were available for each group, we cannot claim to generalize our results. Moreover, we excluded possible examples of KNF-like proteins presenting positive homotropic cooperativity (with the exception of FnLDH), because of the criteria given above. Nonetheless, our analysis offers some general suggestions: (i) All the proteins considered here, except PNPase, are homodimers, either homodimers of monomeric subunits or homodimers of more complex assemblies (e.g., of heterodimers, as in the case of Hb, or of trimers, as in the case of transcarbamylase). All of them, PNPase included, present isologous interfaces [3].
(ii) In the absence of ligands, all the proteins considered in this work have symmetric or quasi-symmetric quaternary structure, with the exception of FnLDH, whose unliganded state presents significant asymmetry (see Table 3). In their liganded state they are symmetric or quasi-symmetric if they obey a concerted reaction mechanism, asymmetric if they obey a sequential one. Glycogen phosphorylase b is an outlier in this respect because it is a reputed allosteric protein but presents R state asymmetry.
(iii) Of particular interest are the partially liganded derivatives, whose structures are available in a minority of cases. In Hb, the doubly-liganded derivatives of known structure are symmetric and have the same quaternary structure as the unliganded protein; however, one should consider that all such derivatives whose structure is available have one ligand per heterodimer, a factor that may favor a symmetric structure. In all other cases in which the structure of partially liganded derivatives is available, the protein presents negative cooperativity and is thus not compatible with the MWC model. In these cases, the partially liganded protein is asymmetric, consistent with the expectation of the KNF model. Interestingly, the fully liganded derivative may often remain asymmetric.
(iv) The tertiary structure changes responsible for cooperativity are small both in MWC-like and KNF-like proteins, and limited to small portions of the subunit. Quaternary structure changes favored by ligation are usually large in proteins obeying the concerted reaction mechanism, as predicted by the MWC model, and usually small in sequential-binding proteins obeying a KNF-like reaction scheme (Table 4). We can rationalize this finding by saying that in a macromolecule that maintains symmetry, a large quaternary structure change is instrumental to obtain the tertiary rearrangements responsible for the changes in ligand affinity; by contrast, symmetry violations may be achieved by limited, if any, quaternary structure rearrangements and local tertiary conformational changes. For example, in the case of Hb, a 1 Å change in the distance between the C helix and FG corner within each subunit [56] is obtained by having the βFG corner slide over the αC helix by one helical turn, a movement of approximately 6 Å [12]. If we compare the tertiary and overall ligand-dependent structure changes (Table 4), we observe that, in our set, in the proteins obeying a sequential reaction mechanism, tertiary changes correspond to ≥70% of the overall RMSD, whereas in the proteins obeying a demonstrated concerted mechanism, tertiary changes account for only ≤30% of the overall RMSD, the remainder being accounted for by the quaternary structure changes. These estimates should be taken as indicative, given that the tertiary and quaternary contributions to RMSDs are not additive (a short numeric sketch illustrating this heuristic is given after this list). The cases of PFK and GPb do not fit the above estimates, but probably in these enzymes concerted and sequential events coexist.
(v) α-helices play a very important role in the ligand-linked structure changes we observed (see Figures 4 and 5).
(vi) Isologous interfaces are compatible with asymmetry. This occurs because monomers forming a perfectly isologous interface in the absence of the ligand may undergo subtle tertiary structure changes that prevent some of the interface contacts from remaining perfectly symmetric: e.g., a loop may move and prevent the equivalent loop on the symmetry-related subunit from moving in the same way; or an intersubunit contact may be tighter in one place, and looser at the symmetry-equivalent site. These effects may be exquisitely subtle.
(vii) In the structures we analyzed above, it is quite common that the ligand binding site occurs at a monomer-monomer interface, and interacts with residues from both monomers; this is the case for several proteins in our analysis, e.g., PFK (reputedly a MWC protein), and the Asp receptor. Ligand binding at the interface is associated with quaternary enhancement and is not incompatible with the allosteric MWC model.
(viii) In the half-liganded state of proteins that present negative homotropic cooperativity, we observed not only an asymmetry of the constituent monomers, but also tertiary structure differences between both monomers and the monomers of the unliganded state (see Table 3), i.e., ligand binding to one subunit changes the structure of both subunits albeit in different directions. In some cases, these changes are exquisitely ordered within the oligomer (e.g., in PNPase).
(ix) In our set of enzymes, allosteric or reputedly allosteric ones present heterotropic regulation in addition to homotropic cooperativity; heterotropic regulation is usually absent in the enzymes obeying a sequential reaction mechanism (except for hydrogen ions). There are, however, some examples of enzymes presenting negative homotropic cooperativity and heterotropic regulation (e.g., glyceraldehyde 3-phosphate dehydrogenase and deprenyl derivatives [53]), which would deserve a future study. This observation establishes a link between Monod's two definitions of cooperativity [2,3].
(x) Allosteric enzymes, as predicted by Monod et al. [3], usually catalyze one of the initial reactions of a metabolic pathway. The case for enzymes obeying a sequential mechanism is not straightforward, due to the fact that we selected them because of negative homotropic cooperativity, which, as suggested by Cornish-Bowden [57], is confined to the central or terminal reactions of metabolic pathways.
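As a numerical companion to point (iv), the short sketch below illustrates the tertiary-fraction heuristic used there: the average single-subunit (tertiary) RMSD is divided by the overall assembly RMSD, and a mostly-tertiary result is read as suggestive of a sequential (KNF-like) scheme, while a mostly-quaternary result is read as suggestive of a concerted (MWC-like) scheme. The numbers are placeholders rather than the values of Table 4, and, as stressed above, the two contributions are not strictly additive, so the fraction is indicative only.

```python
# Illustrative only: classify ligand-linked structure changes by the fraction of the
# overall RMSD accounted for by tertiary changes. Values are placeholders, not Table 4.
examples = {
    "concerted-like example":  {"tertiary": 0.6, "overall": 2.4},  # quaternary change dominates
    "sequential-like example": {"tertiary": 1.1, "overall": 1.3},  # tertiary change dominates
}

for name, rmsd in examples.items():
    fraction = rmsd["tertiary"] / rmsd["overall"]  # indicative ratio; contributions are not additive
    if fraction >= 0.7:
        regime = "mostly tertiary (suggests a sequential, KNF-like scheme)"
    elif fraction <= 0.3:
        regime = "mostly quaternary (suggests a concerted, MWC-like scheme)"
    else:
        regime = "mixed (concerted and sequential events may coexist, as in PFK or GPb)"
    print(f"{name}: tertiary/overall = {fraction:.2f} -> {regime}")
```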
Our "static" structural analysis may be compared with some pertinent information coming from rapid kinetic experiments and molecular dynamics simulations. In the case of Hb, both rapid photolysis experiment and molecular dynamic simulation consistently demonstrate that the allosteric quaternary structure change occurs over a time window of tens to hundreds of microseconds [58,59], whereas the first tertiary relaxations that follow photochemical de-ligation occur over a time window of hundreds of nanoseconds. This is confirmed by molecular dynamic simulations [60]. Few comparative analyses on different proteins have been carried out, e.g., [61]. It is difficult to generalize the results obtained thus far, but it is tempting to propose the hypothesis that the rate of the structural changes following de-ligation might be indicative of the reaction mechanism, the expectation being that proteins obeying a sequential mechanism (which require mainly tertiary structure changes) would relax much faster to their equilibrium unliganded conformation than proteins obeying a concerted mechanism (which requires substantial quaternary structure changes).
It is interesting to compare the results presented here with some previously published studies. Daily and Gray [62] developed a method to analyze tertiary and quaternary structure changes in allosteric proteins and interpreted their results in terms of a "global communication network" (GCN). These authors, however, defined as allosteric any protein presenting heterotropic regulation: non-competitive enzyme inhibition, sequential and allosteric cooperativity, and ligand-induced association-dissociation all fit their definition of allostery, which is even broader than that originally proposed by Monod in 1963 [2], let alone the identification of allosteric with concerted proposed by Monod in 1965 [3]. They find that the GCN occurs predominantly at the quaternary level for some of the proteins of their set, and predominantly at the tertiary level for other proteins, but is mixed or interdependent (correlated quaternary and tertiary) for the majority of the proteins. Surprisingly, they exclude hemoglobin and do not find a clear result for ATCase, the two best-demonstrated examples of truly allosteric proteins obeying Equation (2).
Johnson and Barford [39] carried out a comparative analysis of structural features of allosteric proteins. These authors did not distinguish between sequential and concerted reaction schemes, but the proteins they analyzed are all known or putative cases of allosteric, concerted homotropic cooperativity (Hb, ATC, GP, and PFK). They rightly emphasize that the proteins considered are symmetric oligomers. Since their analysis lacks examples of proteins obeying a sequential reaction mechanism, these authors cannot discuss whether the reaction mechanism has identifiable structural determinants.
"year": 2022,
"sha1": "5e5f29efc7f35a35c567873d1f2ccc349a7d360a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-273X/12/11/1651/pdf?version=1668568035",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d0fce3a0a6940f32a3ac905ca30ccdd587d91f4a",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Supply Chain Management: Some Reflections to Improve its Influence in Business Strategy
Supply Chain Management (SCM) is a relatively new management discipline that can be approached in different ways. However, only a strategic approach to SCM, leading to subsequent tactical decisions and operational implementation, will deliver the benefits that SCM promises. A presentation of the evolution of SCM, from its beginnings as part of Operations Management (OM) to an independent field in management, has the objective of determining its correct scope. Therefore, we examined papers where the research agenda of both OM and SCM -from the beginnings of these disciplines to the present day- has been discussed, in order to show the evolution of SCM as a field of high relevance in OM. We emphasize the strategic nature of SCM and the importance of considering it in this way. We argue that thinking of SCM as directly related to logistics as its main focus is a narrow view, since this logistic vision creates a bias that limits the real scope of SCM. For this strategic vision of SCM to be implemented correctly, it is crucial to define both the role and the profile of SC managers.
Introduction
History -in terms of the evolution of research approaches and subjects of interest- has had its role in helping to frame the right questions to ask when teaching, researching, or practicing (Wren, 1987). To analyze the evolution of Operations Management (OM) we can go back to the time of the Industrial Revolution, or even before (Sprague, 2007). The difference between operative and non-operative decisions is crucial. The former are decisions that can be taken based on certain information: a good engineer with a spreadsheet, for example, can arrive at a solution and apply it. The latter exist in the domain of uncertainty; there is no unique solution, hence, no matter what decision you choose, you cannot arrive at a full solution; the "right" decision does not exist. Managers have to deal with these kinds of problems, the non-operative ones, since their decisions are based on which consequences (problems) -derived from those decisions- they are willing to live with. That is what management is about. This consideration is important because, especially in OM, there was a period when operations research dominated OM under the figure of OM/OR (operations management/operations research). In this period of more than 30 years, OM experienced unquestionable growth based on engineering decisions rather than managerial ones (Ackoff, 1979; Buffa, 1980; Chase, 1980; Meredith et al., 1989; Voss, 1984, 2007; Wren, 1987).
We can situate this period between the post-World War II years and the 80s. From that time on, various scholars called for a change, proposing a new approach to the field and addressing operations problems from more managerial, rather than purely theoretical, perspectives. The consequence was an important shift in the applied methodology.
OM was oriented from its origins toward production; hence, the topics and most important issues remained focused on production problems for several decades. It is also in the 80s that a break is observed, when services, technology, and integration appeared as important issues (Miller et al., 1981; Pannirselvam et al., 1999).
This journey along the history and evolution of OM considers the progress made regarding topics, the transition from production management to a strategic view of operations, and the changes in methodology directly related to the necessity of linking theory with practice, and scholars with managers, because, as a scientific discipline, operations and SCM continue to look for practical relevance, theoretical impact, and interrelationships with other fields of knowledge (Coughlan et al., 2016). The field finally moves beyond the firm's frontier, linking operations with providers, customers, and the rest of the stakeholders -participants in the whole business-, that is, the Supply Chain Management concept.
We explore different approaches to SCM, starting from this discipline as a synonym of logistics, to later address SCM as a strategic approach and a philosophy of management. Once we establish the origin of SCM, we discuss the strategic role of this field in the organization and the importance of linking the SCM strategy to the business strategy. To carry this strategy forward, we establish the role of the SC manager and their relationship with other areas of the organization and its stakeholders, among other characteristics.
From the beginnings of OM to SCM
This revision of OM evolution is not exhaustive, it rather aims at understanding the line of thought that guided OM into what is known as SCM, establishing the basic concepts that originated this new discipline.
Before the Industrial Revolution, production was poorly organized, being reduced to agriculture, livestock, and mining. Then, in the 16th century, Georgius Agricola wrote "De re metallica" (On the Nature of Metals), a book that catalogs the state of the art of mining, refining, and smelting metals. This work has numerous woodcuts that provide annotated diagrams illustrating equipment and processes of that time described in the text, as well as information on the organization of work, management issues, and tools to be used, being probably the first OM textbook (Voss, 2007). Lewis (2007) goes back to the 19th century to rescue the works of Charles Babbage, centering the body of his work on the book On the Economy of Machinery and Manufactures. Babbage is an OM pioneer, whose work is arguably linked with Adam Smith's Inquiry into the Nature and Causes of the Wealth of Nations, hence having a leading role in the contributions to the Industrial Revolution.
Evidently, the Industrial Revolution is a milestone for OM. Those first attracted to Taylor's writings were engineers who had seen his experiments and publications appear in the transactions of the American Society of Mechanical Engineers (Wren, 2011), and had read his first book, Shop Management.
In 1911, Taylor published his seminal work, "The Principles of Scientific Management," in which he laid out the process of scientifically studying work in order to increase workers' and organizational efficiency (Giannantonio & Hurley-Hanson, 2011). Taylor's work went beyond OM, and he is considered one of the most important contributors to management. Smith, Babbage, and Taylor are exemplars of a widespread phenomenon during the 19th and early 20th centuries (Sprague, 2007), although Taylor shifted from studying machinery to examining workers, their work, and their management. In this way, from the end of the 19th century up to WWII, knowledge moved to a new approach: the Productivity Revolution. The Gilbreths (Frank and Lillian) and Ford were two other important protagonists in the development of OM at the beginning of the 20th century, the former with the study of movements, the latter with the assembly line. Henry Ford and Charles Sorenson developed a comprehensive manufacturing strategy by combining standardized parts with an assembly line in 1913 (Bayraktar et al., 2007). Moreover, it is well known that Ohno's Toyota Production System rescued principles laid down by Ford (indeed, Taiichi Ohno devoted a chapter of his book Toyota Production System to the Ford system). Based on this progress, the period between 1890 and 1920, where the works by Taylor, Gilbreth, and Gantt were consolidated, was later defined as "scientific management." However, despite the Great Depression of the 1930s, the period from 1920 to 1960 can be considered in many ways as the "golden age" of the development of US industry (Bayraktar et al., 2007). Beyond Taylor's and Gilbreth's work -focused on machinery and workers- the OM paradigm required other explanations, being Elton Mayo who revealed other important aspects of operations.
While efficiency through different techniques was the focus of OM, Mayo discovered that other factors had great influence. His experiments at the Hawthorne factory, between 1924 and 1927, showed the importance of the human behavior for OM, thus provoking a shift in the efficiency paradigm (Brown, 1998).
Years later, WWII triggered a race that the US was not prepared for, since the country had a quasi-obsolete fleet, few (if any) merchant ships and hardly any destroyers to protect them, and a not well-developed industry, especially in the field of precision optics. However, the application of Taylor's "task study" allowed the US to revert this situation. In less than 3 months, unskilled workers were converted into first-rate welders and shipbuilders (Drucker, 1993). It was then necessary for US industry to shift from the commercial to the military role. After WWII, factory management and production management were the center of the development of operations. A singular fact happened in 1959, when, almost simultaneously, the Carnegie Foundation and the Ford Foundation published two studies related to education. The conclusion of both was similar: sound education in business was not happening, and business schools had to change their goals and methods. As a result, the OR/OM era began, and industrial and production engineers began to move from engineering schools to business schools.
The American Production and Inventory Control Society (APICS), founded in 1957 by "practitioners" in production and inventory control, played an important role in the evolution of the field. Singhal and Singhal (2007), in the special issue of the JOM on the evolution of OM, wrote that the work by Holt, Modigliani, Muth, and Simon -HMMS- contributed to the renaissance of the field of OM as we know it today. These authors showed how aggregate production planning would evolve into the current concept of sales and operations planning, establishing links between strategic and tactical decisions in a firm. According to them, aggregate production planning links operations with strategy, but does more for organizational integration by linking operations with other areas. It also drives inter-organizational coordination by linking the organization outside its frontiers, including the concepts of SCM in OM. In the years following the end of WWII, OM got immersed in an identity crisis. The applications of operations research as the core of OM affected the natural evolution of the field, making it lose considerable interest among people. In this regard, Buffa (1980) described three main phases of OM in the decades after WWII: i) a period comprising the 1950s, where OM was called "Industrial Management" or "Factory Management", characterized by a descriptive approach; ii) a period of two decades (the 60s and 70s), known as "Management Science/Operations Research" or MS/OR, focused on applying a hard quantitative scientific methodology, where scholars were far from managers; and iii) a third period known as "Operations Management," where OM began to be a functional field of study within management sciences.
The model for OM had to be changed, and several researchers called for a necessary transformation in the orientation of the field. At the beginning of the 80s, the works of Buffa (1980) and Miller et al. (1981) described what OM was and what it had to be. Like others, Buffa situated the beginnings of OM in the works by Smith and Babbage first, and Taylor later, which were centered basically on production. The difficulty in that decade for OM to definitively establish its identity was a broken bridge between the descriptive phase that held sway in the 50s and the almost exclusively OM/OR approach established afterwards (Buffa, 1980). OM/OR gave OM its scientific methodology, and the flourishing of this discipline -from the 60s to the 80s- as a scientific field supported by OR put it at the top of the management disciplines, although making it lose its identity. Hence, it was difficult to differentiate between OM and MS/OR. Chase (1980), in the same line, called for more case studies and fewer laboratory techniques. In response, a great advance in inventory, scheduling, aggregate planning, quality control, and capacity planning, among others, was observed, although mostly as isolated subsystems; therefore, as stated by Buffa (1980), "[…] we view the field as a collection of seemingly unrelated subsystems rather than a whole system […]" (p. 2). According to Chase (1980), OM research was mostly micro-oriented, and he suggested system-wide studies. For his part, Miller called for improvement in the communication between OM researchers and managers (Miller et al., 1981). In addition, Buffa called for an OM research agenda related to the "practical world," recommending that OM researchers make their research results understandable and acceptable to practitioners (Buffa, 1980). Buffa's and Chase's articles appeared in the inaugural issue of the JOM, whose editor, Lee Krajewski, also called for less OR and more empirical research. With the advance of computer systems, material requirements planning (MRP) acquired a central place from the 70s onward, later being enhanced to MRP II. It was through the next stage of MRP that OM contributed to management integration with the development of enterprise resource planning (ERP), another important milestone for OM.
In the 70s, new approaches to OM came from Japan. MRP, conceived basically as a push system, was challenged by an opposite view. The just-in-time (JIT) philosophy proposed a pull system where the focus was set on quality. The US felt the invasion, and the adaptation of the JIT philosophy to US industry quickly occupied the OM agenda. Voss (1984), in a British view of the same crisis, attempted to shed light on the difference between production and operations management (P/OM) and operations research (OR), explaining OR as a discipline in its own right, with applications in marketing, finance, personnel, accounting, and P/OM. According to this author, OR is concerned with modeling and optimizing, while P/OM is concerned with procedure and process and may occasionally use OR-based procedures when deemed appropriate. Works on the topics of process design/technology and manufacturing strategy attracted more attention than in the earlier period (Filippini, 1997). According to Heizer (2006), cited by Bayraktar et al. (2007), although efforts in OM mainly focused on cost reduction during the early 1980s, within the next decade the focus shifted to quality through the collaboration of information systems and leanness.
The 90s, however, witnessed a significant and welcome change. The creation of the Production and Operations Management Society (POMS), back in 1989, and its stated "[…] objective in publishing this journal [POMS journal] is to improve practice," marked a critical period of research in OM during the 90s, when empirical research started to appear in substantial quantity. This period can be seen as the "growth" phase of empirical research in OM.
Another important change can be seen in the evolution of research in the field of services. In the period from 1992 to 1998, nearly 75% of the published articles were production-oriented, whilst the 1998-2006 period witnessed an equilibrium between service and production articles. Until the mid-90s, empirical research was focused on specific and stand-alone topics, with potential in the interfaces between OM and other areas, such as accounting, finance, human resources, management, information systems, and marketing. In this context, the growth in SCM networks research, not only interdisciplinary but inter-organizational, was necessary for analyzing real-world operations management problems (Gupta et al., 2006).
The Journal of the Operations Research Society of America, created in 1952, made a call for a special issue in 1996 with the objective of broadening the range of research articles published in OR within the field of OM. In the preface, the authors recognized the new directions in OM's methodology toward a wide range of interdisciplinary and empirical approaches (Cohen & Magazine, 1996).
Ultimately, OM research shifted to the topics that had been indicated since the 80s and 90s as of substantial importance to the development of the field. Arriving at the present day, and based on the analysis of more than 300 articles from the IJOPM, the focus is put on SCM, operations strategy, performance management, service operations, lean management, resource planning systems, quality management, and product design/development (Taylor & Taylor, 2009). Hence, we cannot deny the importance and contribution of OR/OM to current OM.
The 90s depicted a shift toward a more strategic focus from the micro-focus noted by Chase in the 1980s (Pannirselvam et al., 1999). In this regard, a study on the 1980-2006 period found that the intellectual structure of the field underwent statistically significant changes between the 1980s, the 1990s, and the 2000s, evolving from a preoccupation with narrow and tactical topics toward more strategic macro-topics (Pilkington & Fitzgerald, 2006). On the other hand, an exhaustive analysis of OM publications from 1987 to 2003 shows how the discipline has evolved from an axiomatic approach toward an empirical and interpretative view, and from artificial reconstructions toward direct observations and perceptions of people (Craighead & Meredith, 2008).
As we can see through this journey on the evolution of OM, many elements, emerging as new trends, were knitting the threads of what is now known as SCM. Figure 1 shows the journey across OM history and a reference view centered on topics, methods, journals, researchers, and schools. It is probably not the only way to reconstruct this history, but, in an informative way, it shows how researchers viewed and proposed OM and what it has become now.
Figure 1. Stages in the journey across OM history. Source: authors.
The appearance and development of SCM
Traveling across the literature on OM, and specifically on SCM, it seems difficult to find a connecting thread between both areas. We mentioned before that we can trace the origins of SCM to the work by HMMS, who brought, as cited by Singhal and Singhal (2007), two paradigm changes: i) unrelated and non-managerial individual functions emerging as part of an integrated system of managing production; and ii) aggregate production planning taking a central role in operations management by establishing a link with the supply chain and internal integration.
Another milestone in the development of the current concepts of SCM is the systemic approach to the organization, resulting in a more integrated view, known as "systems dynamics," which gave way to a more holistic understanding of the factors involved and of the interrelations inside and outside the frontiers of the organization (Forrester, 1958).
Between 1982 and 1986 there was a great increase in the areas of product design, strategy, and quality, confirming -in a way- the predictions made by Miller et al. (1981), though the work by Amoako-Gyampah and -nearly from the same period- shows that 70% of publications cover inventory control and scheduling as topics. Pannirselvam et al. (1999) examined the status of operations management academic research in the 1990s, comparing research trends at that time with past research directions in terms of the topics and the methodologies applied.
Amoako-Gyampah and signal new topic areas based on the 17-issue classification presented by Chase (1980), where we can find SCM for the first time. These new areas are new product development, technological management, technology choice, environmental concerns, and SCM.
Nevertheless, by 1997 SCM was not within the OM agenda, as we can observe in the work by Filippini (1997), who used the term supply chain to refer to an evolution of purchasing and mentioned the interaction with customers and suppliers.
A conclusion of the citation and co-citation analysis of the IJOPM between 1994 and 2003 is that emerging subjects within the field include SCM, among others. The hot topic of the 1990s -manufacturing strategy- lost most of its interest in the 2000s, while all the other topics that became relevant between the 80s and 90s continued to gain interest, especially SCM and quality. In this way, SCM appears to be moving away from the more tactical interests of OM -namely inventories, processes, and measurements- even cutting back its interest in strategy in favor of more tactical and macro issues, such as supply chains and research methodologies (Pilkington & Fitzgerald, 2006).
The work by Taylor and Taylor (2009), which studies the period from 2004 to 2009, indicates a strong presence of SCM in the research agenda. As cited by these authors, "To summarize, the three prior studies suggest that several topics are at the forefront of the OM research agenda, especially SCM, operations strategy, performance measurement, and possibly lean systems" (Taylor & Taylor, 2009, p. 1320).
Looking backwards at the evolution of OM, we can appreciate the emergence of SCM at some time between 1980 and 1990. With regard to the events in the evolution of OM that triggered the appearance of the SCM concept, we mainly find the following: 1. The tendency toward internal alignment claimed since the 1980s, which, in short, recognizes the importance of coordinating the different functions inside an organization beyond better performance (Buffa, 1980; Chase, 1980; Filippini, 1997; Larson et al., 2007; Meredith et al., 1989; Miller et al., 1981). This internal alignment is translated into a "holistic approach" of OM, establishing links among the basic management systems -organizational structure, planning, management control, communication and information, and evaluation and rewards- in order to facilitate decision-making processes (Groff & Clark, 1981). In this context, interdisciplinary and inter-organizational research become necessary for analyzing real-world operations management problems (Gupta et al., 2006) and both the internal and external supply chain, known as the "extended supply chain" (Houé & Guimaraes, 2017).
2. The development of logistics as an important issue for management, which provides a more integrated view of typical OM issues such as inventories, supply, and distribution. We can notice that the internal alignment we referred to before is a must if we see logistics under the integration paradigm (Larson et al., 2007).
3. Purchasing, which many times has been related to supply. Both in academia and in the professional world, supply management and SCM were often treated as synonymous. Therefore, it is important to understand the relation between supply and SCM, which is still a supply-oriented view of the incipient SCM discipline (Filippini, 1997).
4. Time compression (Beesley, 1996; La-Londe & Masters, 1994; Mason-Jones & Towill, 1998).
5. The strategic view of operations that shows the necessity of interacting with other stakeholders, especially, but not exclusively, with customers and suppliers.
The origins of SCM are mostly grounded in the reality of logistics. This is how SCM came to be strongly identified with logistics, although they are not the same. A strategic view, internal integration, and relationships beyond the enterprise frontier represent the building blocks of SCM. Figure 2 shows a frame of the evolution of SCM.
Strategic view of SCM
What is the correct approach to the SCM concept? During the last 20 years (or more), we have seen a variety of definitions and concepts of SCM, which are normally linked with logistics or purchasing.
There is a tendency to relate SCM with managing the flow of products or services. This interpretation is not wrong at all, but it is not the essence of SCM.
It is interesting to look back to when the concepts of SCM began. Forrester, who introduced a theory of distribution management, recognized the integrated nature of organizational relationships (Forrester, 1958). He is probably the first to have studied the interaction among firms, as he identified key management issues and illustrated the dynamics of factors associated with the phenomenon referred to in contemporary business literature as SCM (Mentzer et al., 2001).
The relationship between SCM and logistics, as well as between SCM and OM, is not a minor issue. In fact, in many firms there is a misunderstanding of both concepts, and it is common for them to be used as equivalents. If we focus on a logistic view of SCM, we realize we are in an operational field of SCM. In the same way, if we try to confine SCM to the world of OM, we will be leaving out strategic relations with areas outside OM that are essential for a complete SCM strategy. This misalignment around the relation between logistics and SCM is presented by Larson, who identifies four conceptual perspectives from practitioners: i) a traditionalist perspective, where SCM is part of logistics; ii) a re-labeling perspective, where SCM replaces logistics; iii) a unionist perspective, where logistics is part of SCM; and iv) an intersectional perspective, where logistics and SCM are related and have commonalities (Larson et al., 2007). The intersectional approach is where the ultimate goals of this discipline will be reached, and where the strategic focus is present. Some essential areas of research that could clarify for managers when and how it would be most suitable for their companies to align the SCM strategy with the business strategy are still fragmented and incomplete.
According to Houé and Guimaraes (2017), "supply chain management is at the heart of business strategy" (p. 5); thereby, the SCM strategy should be taken into account when the organization outlines its business strategy (Akın-Ateş et al., 2018; Bag et al., 2018). On the other hand, how to operationalize these strategies, as well as SCM practices and success factors for their implementation, is sometimes disconnected from the business strategy.
This strategic view of SCM indicates its direct relation with business goals. Therefore, disregarding the "long term" approach in favor of "short term" objectives forfeits significant opportunities for firms to enhance their financial performance, create strategic advantages, and achieve mutually beneficial performance outcomes (Obied-Allah, 2015).
As such, the impetus to integrate is not necessarily to make a process more efficient or capitalize on economies of scale. Instead, integration occurs when supporting firm goals or objectives. In this regard, strategically integrating may have a stronger relationship with improved performance since the foundation for integration is not operational in nature, but rather the foundation is to support an underlying strategy (Ralston et al., 2015).
To obtain the results promised by SCM, we should consider its three hierarchical dimensions: i) an SCM strategy linked with the business strategy, considering SCM as a management philosophy and thus establishing the basis of the strategy; ii) SCM as a set of activities to implement such management philosophy at the tactical level; and, finally, iii) SCM as a set of managerial processes, which is the operationalization of the SCM philosophy (Mentzer et al., 2001).
As a management philosophy, SCM is directly associated with a system approach where the focus is the whole chain, rather than a particular organization, and cooperation is at the core of the strategic view toward a strong customer orientation. Collaboration involves multiple firms or autonomous business entities to engage in a relationship aimed at sharing improved outcomes and benefits. To achieve these improvements in performance, businesses need to establish an appropriate level of trust, share critical information, make joint decisions, and, when necessary, integrate supply chain processes (Soosay & Hyland, 2015).
Based on the above, SCM strategies need to be oriented around these fundamental topics. There is no doubt about the business nature of such conditions, hence the absolute necessity that both strategies -business and SCM- be strongly linked. Consequently, the participation of the latter in the former is imperative.
What identifies SCM is the coordination that the whole chain requires with the purpose of achieving overall performance. This overall performance implies a long-term perspective, regardless of the role of each player in the chain, collaborating to create a win-win condition (Simamora et al., 2016). Normally, the whole chain involves more than one organization, and the level of vertical integration in a chain can affect the implementation of the SCM strategy, but not SCM concepts.
This efficiency has to be found or obtained through the different areas within the organization and through the chain. Consequently, reducing SCM to a fully coordinated logistic approach including vendors and customers is a narrow view, far from what SCM tries to be. SCM concentrates upon relational rather than transactional factors (Cavinato, 1992). Besides, SCM includes areas such as research and development, product design, and plant location, in all their aspects (Ferdows, 1997; MacCormack et al., 1994; Mentzer, 2008), as well as any other area that needs to be coordinated with the objective of making the final product or service have a lower cost or providing a better service level to final customers. There is definitely a need for the integration of business operations in the supply chain that goes beyond logistics (Cooper et al., 1997a).
Even if it is obvious by definition, SCM is the management of the supply chain, not only its existence, which represents a big difference, especially if we understand what we are trying to manage among organizations. This meaning of SCM is really about value chains or value networks, which are broader than the supply chain (SC), as they involve more than supplying. Thus, we draw a definite distinction between supply chains as business phenomena and the management of these supply chains. The former is simply something that exists (often also referred to as distribution channels), while the latter requires management efforts by organizations within the supply chain (Mentzer et al., 2001).
The 1994 definition provided by The International Center for Competitive Excellence is adopted in this paper given its clarity and specificity: "Supply chain management is the integration of business processes from end user through original suppliers that provides products, services and information that add value for customers" (Cooper et al., 1997b, p. 2). Hence, the participation of the SCM strategy in the business strategy is essential for a successful implementation.
This strategic SCM approach considers an integrated (internal and external) behavior, sharing information among the members of the SC, sharing risks and rewards, process integration, setting the same goals, and partnership for long-term relations, all this under the cooperation and systems approach umbrella. In fact, supply chain strategies can be used to support the implementation of competitive strategies (Qi et al., 2011; Sweeney et al., 2015). Nevertheless, these conditions ought to exist within an SCM aligned with the firm's business strategy. Other factors derived from the cooperation paradigm play a fundamental role and should be considered as well. These factors, derived from the nature of SCM, are trust and commitment. Collaborative activities, such as information sharing, joint relationship effort, and dedicated investments, lead to trust and commitment, which, in turn, lead to improved satisfaction and performance (Nyaga et al., 2010).
However, this integration and collaboration between stakeholders has its own difficulties, as previously mentioned, since relationships between two parties are rarely equal; therefore, there will be issues of power balance, control, and dependency for each party to resolve or cope with. The relative position of power, and the extent to which this power extends, may influence the level of cooperation or conflict between parties (Johnsen & Lacoste, 2016) and thus affect the real spirit of collaboration, where companies find cultural conditions and common long-term objectives that lead them to work collaboratively without the presence of an asymmetric power relationship. This type of relationship, then, must migrate to a deeper concept, that of partnership, in order to mitigate the bargaining power. For that to happen, firms must lower barriers, work together to reach a common goal, put aside their individual problems and needs, and develop a team mindset that thereby mitigates asymmetries (Cowan et al., 2015).
Although power will not always prevent collaboration, it will often prevent true partnering as it affects surplus-value sharing (Chicksand, 2015).
As in marketing, where it is mostly applied to customers and products, the widely used concept of segmentation is relevant to SCM and has to be considered with special care when configuring the SCM strategy. The supply chain is more than just a chain, it is a network, where not all members behave the same way, have the same needs, or share the same business strategies or culture. Consequently, organizational culture plays a critical role in shaping the behavior of the SC through collaboration (Bag et al., 2018), and so understanding those characteristics will give rise to the construction of different segments, where different modes of action will be applied to conceive the SCM strategy, shaping different supply chain strategies in a "dynamic alignment" (Gattorna, 2006). Therefore, through segmentation, firms with similar cultural conditions are regarded as risk-takers by fostering collaboration. This results in high levels of trust among supply chain actors. Finally, it is worth mentioning that the lack of a truly collaborative culture leads to dissatisfaction and low performance, and vice-versa (Cadden et al., 2015).
The Supply Chain Manager role and profile
The strategy concept we have dealt with so far has to be managed somehow. Therefore, we need to create a body of knowledge that can be useful for practitioners. Otherwise, if we do not transmit a clear message to practitioners, we will be on the other side of the river without a bridge to connect scholars' theory with managers' needs.
Are the functions or the responsibilities of a supply chain manager clear? If, as we established, SCM is strategic, how does the supply chain manager participate in the business strategy?
For SC managers to fulfill their mission, it is important to determine their role within the organization. SCM is the management of a whole chain or network; therefore, ensuring that the manager meets or enforces the strategic targets for the SC requires a special element, that is, managing the chain outside the organization.
The skills for the management of contemporary logistics are defined from the supply chain orientation, which requires human management skills and a systems view of the business, not only technical skills related to specific functional areas (Abreu & Alcântara, 2015).
It is difficult for an organization to align its supply chain if it is not internally aligned. As discussed before, the need for a multidisciplinary approach lies in the evolution of OM, a cornerstone for the development of SCM. Nowadays, the importance of internal integration is well known; that is, different areas working together toward the company's objective, opening the watertight compartments or silos often caused by the organization itself in its search for overall efficiency as a sum of partial efficiencies, sometimes caused by the influence or power exercised by some managers for their own benefit or the benefit of the areas under their responsibility. For a correct implementation of SCM, Jespersen and Skjøtt-Larsen (2005) suggest changes in the traditional organization from a functional structure to a matrix-like structure, where functions become integrated. A process rather than a function approach directs the efforts toward meeting the customer's requirements. In this way, the overall organization revolves around these processes. It is worth mentioning that the customer focus does not always happen in companies where the silo mentality prevails (Cooper et al., 1997a). Under this paradigm, the role of the SC manager should be one of mandatory coordination, both internally and externally.
Based on the above, the supply chain manager should hold a staff position within the organization. As a consequence, he or she has to manage different skills, as the principal mission is to coordinate and obtain the necessary collaboration (internal and external) of the different areas and organizations within the SC (Arredondo & Alfaro-Tanco, 2019). This SC manager should better manage soft skills rather than hard ones, for example, communication and teamwork (Prajogo & Sohal, 2013). It is also mandatory that this person have a holistic view of the business and of the strategic role bestowed by top management, considering that, on one side, he/she is the nexus between the organization and stakeholders -those identified as partners in the coordination or cooperation (depending on the SCM engagement degree)- and, on the other side, has the fundamental role of internally aligning the decision-making processes of the various areas of the organization with the defined strategy. Hence, the degree of communication and the level of coordination inside and outside the organization play a key role in successful strategy implementation. Thereby, the role of the SC manager is similar to that of an orchestra conductor, who prepares the strategy in a previous stage, then defines the role of each member of the orchestra, and finally coordinates the execution.
Once the organization is internally aligned, it ought to open its channels to the chain. Here, managers in the chain need to be in touch with each other, helping the SC manager in the coordination of the chain.
Again, the SC manager is the enabler that makes things happen.
The way in which the SC manager achieves his/her objectives without formal power over the areas that must be coordinated is the real challenge. Hence the importance of this manager's profile. Clearly, we do not mean that a logistics manager is necessarily the one to be assigned, although the internal and external coordination that logistics positions have always required is favorable. However, the risk lies in the counter-message of an organization that relates SCM directly with logistics; the same happens when there is a hierarchical dependence between the logistics manager and the SC manager. Lambert et al. (2008) suggest that the domain of supply chain management is characterized by the following criteria: i) it needs to be cross-functional, ii) it needs to be process-oriented, and iii) it needs to include all activities for managing interactions with customers and suppliers.
Multicultural knowledge, knowledge of the general business scenario, technical knowledge in SCM, training and monitoring of work teams (including multifunctional ones), change management, conflict resolution, the breaking of functional barriers, interpersonal and communication skills, ethical awareness, and social responsibility are the main skills, competencies, and functions expected from modern SC managers (Abreu & Alcântara, 2015).
Derived from this approach to how the SCM strategy has to be presented, it is necessary to pay special attention to the incentive or compensation systems applied to managers. The direct relation between the SCM strategy and the business strategy is underlined by this important issue. The alignment between goals, incentives, and SCM begins with the internal alignment, which must ensure that incentives for the different areas are not contradictory with respect to the fulfillment of particular objectives. Once incentive programs are in line with the business as a whole, we will have reached a necessary but not sufficient condition toward SC integration. The role of the SC manager in this previous stage is to achieve the alignment of reward programs for managers in order to harmonize the SCM strategy with the business strategy. Otherwise, it will be impossible to consolidate cooperation, since managers will give priority to their particular compliance objectives even if this does not favor the SC strategy.
Main implications for Latin America
SCM in Latin America has been influenced by practices mostly applied by multinational organizations with subsidiaries in the region. The SCM body of knowledge -more developed in the US and Europe- was not always transferred, although its implementation indeed was. As a result, we have witnessed a degraded version of SCM, where logistics integration is treated as nearly an equivalent of SCM.
Conferences, seminars, and courses on SCM in Latin America have a very strong, if not entirely, logistics component. As we said, this heritage has affected not only practitioners. Thereon, Ruiz-Torres et al. present a review of Latin America-oriented SCM literature, showing that the reference to SCM and logistics is reiterative, as they suggest in the scope of SCM: "Note that supply chain management is broadly defined to include sourcing, logistics, transportation, distribution, and inventory management" (Ruiz-Torres et al., 2012, p. 21). Their work also shows how scarce and dissipated is the SCM literature focused on the Latin American region. Another example can be found in the work by Young and Esqueda (2005), who introduce a the literature review that first addresses the evolution of global SC, and then the complexity of the global SCM, skipping afterward to logistics in Latin America, completely forgetting about SCM. Once more, SCM in Q Latin America is presented as a logistics problem, although the central issue in that paper is related to the vulnerability of the SC.
Notwithstanding the aforementioned reality, we reinforce the importance of deeply developing all the aspects related to an integrated logistics within the context of proper implementation of the SCM concept. Consequently, we consider it relevant to redefine the Latin American view of SCM from a narrow concept around the improvement of logistics coordination to a broader strategic notion, where integrated logistics, among others, will surely have a key role in the SCM strategy implementation.
Conclusions
Throughout this paper, we have proposed SCM as a philosophy of management. Scholars and practitioners have presented different approaches around the SCM concept, many of them establishing a strong connection between logistics and this body of knowledge. This view of SCM negatively affects the internal and external integration of firms, although it represented a milestone for the correct development of SCM as a strategic concept, which has been emerging due to the evolution of OM.
We explored the beginnings of SCM and the evolution of the OM discipline, identifying various topics that paved the way for the birth of SCM, among them internal alignment, the importance of coordinating the different functions inside organizations, customer orientation as a strategic consideration, the importance that integrated logistics provided to OM, time compression, the evolution of purchasing, and the strategic view of operations that has shown the need to interact with other stakeholders. SCM evolved into a field of knowledge in its own right which, by its nature, is strongly related to other fields of research in management. SCM can be analyzed from different points of view; however, only a strategic approach to SCM, resulting in subsequent tactical decisions and operational implementation, will provide competitive advantages for companies. Firms may need to rethink why they integrate among themselves, as well as how integration, both internally and externally, could affect their performance. SCM goes beyond the physical integration of products or services, and its objective is to achieve superior performance through different levels, allowing more value to be delivered to final customers. It is from this insight that companies will define how to interact at different levels in order to achieve their goals (which go beyond the individual boundaries of each company) and then determine how benefits will be distributed among participants.
In this scenario, the role of SC managers is crucial, and their participation in the SCM strategy as part of the business strategy becomes instrumental, since they contribute to the correct design of reward programs for managers, align the organization through the SCM strategy, and manage the existing relationships within the SC.
Finally, we have established the scant development of SCM in Latin America. After reviewing the most important journals, we could not find specific studies in which this field has been addressed particularly for Latin America, or at least in its strategic orientation.
Managerial implications
Understanding the dynamics of SCM from the perspective of the internal organization provides tools for company managers and executives to better understand how organizations' strategies should be aligned in order to obtain enhanced results in their ways of establishing and maintaining relationships with the other members of the chain.
For companies, it is of the utmost importance to make correct interpretations regarding their SCM. This tour across the evolution of SCM helps managers to understand from what position to address the strategic aspects of the SCM, becoming a guide for companies to be more efficient by working together and defining the levels of coordination and cooperation that can be achieved with suppliers or clients.
Additionally, this work sought to broaden the understanding of the internal aspects that must be considered before and after defining the SCM strategy out of the limits of the company. In this context, the role and profile of the SCM manager will provide a guide for the correct selection and allocation of tasks, which, as we have already mentioned, include aspects of both internal alignment and external coordination.
Next Steps
Considering that the current interpretation of the SCM by companies in Latin America has not encompassed the strategic vision that we suggest should be adopted, deepening the analysis of the current situation of this field in Latin America will contribute to improving the performance that companies could obtain from the correct application of a collaborative SCM strategy. For this reason, we suggest studying some of the constructs we have presented in this paper, emphasizing the integration of the internal and external SCM, as well as the role that the SC manager should play in the coordination of both.
We consider that an important next step is to examine the degree to which companies in Latin America integrate their business strategy with that of the SCM and how, through incentive systems, managers operationalize such a strategy.
Disclosures
Authors declare no institutional or personal conflicts of interest. | 2021-08-19T19:49:56.332Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "4bd7895394c541ad6fbbdaf6b5bdd1e2555806e9",
"oa_license": "CCBYNCSA",
"oa_url": "https://revistas.unal.edu.co/index.php/innovar/article/download/95568/80402",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "041bd65e511cafa2daf7a8a1abaa2c602a28ad5b",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
219048171 | pes2o/s2orc | v3-fos-license | Marine pharmacology: an ocean to explore novel drugs
The rise in the burden of diseases and treatment failure demands the discovery of novel compounds. Exploiting natural sources, spanning both land and water, to their full potential paves the way for significant future innovations in drug discovery. The marine environment has striking functionalities in its make-up that have fascinated scientists and drawn enormous interest to the investigation of new compounds. Marine sponges, algae, tunicates, sea whips and other organisms from the marine pipeline are important sources of biologically active compounds. Recent technological advancements have further added to the domain of drug research in the isolation and evaluation of marine-derived products. To date, a significant number of compounds have been isolated. A wide range of antibacterial, anti-inflammatory, antiparasitic, neuroprotective, antiviral, anticancer, analgesic, antimicrobial and antimalarial compounds has been pursued for the control and management of diseases. These represent the marine ecosystem as a hopeful resource for the discovery of novel compounds and an ideal starting point for scaffolding additional screening of natural marine products. This review covers the history of the field, the lead development process for identifying and comprehending the basic nature of compounds as a promising initial step towards unique pharmacological design, the success of approved drugs and ongoing trials, and a brief depiction of the current status of and challenges faced in marine drug discovery.
INTRODUCTION
Marine pharmacology is a branch of the pharmaceutical sciences that centers on substances with active pharmacological properties present in marine animals and plants. 1 Oceans contain over 80% of the living flora and fauna on the planet. Marine organisms such as sponges, tunicates, soft corals, nudibranchs, sea hares, molluscs, echinoderms, bryozoans, prawns, shells, sea slugs and marine microorganisms are the sources of bioactive compounds. 2 As a defense mechanism, marine invertebrates release toxic compounds to protect themselves against the onslaught of predators, which gave scientists scope to study these as secondary metabolites. The mechanisms by which they repel encroachment interact with the same enzymes and receptors involved in human disease processes. With the advent of new and unavoidable human diseases, marine pharmacology comes into play with the innovation to explore novel drugs.
HISTORY
The benefits of marine medicine have been employed by mankind for thousands of years. Evidence of Bathini fish in the treatment of asthma, oyster powder in GIT disturbances, pearl oxide in the treatment of TB, and dried sea horses for the respiratory system and sexual function proves this.
In the 1940s, cephalosporin C, produced by the fungus Acremonium chrysogenum isolated from the Mediterranean Sea near Sardinia, initiated the cephalosporin class of antibiotics. Authentic marine drug development started with the discovery of spongothymidine and spongouridine in the 1950s from the Caribbean sponge Tethya crypta (Werner Bergmann). In 1969 the anti-cancer drug cytarabine (Ara-C), and in 1976 the antiviral drug vidarabine (Ara-A), were derived from arabinose nucleosides isolated from a shallow-water marine sponge collected off the coast of Florida.
Shimomura and two American scientists shared the 2008 Nobel Prize for the discovery and development of a jellyfish protein that later contributed to anti-cancer studies.
MARINE PHARMACOLOGY IN INDIA
Natural products are an important source of new therapeutics. Due to the development of resistance of pathogenic microorganisms to antibiotics, along with TB resistance, malaria resistance, etc., the biological diversity of the ocean offers great promise as a source of drugs for the future. Peptides transfer information and regulate life through their involvement in biological signalling mechanisms. Bioactive peptides are very significant products present in many marine species and have attracted the interest of the pharma industry worldwide, which aspires to use them in the remedy of various diseases.
India has a coastline of over 8000 km with a wide variety of marine habitats and organisms, with the potential to tap these resources in a better way and explore them for clinical ailments. In terms of new drug discovery, Indian institutes such as the National Institute of Oceanography, Goa; the Central Drug Research Institute, Lucknow; the Bose Institute, Kolkata; the Regional Research Laboratory, Bhubaneshwar; and the Central Institute of Fisheries Education, Mumbai,
contribute to the research activity from marine sources. Many other Indian institutes, universities, and pharmaceutical companies have also recognized the significance of this subject. 3
SOURCES OF MARINE DRUGS
Ocean flora and fauna -sponges, coelenterates (sea whips, sea fans and soft corals), tunicates, echinoderms (star fish, sea cucumbers etc.) and bryozoans and a wide variety of marine microorganisms.
Handling
Caution should always be taken in handling marine organisms. Proper protective equipment, such as gloves and eye protection, should always be worn. Hydroids and sponges contain highly irritating compounds which cause immediate itching in some people, e.g. Tedania ignis (the fire sponge).
Collection
Collection ranges from very simple and inexpensive to very complicated and expensive. Collection near or on shores is done via beach combing, wading or snorkeling; collection from deep water is done via dredging, an invasive technique; and for depths that cannot be reached by scuba diving, submersibles can be used. Documentation is needed: record the longitude, latitude, depth, water temperature, salinity and dates of collection, and note a careful description of the organism, such as color, odour, morphology, mucus thickness and reproductive state (a minimal sketch of such a record is given below). It is common to find worms, mollusks, copepods, brittle stars, small fish and even larger marine invertebrates along with the collected specimen.
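As an illustration only, such field documentation could be captured in a simple structured record. The field names and the Python representation below are assumptions made for the sketch, not part of any cited collection protocol.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CollectionRecord:
    """Minimal field record for a collected marine specimen (illustrative sketch)."""
    specimen_id: str
    latitude: float        # decimal degrees
    longitude: float       # decimal degrees
    depth_m: float         # collection depth in meters
    water_temp_c: float    # water temperature in degrees Celsius
    salinity_psu: float    # practical salinity units
    collected_on: date
    color: str = ""
    odour: str = ""
    morphology: str = ""
    notes: str = ""        # e.g. mucus thickness, reproductive state, associated fauna

# Hypothetical example record
record = CollectionRecord(
    specimen_id="SP-0001", latitude=12.97, longitude=74.80,
    depth_m=18.5, water_temp_c=27.3, salinity_psu=35.0,
    collected_on=date(2020, 1, 15), color="bright orange",
    notes="thin mucus layer; brittle stars attached",
)
```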
Storage
Many organisms begin to decompose quickly; therefore, they need to be either dried, extracted or frozen immediately to reduce spoilage and chemical degradation. After collection, organisms are frozen immediately at -20 °C. Some organisms are placed into an alcohol such as methanol, ethanol or isopropanol. Algae are air dried and stored at room temperature.
Anti-inflammatory
Mediterranean sponge species Spongia officinalis - in vivo study on a rat model of carrageenan-induced paw edema.
Neuroprotective
Green seaweed Ulva reticulata - neuroprotection by inhibiting acetyl- and butyrylcholinesterases, in the treatment of Alzheimer's disease.
Anti-parasitic
Extracts of Sarcotragus sp., a Tunisian sponge, tested in vitro for anti-leishmanial activity.
Analgesic
Ziconotide - the first FDA-approved marine-derived drug for the treatment of pain, extracted from the marine snail Conus magus.
Anti-microbial
Cephalosporin C, derived from the marine fungus Cephalosporium acremonium.
OCEAN AS A SOURCE OF NEW NUTRITIONAL VALUES
Proteins in fish have a positive role in reducing blood cholesterol and a protective effect in diabetic renal disease. Fish oil is an essential, rich source of omega-3 fatty acids, reduces the risk of heart disease and promotes healthy skin. Fish lipids: approximately 50% of the fatty acids in lean fish and 25% in fattier fish are PUFAs. Marine microalgal species contain high quantities of DHA (docosahexaenoic acid) and arachidonic acid, with a major proportion of PUFAs believed to result in higher levels in the brain; they are recommended as nutritional additives for infants. Squalene is an isoprenoid compound found in the liver oil of deep-sea sharks and has immense medical applications.
Marine-derived nutritional supplements, or "nutraceuticals," present a new opportunity for research in the application of MNP to human health issues. One important application of the many bioactive compounds derived from the marine environment is their use as molecular probes, molecules broadly defined as non-drug substances which can be used to probe the foundations of important biochemical events. 4
Phase III trial
Eribulin mesylate (E7389), related to halichondrin B, a polyether macrolide natural molecule extracted from marine sponges - anticancer. Soblidotin (auristatin PE): this drug is undergoing trials in clinical phases I, II, and III with different companies, who are trying to use it as a cytotoxic payload linked to specific monoclonal antibodies via customized peptides. 5 Tetrodotoxin, a very well known marine toxin and highly substituted guanidine derivative - neuropathic pain. 6
Sponges
Sponges hold anticancer, anti-inflammatory and antibiotic properties. Studies are focused on the biological activity of the toxic compounds that repel predators. Sponges are known to be a rich source of terpenoids, which have shown strong antibiotic activity. Terpenoids are active ingredients in drugs like variabilins and hydroquinone, with both analgesic and anti-inflammatory properties. 7 Manzamine alkaloid, derived from an Indonesian sponge, targets malaria, TB and leishmaniasis. The sponge can be planted and cultured in the sea or in an aquarium from cuttings of the parent. Discodermolide, from deep-water sponges of the genus Discodermia, is effective against breast and other types of cancer. Lithoplocamia lithistoides yielded a polyketide, plocabulin, which entered clinical development. In advanced preclinical trials is halichondrin B, derived from a New Zealand deep-water sponge, Lissodendoryx sp.
Algae
Several scientists have reported that sulphated polysaccharides and oligosaccharides isolated from marine algae have antiviral, immuno-inflammatory, antithrombotic, antilipidemic, and antioxidant activities. 8 Algae also possess antioxidant, anticancer, and antiviral properties.
The algal species S. platensis is hepatoprotective against carbon tetrachloride-induced liver injury, as investigated in rats. Polyphenol-rich extracts of the brown macroalga Lessonia modulate the gut microbiota in high-fat-diet- and streptozotocin-induced diabetes mellitus in rats. The ability to produce bioactive compounds, especially by green algae, is a boon to pharmaceutical research.
Tunicates
Tunicates include sea squirts; chemical metabolites with antifungal, antibacterial, cytotoxic and antimalarial properties, as well as inhibition of protein kinase C, have been noted. Didemnin B, derived from tunicates, has strong antiviral properties against DNA and RNA viruses.
Recently, didemnins have also been derived from the marine alphaproteobacteria Tistrella mobilis and T. bauzanensis. Culturing tunicates on a large scale may be possible, and the economics of doing so are attractive. As tunicates have few predators, their removal from the sea may not have profound ecological impacts. The main area of interest in screening tunicates is anticancer drugs, followed by antimalarials.
Sea whip
Pseudopterosins have anti-inflammatory and analgesic properties and inhibit the synthesis of eicosanoids in polymorphonuclear leukocytes. The extreme selectivity of these organisms' compounds towards their targets has attracted researchers.
Sea weeds
Seaweeds hold a wealth of mineral elements from the sea.
Squalamine
Squalamine, derived from the liver of the dogfish shark, is used in carcinoma and diabetic retinopathy. Geranyl phenazine diol, an anticholinesterase inhibitor, is produced by a Streptomyces isolate.
Fucoidan, derived from Fucus vesiculosus, can be a novel pharmaceutical compound in the treatment of ovarian cancer.
The marine fungus Arthrinium has been tested against prostate cancer lines in the PC-3 assay. Sea cucumber tegument extract shows antiviral activity against human rotavirus.
A taurine-rich water-soluble extract derived from Octopus vulgaris is hepatoprotective through regulation of BCL-2 signalling.
Alzheimer's disease is a neurodegenerative disorder with incurable symptoms. Current drugs show a temporary effect for a restricted time period. Some marine-derived organic compounds (for example cytarabine, trabectedin, eribulin and ziconotide) have been found effective in neurodegeneration. 9
ASSETS AND DEMERITS OF MARINE PHARMACOLOGY
The ocean represents a rich and incessant source of novel compounds with great potential as pharmaceuticals, nutritional supplements, cosmetics and enzymes, where each of these bioproducts has a strong future market value.
Lifesaving drugs are found in plenty mainly in microorganisms, algae and invertebrates. In recent years, a remarkable number of novel metabolites with potential pharmacological properties have been discovered from marine organisms.
Approximately 25,000 compounds have been screened, about 30 of them are in clinical trials and 10 of them are commercially marketed (Sharma et al, 2005). Oceans continuously provide new opportunities for the discovery of marine-derived antibiotics. Increased refinement in the tools to explore the seas has expanded the habitats and greatly improved the chances for discovery of new species and chemical compounds they produce.
The strong showing of drug discovery from marine natural products can be attributed to their detailed carbon skeletons, and human bodies accept these molecules with minimal manipulation. Furthermore, advancements in the metabolic engineering of microorganisms make marine product exploration easier than with mammalian and plant cells.
According to the American Cancer Society, the global burden of cancers is expected to grow to as many as 27 million new cases and 17.5 million cancer deaths by 2050. Current research efforts in cancer therapy are aimed at disturbing the transcription processes involved in maintenance of the oncogenic state. Clinical investigation of marine natural products has shown promising results, e.g. lurbinectedin (a trabectedin analogue) from the Caribbean sac-like sea squirt Ecteinascidia turbinata.
According to an estimate by Sawadogo et al. in 2011, the available promising anticancer compounds of marine origin can be divided into different chemical classes, the majority represented by terpenes and terpenoids (40.5%), followed by peptides (19%), macrolides (14.3%), and alkaloids (12%). Among them, 50% were regarded as anticancer agents for the first time.
Certain inhibitors of the signal transduction pathways involved in cancer development have been obtained from marine organisms, and further studies are in progress to obtain evidence of their efficacy in cancer therapy. 10 Certain fishes are rapidly gaining acceptance as promising animal models for pharmacological and toxicological screening.
The bicolor damselfish, Pomacentrus partitus, is the first animal model for one type of human cancer that involves the nervous system, neurofibromatosis type I.
Zebra fish
The zebrafish is a model for marine peptide-based drug screening. Concurrent technological advances have significantly improved the genetic tractability and the unique advantage of in vivo imaging in zebrafish, providing a means to dissect the molecular pathways underlying tumor initiation, progression and metastasis. 11 It is an important model for understanding the mechanisms of development and of diseases such as cancer. It is cheaper than mice for experimentation and grows at a faster rate. It produces hundreds of offspring, giving scientists a handful of embryos to study. The transparency of the zebrafish allows scientists to easily examine internal structures. It shares 70% of its genes with humans. It has a unique ability to repair heart muscle; scientists are working on the specific factors involved in this process to see whether it can help develop ways of repairing the human heart. Retinal angiogenesis can be induced in transgenic zebrafish, which helps in the study of diabetic retinopathy.
MARINE COMPOUNDS AS DIAGNOSTICS AND EXPERIMENTAL TOOLS
Isolated marine compounds are used not only as disease remedies but also as diagnostics and experimental tools. Pfu, an enzyme from the marine thermophile Pyrococcus furiosus, is used in PCR. GFP, green fluorescent protein, isolated from the jellyfish Aequorea victoria, is used as a biological marker for labelling cell structures in vitro and in vivo; Shimomura (Japan) and Chalfie and Tsien (USA) received the Nobel Prize for its detection and the establishment of its applications in 2008. 12 LAL, Limulus amoebocyte lysate from the crab Limulus polyphemus, is used for the detection of pyrogenic lipopolysaccharides from Gram-negative bacteria. Keyhole limpet hemocyanin (KLH) is a large, multi-subunit, oxygen-carrying metalloprotein found in the hemolymph of the giant keyhole limpet Megathura crenulata, a marine mollusk living off the coasts of California, USA. 12 Although drugs derived from land are incredibly vital to mankind, they are not sufficient to meet its needs. The ocean covers a huge area of the world, so acquiring resources is feasible, and it has high potential for future development in drug discovery. The rich diversity of marine biota, with its unique physiological adaptations to the harsh marine environment, provides a fruitful source for the discovery of lifesaving drugs.
Demerits
Despite the potential of numerous compounds to combat bacteria, viruses and diseases, marine sources are not extensively investigated because of many reasons.
Identification of a feasible clinical formulation also requires experts. The lack of taxonomic knowledge for marine species is a major demerit faced by marine natural products programs. A few species cause marine envenomation, e.g. Cnidaria, Mollusca and Chordata.
Additional expense, time and the specialized services of divers and submersibles are required to explore the marine organisms found in remote areas. The pharma industry loses value and advantage by disclosing the organism from which a compound is isolated and its structure. The techniques used with marine natural products include complex screening procedures that are time-inefficient, expensive and laborious. Identification of lead compounds sometimes poses a problem because they are present in low quantities and are difficult to isolate. For lurbinectedin (a potential anticancer agent), extracting 1 g of drug from its source requires about 1 ton of sea squirts, which implies the need for massive collection from the marine ecosystem. These compounds receive less attention even though numerous bioactive compounds have novel chemical structures and mechanisms and have never been synthesized in any rational programme for designing new drugs.
CHALLENGES ASSOCIATED WITH MARINE PHARMACOLOGY
Though 70% of the Earth's surface is covered with oceans that are a rich source of marine products, a major proportion of these areas is still inaccessible to researchers, and exploration requires good teamwork between researchers and oceanographers. The application of advanced technology in manufacturing machinery to collect samples from the depths of the sea helps reduce the manpower required.
Different environmental conditions result in the production of different metabolites from the same organism each time. Marine natural products often show symbiotic associations and are difficult to develop separately because their growth depends directly or indirectly on a host. The major drawback for in vitro studies of these products is that most of the important genes remain silent.
The conservation and protection of these resources and ecosystems is a major challenge, of which global warming is a massive and growing threat. The critical point is that, while developing drugs from marine sources, the marine environment has to be preserved without harm. To overcome the issue of regular supply, the use of natural resources should be kept under control, and the growth of marine organisms should be favoured in their natural environment by farming, which is also known as "mariculture." 13 Another option is to culture marine organisms under artificial conditions by the process called "aquaculture". The in-depth study of chemical structure focuses on the discovery of lead compounds, where genetic engineering plays a key role in a targeted approach. Synthesis or hemisynthesis must be done with reference to the compound derived from the biological source. Incorrect classification of species may compromise an entire drug discovery project. The main emphasis is given to the search for drugs for dreadful human diseases such as cancer and AIDS. Another challenge is the adequacy of marine resources for a particular compound. The pharma industry will try to invest in trials only if that particular lead compound is effective. Only if supply can be addressed in an economically and ecologically feasible fashion will marine drugs get a chance in the market. Governments should also take appropriate measures for the development of marine drugs in terms of finances, safety, etc.
CONCLUSION
Modern medicine has come a long way from the discovery of penicillin, which saved thousands of lives among those wounded in World War II, and now stands at the zenith of technological development aiding the discovery of new drugs. But the terrestrial resources we have been using are diminishing, thanks to the continual discovery of new diseases and ailments without effective therapy. An answer to this impasse lies in the use of marine flora and fauna for the manufacture of new pharmaceuticals. This branch of pharmacognosy, pioneered by Bergmann, evolved from two drugs (Ara-C, used in the treatment of ALL, and Ara-A, used in herpes infection) and has gone on to explore the unexplored realms of marine biology.
It is estimated that 70% of the world's surface is covered in water, and there lie representatives of all the phyla of species, of which 12 are exclusively marine. The cultivation of marine microbes, once considered unculturable, has lifted our hopes of sustainable resources for new drug discovery, owing to the efforts of Davidson, Fenical, Kobayashi and Okami. Leaving such a diverse resource unutilised would be a huge mistake considering the need for new drugs which can effectively treat cancer, AIDS, autoimmune disorders and the ever-increasing antibiotic-resistant disorders. The answer to all these plights lies in the marine environment, waiting for us to harness the key to successful identification and production, thereby providing for the needful community of humans.
So, the task in front of us is to improve our technology, or modify it, in a way that harvests the marine ecosystem, and thereby new medicines, in a cost-effective way. Let us not forget that the community requiring effective medicine most of the time belongs to the lower socioeconomic strata, especially in a developing country like India, and so, to reach them, the medicine should be of low cost.
The arena of marine sciences is not only a platform for the discovery of new pharmaceuticals but also an aid to identify drug pharmacodynamics at cellular level. | 2020-04-30T09:10:48.188Z | 2020-04-23T00:00:00.000 | {
"year": 2020,
"sha1": "c2a127cb170b61f541ef56780457469882863bb0",
"oa_license": null,
"oa_url": "https://www.ijbcp.com/index.php/ijbcp/article/download/4045/2875",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "68f7f8aafd345f875f1e1df99cc6e95309985831",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine",
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258775672 | pes2o/s2orc | v3-fos-license | Does Green Finance Matter in the COVID-19 Pandemic? Case Study of Coca-Cola FEMSA “Green Bond Framework”
This research focuses on and aims to analyze the importance of green finance in helping to minimize the impact of the COVID-19 pandemic, and the pandemic's impact on Coca-Cola FEMSA's (CCF's) "Green Bond Framework" (GBF) initial goals, which could have been hampered by the phenomenon. This research uses the green theory perspective, assesses CCF's GBF as a green investment initiative using green investment principles, and measures Coca-Cola's success with the Environmental, Social, and Governance (ESG) concept. This study conducted qualitative research using case study methods and analyzed secondary data in the form of books, journals, articles, news, reports, and websites. The findings of this research are that the COVID-19 phenomenon did not derail CCF's GBF initial goals and that green finance is believed to minimize the impact and occurrence of pandemics in the future. This research also finds that CCF can fulfill the ESG criteria, but not all indicators have good results, as in the social field, where Coca-Cola has a bad impact on health, and likewise in the field of governance, where Coca-Cola employees are still dominated by men compared to women. The implications of this research are to contribute a new understanding and to motivate and increase companies' awareness of the importance of CSR in the form of green finance during the COVID-19 pandemic in achieving sustainable finance and development.
Introduction
The COVID-19 pandemic is an unprecedented global phenomenon with far-reaching effects.This phenomenon has hampered the fulfillment of the United Nation Sustainable Development Goals (UN SDGs) because of the severe implications for social, national, and economic welfare and sustainability [1].As of September 20, 2021, deaths from COVID-19 in the US surpassed the 1918 Spanish flu by a total of 675,446 [2,3].According to the World Economic Outlook in October 2021, global economic growth fell 3.2% in 2020.The world is affected by a $90 trillion global economic recession that exceeded anything seen in nearly a century.By 2020, global trade is expected to have decreased by 5.3%, resulting in massive economic losses for developing countries [3].This study is important because the pandemic phenomenon has caused changes to the environment, and green finance has become one of the most important factors to deal with environmental threats such as the COVID-19 pandemic because green finance can help increase the quality and longevity of mankind [4].Therefore, Multinational Corporations (MNCs) as one of the important actors in world economy need to provide investment programs, promotions, and environmentally friendly industries around the world to face the environmental crisis [5], such as Coca-Cola Femsa's (CCF's) "Green Bond Framework" (GBF) as part of their Corporate Social Responsibility (CSR) programs.
The global crisis has made the world compete to achieve sustainable development, by making investments that are useful for the community.Green finance can be one of the most essential factors and the most effective strategy to contribute in achieving the Sustainable Development Goals (SDGs) (Sachs et al., 2019) [6], protect, and recover our society from the pandemic's crisis.Green bond initiatives are a part of green finance that can be classified as a method of allocating funds for initiative programs aimed at achieving sustainable finance and development, enhancing the quality of natural resources, the environment, and the health of living things by reducing the negative environmental effects of human activities [7].CCF released a CSR program for green financial activities in 2020.Initially, "GBF" was developed to optimize CCF's positive impact by aligning its financial strategy with achieving environmental targets in three strategic areas, namely the circular economy, water stewardship, and climate action, while also contributing to the UN SDGs 2030.However, CCF's initial goals could be compromised due to market turmoil caused by the COVID-19.After conducting a literature study related to this topic, only a few studies were discovered.This indicates insufficient research in this area or a gap in the literature.Because previous studies have focused more on the importance of green finance in a pandemic in general, but none have discussed the adaptation and its importance in the case study of CCF's GBF in the COVID-19 pandemic [1,4,6,8].This research gap has raised questions about "Does green finance matter in the COVID-19 Pandemic and will the COVID-19 phenomenon hinder CCF's GBF initial goals?".
Objectives
The purposes of this research are to analyze the importance of green finance in the COVID-19 pandemic, as well as the impact of the pandemic on the GBF's initial goals. The benefits of this research are to contribute to the discussions about the importance of CSR in the form of green finance during the COVID-19 pandemic and its alignment with the SDGs 2030, and to provide an additional reference for further research. This research is expected to be taken into consideration by governments in policy-making and in collaboration with MNCs, the United Nations, and international organizations to achieve the SDGs, and also to motivate and increase companies' awareness of green finance and CSR projects to achieve social, economic, and environmental sustainability.
Corporate Social Responsibility (CSR) and COVID-19 Pandemic
CSR is a business commitment to contribute to sustainable economic development that seeks to support and help improve sustainability in the Environmental, Social, and Governance (ESG) sector.Meanwhile, John Elkington argues that companies that show social responsibility will pay attention to the progress of society, the environment, and improve the quality of the company or profit [9].Stephen Brammer and Andrew Millington also explain that CSR can help companies minimize the adverse effects caused by a crisis [10].A company has a duty and social responsibility to help the community during difficult times or crises, such as the COVID-19 pandemic [11].The number COVID-19 cases has risen dramatically, resulting in a global reduction in all sorts of economic activity [12,13].During the stock market crisis triggered by the pandemic, and according to a sample of companies from 73 countries from January to December 2020, MNCs experienced a substantially bigger loss in company value than Domestic Companies (DCs) [14].The COVID-19 pandemic has a negative impact on human life and there are a lot of companies that contribute efforts, funding, and resources to overcome the consequences of the pandemic, as a form of CSR towards society and the environment.This leads to a better understanding of which CSR practices are most successful in helping to tackle the COVID-19 pandemic [15].
Green Finance and Green Bond
Green finance can be regarded as a form of financial investment used to create sustainable development projects and initiatives.Climate finance, industrial pollution control, water sanitation, renewable energy, sustainable infrastructure, and biodiversity protection are all examples of green finance.Including, investing in initiatives that aim to reduce or eliminate Greenhouse Gas (GHG) emissions, give access to clean water, minimize air pollution, and avoid global warming.Green finance not only improves environmental quality and stimulates social activities, but it also ensures natural resource availability, promotes sustainable financing, and supports economic growth [16][17][18].Thus, banks and other financial institutions may help the social and business sectors achieve long-term economic growth by providing green finance (Dalia and Vitality, 2021) [19].During the COVID-19 pandemic, green finance includes a variety of financial instruments such as green bonds, green securities, green loans, and green project financing that can assist affected countries in improving their environmental quality and economic development [16,18].
Since the introduction of green bonds, growth in the green financial market for issuers has been particularly substantial.Green bonds are debt securities that are used to fund projects that help the environment [20].The worldwide green bond market has grown dramatically in recent years and is now regarded as a long-term investment strategy [8].As a result, green finance, green bonds, and COVID-19 transition bonds can be regarded novel options and approaches for obtaining emergency funding for economic recovery due to this pandemic crisis [18].However, companies must take responsibility and effectively implement their programs or initiatives in order for green recovery solutions to be effective.
Theoretical Perspective
Literature study method has been carried out to analyze and further understand the theories used in related topics and found that [21] analyzed their research using the ESG concept.In contrast to [12] who uses Strength, Weakness, Opportunity, and Threat (SWOT) Analysis, while [22] uses Green Theory.This research will use the green theory.The green theory is a theory in IR that discusses environmental problems that occur in global politics [22].In the International Relations (IR) perspective, the Green Theory exists as a result of human knowledge and awareness of the importance of the environment, as well as the human-nature relationship.The purpose of this perspective is to provide an explanation of the ecological crisis that humans are facing, as well as provide solutions to dealing with this crisis [23,24].Green theory's contribution to IR can help us re-examining the relation between the state, the economy, and the environment [25].The main actors of change in this view are non-state actors who use the Green Politics view to deal with the environmental crisis that occurs.The green theory perspective understands that climate change is a direct consequence of human collective choices [25].Humans not only cause environmental damage but, humans are also capable of developing and implementing solutions for the damage.According to an ecocentric perspective, the health and well-being of living things, populations, species, and ecosystems must be protected [26].So, it can be seen from Figure 1, that green theory has three elements, the main actors of change, namely MNCs, ecocentric perspectives which argue that protecting the health and welfare of living things are important and environmental crisis such as COVID-19 caused by humans.Even so, humans and MNCs can also be actors in protecting nature, by carrying out CSR projects in the form of green finance or investment, and one of them is the "Green Bond Framework" by Coca-Cola FEMSA.
Methods
This research was conducted using a qualitative data analysis method, which means that this research does not perform statistical procedures, analyze non-numeric data, or other forms of quantification [27].The procedure for analyzing the data in this research is by collecting the data, reading it, then organizing and categorizing the data, determining the theme and a relevant theory that will be used to analyze this research, and finally, interpreting the findings or the results [28].As an approach to exploring qualitative data, this research uses a case study method on the CCF's "GBF".A case study is a research strategy to investigate a matter carefully by collecting complete information using various data collection procedures, such as going through documents and analyzing them.For the data analysis process, data is needed in the form of text or images.In this study, different data has been analyzed, to be able to have a deeper understanding of the analyzed data, be able to present the data, and make a broader interpretation of the data [28].This study was carried out because sustainability, particularly in finance and development, is one of the most important topics.Green finance also has an important role in a pandemic's recovery and CCF is one of the MNCs that was issuing a GBF, which is still rarely researched.
Data Collection
This research was conducted by collecting secondary data from books, articles, journals, documents, and reports from official websites, as well as a comprehensive literature study by reviewing, and identification of subjects that can bolster a better understanding of sustainable finance and development, CSR, green finance, green bonds, and the COVID-19 pandemic.This data was collected to find out the importance of green finance and case studies on CCF's CSR program, "GBF" and the impact of COVID-19 on the initial goals of GBF.
Green Theory Analysis and the Importance of Green Finance
The human body can be a reservoir for viruses, such as viruses found in animals.The virus can enter the human body when humans increase their interactions with animals through activities such as raising, slaughtering, trading, and the domestication of exotic animals as pets [29].Inhumane behavior toward animals can be considered as a major cause of pandemics, as happened at the Exotic Animals' Market in Wuhan, China, where the COVID-19 virus, infected humans for the first time.Environmental expert Jane Goodall said that another pandemic could reoccur in the future if humans do not improve their relationship with the environment [30][31][32].As the pandemic continues to spread globally, scientists conclude that environmental damage has become the dominant contributor to the pandemic [32].There are more than 350 global health organizations that have signed a "healthy recovery" petition addressed to the leaders of the G20, stating that environmental concerns had a devastating effect on humans, such as polluted air and dirty water that can lead to the emergence of viruses.WHO has also stated that improving the environment can increase the likelihood of preventing disease outbreaks.Jane Goodall also reckons that the environmental catastrophe has triggered a pandemic since the environment and human health are inextricably linked, and the only solution to avoid future global outbreaks is to save the planet and focus on sustainability [32].
Figure 2 shows that the COVID-19 pandemic phenomenon has caused environmental changes.This virus is caused by an environmental crisis as a consequence of human behavior.Environmental crises such as air and dirty water pollution, as well as human behavior, can affect the health of living things, such as the infection of animals because many animals are held together in small cages at the Market in Wuhan.The animals' placement can stress the animals and allow the exchange and mixing of several different body fluids.Because of the interaction between people and these animals, the virus housed in these reservoirs can spread to humans.The transmissions are reinforced even more by the handlers' long and stressful work hours, as well as scratches, scrapes, bites, and other injuries inflicted by animals on their handlers [33].The COVID-19 virus can be avoided by taking actions or initiatives carried out by humans and MNCs in improving the environment and maintaining the health of living things.All of these things are related and in line with the perspective of green theory where, crises or environmental changes that occur in the world, are a consequence of human behavior.According to an ecocentric perspective, protecting the health and wellbeing of living things are important.So, actions or initiatives by humans and non-state actors, such as MNCs are needed to be able to maintain the health of living things, prevent and minimize impacts, and provide solutions to environmental crises that occur.Like CCF's "GBF" initiative, with the aim of achieving sustainable finance and development, it can also help prevent environmental crises that could cause a pandemic phenomenon in the future.
Green Investment
Green investment, or green finance, is environmentally friendly investment centered on ESG that aims to support corporate practices that will have a positive influence on the environment [34]. The notion covers a company's investment operations focused on environmental protection, pollution reduction, carbon emissions reduction, increased usage of alternative energy, and natural resource conservation [35]. The development of green and sustainable project innovations is very important to create wider opportunities [36]. The World Bank has succeeded in issuing green bonds using blockchain technology with potential savings of 10 times the cost of issuing bonds, so that various projects and companies can issue these securities. As of August 2020, 12 of Coca-Cola's North American bottling companies had implemented the blockchain platform for internal supply chain management [37]. CCF is also working on changing its master database and payroll system to be cloud-based, to comply with market trends and the digital world [38]. Coca-Cola FEMSA, S.A.B. de C.V. (CCF), or KOF, is Coca-Cola's largest franchised bottling company, headquartered in Mexico City. Sustainability is the main foundation of CCF, and to support that, they issued the "GBF" initiative [39]. Green bonds can be issued by companies, banks, or governments in order to raise funds for climate change solutions and to finance environmentally friendly projects. [40] has established the "Green Investment Principles" as a benchmark in determining and assessing the returns of CCF's green investments in the GBF, including effective monitoring and engagement (CCF's sustainability and finance team will track and report the distribution of net proceeds from the issuance of green bonds transparently [41]) and report transparency (CCF has released a publicly accessible Green Bond Report).
Table 1 shows that CCF's GBF has met the Green Investment Principles. With the success of this green investment, it is hoped that CCF can achieve sustainable finance and development, as well as help prevent and minimize the impact of the COVID-19 phenomenon.
Environmental, Social, and Governance (ESG) Concept
ESG theorists aim to make companies more ethical and responsible. Many investors also realize that the ESG factor is important for understanding a company's goals, strategies, and quality of management [43]. ESG elements can also be linked to green theory: on the Environmental (E) side, companies are prohibited from taking actions that will have a damaging effect, such as polluting the air and groundwater, which relates to green theory because the environment is the most important thing that must be taken care of. The Social (S) side discusses gender discrimination and relations with the community, which relates to green theory because the environment can be damaged and repaired by humans. Meanwhile, the Governance (G) side discusses how to manage a company better, which relates to green theory because of the importance of the governance of an MNC as the main actor of change in improving the environment. The criteria and implementation of the ESG concept in the "GBF" CSR project have various benefits, making it possible to measure CCF's ESG impacts on society, employees, and the planet. There are some criteria that can be considered for each of the three elements of an ESG investment [44][45][46][47][48]. Among the reported results against these criteria, Coca-Cola has, for example, been recognized among LGBTQ+ workplaces for three years in a row [53].
After being analyzed based on a set of ESG criteria from each of the three elements, it can be said that CCF has had an impact and contributed positively to the environment, social and corporate governance.However, there are still some indicators that haven't had a good or significant impact, especially in the social and governance fields.Because Coca-Cola is still unable to contribute positively to public health and safety because the level of sugar contained in Coca-Cola drinks can harm humans' health.The same applies to gender equality, as there are far fewer female employees than men.With this analysis, it is hoped that Coca-Cola can correct deficiencies and increase its positive contribution so that Coca-Cola can become a company that has good CSR and achieve ESG sustainability.
Sustainable Development Goals (SDGs)
CCF has a goal in achieving sustainability, in September 2020, CCF successfully issued green bonds implemented to finance CCF's transition to low-carbon operations, minimize the company's exposure to environmental risks, and strive to achieve sustainability goals and contribute to the achievement of the UN SDGs, especially in the 12th pillar "Responsible Consumption and Production" (Coca-Cola FEMSA, 2020) [39].The SDGs targets according to (United Nations Global Compact, 2015) [54] have become a reference for CCF in making initiatives or innovations that are in line with the SDG's points.Especially on the 12th pillar, there are several possible targets that hopefully can be achieved by the CCF which also align with the three main objectives of the GBF.CCF undertook initiatives to improve energy efficiency in manufacturing, packaging and logistics with a 9.8% increase in energy efficiency from 2018 to 2020, while CCF's bottling operation electricity use 85% of clean energy [41], this initiative is in line with GBF's climate action objectives.In 2021, CCF already reduced 31% of its packaging and increased product recycling by using rPET in plastic bottles, and recycling 98% of manufacturing waste [38], this initiative is in line with GBF's circular economy goals.Then, CCF reduced its water consumption to 1.47 liters by 2021 [38], and this initiative is in line with GBF's water stewardship objectives.
Coca-Cola FEMSA's "Green Bond Framework" Adaptation on COVID-19
Based on the CCF's Integrated Report 2021 and Green Bond Report 2020, it can be concluded that the COVID-19 phenomenon did not hinder the initial goals of GBF because the results showed that CCF was still able to carry out and strive to achieve the initial goals of GBF, as well as trying to help and reduce the impact of COVID-19.From 2018 to 2021, CCF has made progress in increasing Coca-Cola's rPET packaging, increasing water efficiency, and reducing CO2 emissions [38], as noted in previous implementations of ESG and SDGs.
This research sees CCF's GBF as one of the breakthroughs in implementing greener CSR and has a good impact on the recovery from the pandemic.CCF has tried to adapt to environmental changes due to the COVID-19 phenomenon as a strategy in overcoming the current implementation of CSR in CCF so that GBF becomes more relevant, by improving performance on three of CCF's GBF main strategies, namely circular economy, water stewardship, and climate action.The COVID-19 pandemic has had an impact on recycling rates, and low oil prices have made pure plastics less expensive than recycled plastics.As a result, Coca-Cola has been working with suppliers and partners to promote the availability of recycled plastic by collecting a bottle or can for every bottle sold around the world [55].The University of Warwick's researchers determined that a circular economy can aid the pandemic's recovery and the achievement of net-zero carbon emission targets.A circular economy can help the economy adjust to crises such as the COVID-19 epidemic while also making it more sustainable and competitive [56].
During the pandemic, one of the most significant factors in preventing the coronavirus is clean water, sanitation, and hygiene (WASH), but there are still some places that lack clean water. Since 2010, Coca-Cola has aided the provision of safe drinking water to more than 10.6 million people. Coca-Cola continued its WASH initiative during the pandemic, providing personal protective equipment and hygiene access [57].
Climate action is crucial to avoid the occurrence of another pandemic in the future. To combat climate change, GHG emissions must be reduced, which can prevent the deforestation and animal migration that can increase the risk of infectious disease spread, as well as the harmful air pollutants that can weaken immune systems and cause respiratory infections (Harvard T.H. Chan, 2020). As a result, the COVID-19 pandemic can be a reminder of the link between health and the environment, as well as of the fact that environmental protection may be the only way to avoid another pandemic in the future [56].
During the COVID-19 pandemic, CCF continues to prioritize the safety and well-being of its employees and society by prioritizing health and safety, and also aims to increase the company's commitment to creating economic value [38]. Despite the changing environment due to the COVID-19 phenomenon, Coca-Cola continues to focus on its sustainability strategy and on COVID-19 precautions and recovery plans to protect short-term results while advancing the company's long-term goals. Coca-Cola has been trying to contribute to, handle and prevent the COVID-19 pandemic by focusing on CSR programs such as donating more than 3.8 million liters of drinks to medical centers and vulnerable communities, expanding hospital capacity, distributing 1.9 million medical devices, climate action, water management, and a circular economy, which have become the most important focuses of CCF's GBF.
Conclusion
This research has demonstrated that CCF can still adapt, run and strive to achieve the initial goals of the GBF by reducing water consumption, reducing GHG emissions, and increasing the use of rPET bottles, as well as trying to help with and reduce the impact of COVID-19 by assisting medical personnel, hospitals, and vulnerable communities. Then, the GBF initiatives and the importance of green finance in the COVID-19 pandemic are analyzed using a green theory perspective: green finance or investment can minimize the impact and occurrence of pandemics in the future. COVID-19 is caused by an environmental crisis, so the environment must continue to be taken care of by humans and MNCs, as the main actors of change in this perspective, by carrying out CSR projects in the form of green finance or investment, such as CCF's GBF. CCF's green finance initiative has also been measured using the Green Investment Principles, which shows that this initiative can be considered good green finance. GBF's achievement was also analyzed using the ESG concept, where not all of the ESG criteria were fulfilled, such as in the health sector, where Coke has a bad impact on human health, and in the lack of gender equality among female employees, which affects corporate governance. The achievement of the SDGs 2030 was also analyzed, where CCF's GBF may be able to achieve some of the targets of the 12th pillar. With the programs issued by CCF in the GBF initiative, CCF can be said to be relevant and to have tried to adapt to changes in the global environment due to the COVID-19 pandemic, because the CSR programs contained in the GBF are expected to help with and prevent the consequences of the COVID-19 phenomenon and upcoming pandemics. For future research, it is recommended to examine further the role of technology and blockchain in green investment.
Table 1 .
CCF's GBF Implementation on Green Investment Principles | 2023-05-19T15:08:31.873Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "36853b17ce92d3f5580e3a4c56fdc8f7ddad7b65",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2023/25/e3sconf_icobar2023_03017.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f492d4251825f997b0ba5529376fea04329b3e24",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
252300127 | pes2o/s2orc | v3-fos-license | Comparative analysis of optical transformation for image encryption
This paper first introduces the characteristics of optical transformation and explains, based on the relationship between optical transformation and cryptography, why it is suitable for image encryption; it then describes various common optical transformations in detail and analyses them comparatively.
Introduction
Images have penetrated every aspect of people's lives [1][2]. The development of the Internet [3][4] has greatly promoted the progress of image technology, but it has also brought serious security risks: criminals can intercept transmitted images and use or tamper with them. Images therefore need to be encrypted before transmission. However, the ever-increasing amount of image data [5][6] places higher demands on encryption technology.
Refregier and Javidi first proposed the optical image encryption algorithm known as double random phase encoding (DRPE) in 1995 [7]. Since then, the classical double random phase encoding technique has been extended from the Fourier transform domain to the fractional Fourier transform and Fresnel transform domains, and the optical dimension used to hide secret information has been extended from the initial phase information to amplitude or polarization information. In recent years, many scholars have applied different optical processing techniques to optical encryption algorithms and verified the feasibility of these methods experimentally. Prof. Javidi's group at the University of Connecticut, Prof. Sheridan's group at the University of Dublin, Prof. Zhang's group at the University of Birmingham, Prof. Quan's group at the National University of Singapore and Prof. Alfalou's group at ISEN in France have done a great deal of meaningful work on the design of optical image encryption algorithms.
Image encryption algorithms based on optical transforms are no longer limited to the classical double random phase encoding algorithm but have been extended to different transform domains, such as the Fourier transform [8][9], the fractional Fourier transform [10][11], the Fresnel transform [12][13], the Gyrator transform [14][15], the linear canonical transform [16] and so on. In this paper, several commonly used optical transformations are compared and analysed, and a recommendation for the choice of optical transformation under general circumstances is derived.
The rest of this paper is organized as follows. Section 2 introduces the cryptographic advantages of optical transformation and its relation to cryptography in detail. Section 3 gives a detailed introduction to and comparative analysis of common optical transformations. Finally, concluding remarks are given in the last section.
The advantage of optical transformation
Optical image encryption technology has the advantages of parallel processing, a large key space, strong robustness, fast computing speed and a high degree of design freedom. The most prominent of these are parallel processing, the large key space and the fast computing speed. Parallel processing: parallel data processing is an inherent ability of optical systems, in which every pixel of an image can be transmitted and processed simultaneously. The more complex the image and the more information it carries, the more pronounced this advantage becomes.
Large key space: the wavelength, amplitude, phase, intensity and spatial frequency of the light can all serve as keys of the system. Fast computation: a single pass of light through the system completes one encryption, so the scheme can be implemented quickly. A remaining drawback is that there are as yet no comprehensive evaluation and performance indicators for such systems. Optical transformation and cryptography have the following similarities and differences, as shown in Table 1.
The relationship between optical transformation and cryptography
Because it shares many similarities with a cryptographic system, optical transformation offers great advantages for image encryption; the differences from a cryptographic system are the points that require attention during application. DRPE encryption in the Fresnel transform domain relies on random phase plates rather than a Fourier transform lens. The optical encryption and decryption process of the Fresnel transform is shown in Fig.1. In this cryptographic system, RPM1 and RPM2 are two random phase plates placed at different distances. The wavelength of the plane wave and the diffraction distance z of the Fresnel transform can be used as additional keys.
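To make the DRPE pipeline concrete, the following is a minimal NumPy sketch of the classical Fourier-domain variant; the Fresnel-domain setup described above replaces the lens Fourier transforms with Fresnel propagation over a distance z. The function names, mask generation and image size are illustrative assumptions, not part of the original paper.

```python
import numpy as np

def drpe_encrypt(img, rpm1, rpm2):
    """Classical Fourier-domain double random phase encoding (DRPE).

    img  : 2-D float array (plaintext image, non-negative)
    rpm1 : 2-D array of phases in [0, 1) for the input-plane mask
    rpm2 : 2-D array of phases in [0, 1) for the Fourier-plane mask
    Returns the complex-valued ciphertext field.
    """
    field = img * np.exp(2j * np.pi * rpm1)      # multiply by the first random phase mask
    spectrum = np.fft.fft2(field)                # propagate to the Fourier plane (lens)
    spectrum *= np.exp(2j * np.pi * rpm2)        # multiply by the second random phase mask
    return np.fft.ifft2(spectrum)                # back-transform: noise-like ciphertext

def drpe_decrypt(cipher, rpm2):
    """Decryption: undo the Fourier-plane mask and take the modulus."""
    spectrum = np.fft.fft2(cipher)
    spectrum *= np.exp(-2j * np.pi * rpm2)       # conjugate of the second mask acts as the key
    return np.abs(np.fft.ifft2(spectrum))        # modulus removes the input-plane mask

rng = np.random.default_rng(0)
img = rng.random((64, 64))                       # stand-in plaintext image
rpm1, rpm2 = rng.random((64, 64)), rng.random((64, 64))
recovered = drpe_decrypt(drpe_encrypt(img, rpm1, rpm2), rpm2)
assert np.allclose(recovered, img)               # lossless recovery with the correct key
```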
Fresnel diffraction
The expression is as follows
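A standard form of the Fresnel diffraction integral, given here for completeness under the usual paraxial assumptions, propagates an input field $U_{0}$ over a distance $z$ at wavelength $\lambda$:

$$U(x,y;z)=\frac{e^{ikz}}{i\lambda z}\iint U_{0}(x_{0},y_{0})\exp\!\left\{\frac{i\pi}{\lambda z}\left[(x-x_{0})^{2}+(y-y_{0})^{2}\right]\right\}dx_{0}\,dy_{0},\qquad k=\frac{2\pi}{\lambda}.$$

The wavelength $\lambda$ and the propagation distance $z$ are precisely the additional keys mentioned above.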
Fractional Fourier transform
The cryptographic expression for the fractional Fourier transform can be written as shown below, where sgn(·) denotes the signum function.
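A commonly used form of the fractional Fourier transform of order $p$ (rotation angle $\alpha=p\pi/2$), given here for completeness, is

$$F^{p}\{f\}(u)=\int_{-\infty}^{\infty}K_{\alpha}(u,x)\,f(x)\,dx,\qquad K_{\alpha}(u,x)=A_{\alpha}\exp\!\left[i\pi\left(u^{2}\cot\alpha-2ux\csc\alpha+x^{2}\cot\alpha\right)\right],$$

with $A_{\alpha}=\exp\!\left(-i\pi\operatorname{sgn}(\sin\alpha)/4+i\alpha/2\right)/\sqrt{|\sin\alpha|}$, which is where the sgn(·) function mentioned above enters in this convention; the fractional order $p$ serves as an additional key.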
The optical setup for the fractional Fourier transform is shown in Fig.2.
Gyrator transform
The Gyrator transformation is realized by a combination of three generalized lenses, and its mathematical expression is shown below. The optical implementation of the DRPE encryption method based on the Gyrator transformation is shown in Fig.3. In the encryption process, two sets of lenses need to be placed behind RPM1 and RPM2, respectively; the decryption process is the reverse.
Here, λ denotes the wavelength of the plane illumination used in the optical implementation.
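The expression referred to above is, in a commonly cited form given here for completeness, with the rotation angle $\alpha$ acting as the key,

$$G^{\alpha}[f](u,v)=\frac{1}{|\sin\alpha|}\iint f(x,y)\exp\!\left[i2\pi\,\frac{(uv+xy)\cos\alpha-(xv+yu)}{\sin\alpha}\right]dx\,dy.$$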
Classification analysis and comparison
There are five types of optical transformation considered here, and five corresponding encryption systems are tested, as shown in Table 2. All five optical encryption systems can fully decrypt the original image from their own encrypted images. Comparing the encryption performance of each system, the fractional Fourier transform, Gyrator transform and linear canonical transform encryption systems perform better with respect to the correlation between adjacent pixels. In terms of resistance to noise attacks, these three systems also perform better than the Fourier transform and Fresnel transform encryption systems, and the same three transforms perform better against shear (cropping) attacks. In terms of optical implementation, the Gyrator transform and the Fresnel transform are simpler. In practical applications, not only the various encryption properties but also the ease of optical implementation must be considered; therefore, the Gyrator transform and the Fresnel transform are selected for further study in this paper. In addition to the advantages of parallel processing capability and multiple degrees of freedom, optical systems also have the following two limitations. First, the ability to resist chosen-plaintext attacks needs to be improved. In a large number of existing optical image encryption algorithms, the key stream used for diffusion is related only to the key. This means that if the key is not changed, the same key stream is used to encrypt different plaintext images, making the scheme vulnerable to chosen-plaintext attacks. For example, an attacker can construct a plaintext image composed of special pixel values (such as an all-white or all-black image) and have the encryption system encrypt it. One of the basic requirements of modern cryptography is that the cryptographic algorithm itself is completely open; with knowledge of the encryption algorithm, the attacker can recover the key stream used by comparing the ciphertext image with the plaintext image.
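The key-stream argument can be illustrated with a deliberately simplified, non-optical toy cipher (a plain XOR diffusion stage, used here purely as an illustration and not the DRPE scheme itself): when the key stream depends only on the key, the ciphertext of an all-black image is the key stream, and that key stream decrypts every other image encrypted under the same key.

```python
import numpy as np

def xor_diffusion_encrypt(img_bytes, keystream):
    """Toy diffusion cipher whose key stream depends only on the key."""
    return np.bitwise_xor(img_bytes, keystream)

rng = np.random.default_rng(42)
keystream = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # fixed key -> fixed key stream

secret = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)      # a plaintext the attacker wants
cipher_secret = xor_diffusion_encrypt(secret, keystream)

# Chosen-plaintext attack: submit an all-black (all-zero) image for encryption.
all_black = np.zeros((64, 64), dtype=np.uint8)
recovered_keystream = xor_diffusion_encrypt(all_black, keystream)  # its ciphertext IS the key stream

# The recovered key stream now decrypts every image encrypted under the same key.
assert np.array_equal(np.bitwise_xor(cipher_secret, recovered_keystream), secret)
```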
Second, optical encryption techniques suited to hyperspectral images are not yet mature. If an optical encryption scheme designed for color images is applied directly to hyperspectral image encryption, the result is either low security or low efficiency. There are usually two approaches. One is to encrypt the different channels separately: if the same encryption scheme is used, a large amount of ciphertext is generated with the same key; if different schemes are used, the encryption efficiency decreases. In the other approach, the different channels are combined into a single channel, which is then encrypted; this requires complex preprocessing and is also inefficient.
Conclusion
In practical applications, a beam can be encrypted in a single pass through a lens, which is fast and efficient, but computer simulation of optical encryption requires writing a rigorous program for verification. An efficient and secure encryption scheme is the key to guiding physical practice. How to design an optical encryption scheme based on a transform with excellent performance, and how to verify its security, is the focus of practical research.
| 2022-09-16T15:11:54.121Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "24bfb2dd516fa18ddd3466402ec5cea0ca6f37f1",
"oa_license": null,
"oa_url": "https://francis-press.com/uploads/papers/lK0PjI3zyXfTeI2LOOJPasK3rZpTwMKNsAI6l8Cu.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3aeaab8b4e8ffab56d2efdeb84205889e9600326",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
11120614 | pes2o/s2orc | v3-fos-license | Diasystematic Information in the " Big Five " : A Comparison of Print Dictionaries , CD-ROMS / DVD-ROMS and Online Dictionaries
The information provided by labels is called diasystematic information, which gives restrictions and limitations concerning the use of a lexical item. The focus of the study, the findings of which are presented in this contribution, is five British monolingual learner's dictionaries (OALD9, LDOCE6, COBUILD7, CALD4, MED2), which are often referred to as the "Big Five". The aims of the study are to compare the print edition and the electronic versions (CD-ROM/DVDROM and online dictionaries) of the same dictionary to see whether the lists of labels used in one particular dictionary coincide across versions of one and the same dictionary. Parallels are then drawn between dictionaries to determine similarities and differences in the use of labels providing different types of diasystematic information. Some of the most important findings of the study are that lists of labels differ in all three versions of one and the same dictionary and that some labels enumerated in the lists either are not used in the A–Z section at all or are used in a different form. Apart from that, some labels used in the dictionaries are so close in interpretation that the intended user will probably experience difficulty in distinguishing between them.
Introduction
As one of the information categories in the dictionary entry, labels have been present in dictionaries for a very long time.The information provided by labels is called diasystematic information, which gives restrictions and limitations concerning the use of a lexical item.A comparison of different dictionaries reveals certain similarities and differences in the inclusion and treatment of diasystematic information that will be addressed in more detail in this contribution.Since labels give information on the connotative value of lexical items, they are of particular importance to non-native speakers of a language; thus, the focus of the study is five British monolingual learner's dictionaries (MLDs), i.e., Oxford Advanced Learner's Dictionary of Current English (OALD9), Longman Dictionary of Contemporary English (LDOCE6), Collins COBUILD Advanced Dictionary of English (COBUILD7), Cambridge Advanced Learner's Dictionary (CALD4) and Macmillan English Dictionary for Advanced Learners (MED2), which are often referred to as the "Big Five".All these dictionaries are either accompanied by a CD-ROM or DVD-ROM or they have an online version (free access and/or access by a unique PIN code).This study examines whether a user of different versions of one and the same dictionary gets the same information as regards the meaning and the use of labels, or whether one version provides information, whereas the other one lacks it.Before taking a closer look at the labels used in the "Big Five", it is necessary to outline the theoretical background to diasystematic information provided in the form of labels.
Taxonomies of diasystematic information
The existing literature offers various taxonomies of diasystematic information proposed by different researchers.Mostly, they vary in degree of complexity, but there is also a great deal of overlapping.Jackson (2002: 109-115), for instance, enumerates seven types of usage labels, i.e., dialect; formality; status; effect; history; topic or field and disputed usage.Landau (2001: 217-272), however, classifies eight common kinds of usage information, i.e., currency or temporality; regional or geographic variation; technical or specialized terminology; restricted or taboo sexual and scatological usage; insult; slang; style, functional variety, or register and status or cultural level.Atkins and Rundell (2008: 182-186) distinguish nine types of linguistic labels: domains; region; dialect; register; style; time; slang and jargon; attitude and offensive terms.The most detailed classification can be found in Hausmann (1989: 651), who identifies as many as eleven types of labels.His classification was also adopted by various other scholars such as Bergenholtz and Tarp (1995: 131-134) and Svensén (2009: 326-332) and is also taken as a theoretical background in our research.Below, the classification proposed by Hausmann (1989: 651) is presented, and parallels with other classifications enumerated above are drawn: 1. diachronic information: this associates a word or one of its senses with a particular period in the history of language.This dimension includes a range of labels that can be arranged chronologically from archaic, via obsolete to contemporary words or senses and neologisms.The most common temporal labels found in contemporary dictionaries are old-fashioned, obsolete, archaic, old use or dated.This group is referred to as history by Jackson (2002), currency or temporality by Landau (2001) and as time by Atkins and Rundell (2008).
2. diatopic information: this associates a word or one of its senses with a particular regional dialect or national variety.Most British dictionaries nowadays include words or senses typically used in different varieties of English.Sometimes regional areas within a country are specified; thus, regional or dialect is used as a label.Jackson (2002) refers to this group as dialect, but says that "dialect labels refer to geographical restrictions, and we can take this to include both national varieties and regional dialects within a national variety" (ibid: 110).Landau (2001) calls diatopic information regional or geographic variation, while Atkins and Rundell (2008) divide it into two separate groups: region and dialect.
3. diaintegrative information: this associates a word or one of its senses with the dimension of integration into the native stock of words of a language.
Monolingual dictionaries usually provide information on the language of origin, mostly for words that have retained their original form (e.g., from Latin, from French, from Italian).Interestingly, diaintegrative information is not included in any of the other classifications and can also be regarded as belonging to etymology as a different information category.
4. diamedial information: this associates a word or one of its senses with a particular medium of communication.The most common labels are written and spoken.Apart from Hausmann, Landau (2001) is the only scholar who includes diamedial information in his classification of diasystematic information and refers to it as style, functional variety, or register.
5. diastratic information: this associates a word or one of its senses with a particular social group, consequently referring to sociolects, such as slang and different kinds of jargon.The most common labels are slang, vulgar and taboo.Diastratic information is included in all the taxonomies: Jackson (2002) calls it status, Landau (2001) divides it into two groups, i.e., restricted or taboo and scatological usage and slang, and Atkins and Rundell ( 2008) also have two groups for diastratic information, i.e., slang and jargon and offensive terms.
6. diaphasic information: this associates a word or one of its senses with a particular register of a language, the most common labels being formal and informal.Diaphasic information is also included in all the taxonomies: Jackson (2002) calls it formality, Landau (2001) refers to it as style, functional variety, or register, and Atkins and Rundell (2008) call it register.
7. diatextual information: this associates a word or one of its senses with a particular discourse type or genre.The most common labels are poetic and literary.Diatextual information is disregarded by Jackson (2002) but is referred to as style, functional level, or register by Landau (2001) and as style by Atkins and Rundell (2008).
8. diatechnical information: this associates a word or one of its senses with a particular subject field.In monolingual dictionaries, subject-field labels, field labels or domain labels indicate that a certain word or one of its senses belongs to technical or scientific vocabulary.The large number of sublanguages typical of different subject fields present a problem even for educated native speakers, since each subject field has its own vocabulary.Consequently, some dictionaries, especially learner's dictionaries, often use the general label technical/specialized/specialist instead of giving detailed information on specific subject fields.Diatechnical information is included in all classifications: Jackson (2002) calls it topic or field, Landau (2001) technical or specialized terminology and Atkins and Rundell (2008) refer to it as domains.
9. diafrequent information: this associates a word or one of its senses with a particular frequency of occurrence.Labels used to indicate frequency are less frequent and rare.It should be stressed that dictionaries label only less frequent items, which means that unmarked items are more common.
Hausmann's classification is the only one that includes diafrequent information; all other scholars disregard it altogether.
10. diaevaluative information: this associates a word or one of its senses with a particular attitude or evaluation or the speaker's mood.Labels used to denote diaevaluative information are appreciative, derogatory, offensive, humorous, ironic and euphemistic.Diaevaluative information is one of the categories included in all classifications studied: Jackson (2002) calls it effect, Atkins and Rundell (2008) attitude, while Landau (2001) divides it into two groups depending on the value a particular label expresses.Labels denoting positive connotation (approving, humorous) belong to style, functional variety, or register, while those expressing negative connotation (offensive, derogatory, disapproving) belong to the group referred to as insult.
11. dianormative information: this associates a word or one of its senses with a certain degree of deviation from a cultural standard.Labels expressing dianormative information are non-standard, substandard and disputed.In other words, the acceptability of items marked with one of these labels is questionable as regards linguistic correctness.Disputed usage is the term used by Jackson (2002) to describe dianormative information, while Landau (2001) refers to it as status or cultural level.Atkins and Rundell do not include this group in the classification of diasystematic information.The label figurative is used as a label in many dictionaries, but it does not fit any of the above-mentioned categories of labels, since it refers to the meaning extension of a lexical item rather than expressing any restriction on usage.In the process of figurative extension, it is possible for a word to acquire semantic features that might not have been present in the original meaning (Atkins and Rundell 2008: 289).This label typically marks examples illustrating the use of the lemma or one of its senses and can thus be said to denote secondary senses that have the status of conventional metaphors (Hanks 2006: 28).That is why it must be treated separately as a label expressing certain shades of semantic meaning as well as a certain degree of stylistic level.
The label trademark is also used quite frequently.According to Landau (2001: 218), this label provides diatechnical information, which he calls technical or specialized terminology, but such a classification of this label can be disputed, since it expresses the origin of the lexical item without any connection whatsoever with technical or specialized terminology, for example, Kleenex, Levi's, Lycra, Polaroid, Rolex, Sellotape, Skype, Tupperware, Teflon, Thermos, etc.On the other hand, lexical items marked as trademarks can also belong to terminology typical of a specific subject field, for example, AZT or Prozac (medical or pharmaceutical terms).In such cases, the user would profit from getting the information concerning the subject field rather than getting the information on the origin of the lexical item.
Against this theoretical background, the aims of this study are: (1) to present a more detailed view of current practices employed in the British monolingual learner's dictionaries under investigation, in which labels of various kinds are abundantly provided; (2) to compare the print edition and the electronic versions (CD-ROM/DVD-ROM and online dictionaries) of the same dictionary to see whether the lists of labels used in one particular dictionary coincide across versions of one and the same dictionary; (3) to draw parallels between dictionaries to determine similarities and differences in the use of labels providing different types of diasystematic information.
2. Labels in print, CD-ROM/DVD-ROM and online versions of the "Big Five"
Labels in OALD9
In the print edition of OALD9, the labels are listed on the inside front cover under the title "Labels used in the dictionary". They can be found in two columns: the first one contains twelve labels "used with words that express a particular attitude or are appropriate in a particular situation" (OALD9: inside front cover), i.e., approving, disapproving, figurative, formal, humorous, informal, ironic, literary, offensive, slang, specialist and taboo, whereas the second one includes five labels that "show other restrictions on the use of words" (ibid), i.e., dialect, old-fashioned, old use, saying and TM. A closer look at the first twelve labels reveals that they belong to different classes of diasystematic information. The labels approving, disapproving, humorous, ironic and offensive express diaevaluative information; formal and informal provide diaphasic information; literary, diatextual information; slang and taboo, diastratic information and specialist expresses diatechnical information, whereas figurative belongs to none of the categories mentioned in Section 1.1. The labels enumerated in the OALD9's second column again provide different types of diasystematic information: the labels old-fashioned and old use express diachronic information and dialect, diatopic information, whereas saying expresses the type of lexical item rather than its connotative value; it is thus not taken into account in any of the taxonomies discussed in Section 1.1. The last label listed in the second column is TM, which gives information on the origin of the lexical item (cf. also Section 1.1). Apart from that, fifteen labels expressing diatopic information are listed under "Abbreviations used in the dictionary": AustralE, BrE, CanE, EAfrE, IndE, IrishE, NAmE, NEngE, NZE, SAfrE, ScotE, SEAsianE, US, WAfrE, WelshE. As has been mentioned, the label dialect, which also belongs to the group of labels providing diatopic information, is, however, listed under "Labels used in the dictionary" and not under "Abbreviations used in the dictionary". Contrary to the CD-ROM of the previous, i.e., the 8th edition, the DVD of the 9th edition does not contain the Advanced Search option, which makes it impossible to compare the list of labels on the DVD with those given in the print and online editions. Among the labels expressing diatechnical information, the print dictionary lists only one very general label, i.e., specialist. The label specialist has been introduced into the 9th edition, while the label technical was used in previous editions. The list of labels in the online edition, however, still includes technical instead of specialist, although the label specialist is used in the A-Z section, an error that should be corrected in the online edition. A closer observation of the A-Z section, however, shows that besides the very general label specialist, numerous other subject-field labels are abundantly provided, though not listed in the front matter. For instance: antilogarithm (mathematics), polygon (geometry), antimatter (physics), melanoma (medical), anus (anatomy), antivirus (computing), meltdown, sense 2 (economics), niche, sense 2 (business), polymer (chemistry), polyphony (music), polysemous (linguistics), magnitude, sense 2 (astronomy), magnitude, sense 3 (geology), chromosome (biology), fiduciary (law). It has to be stressed that the online version "Oxford Learner's Dictionaries" is the only online dictionary among the dictionaries studied that includes information on the labels. This can be accessed via a tab entitled
"About", where the "Guide to Symbols and Labels" section can be selected and the section "Labels used in Oxford Learner's Dictionaries" provides information on the labels.Interestingly, the lists of labels as well as explanations of each individual label are identical and the examples illustrating each label are almost identical in the print dictionary and in the online dictionary.The online dictionary, however, does not include the section Abbreviations, which can be found on the inside front cover of the print dictionary, which means that this information is completely lacking in this version of OALD.Although these labels are not provided in the "Guide to Symbols and Labels", they are used in the online dictionary in exactly the same way as in the print dictionary.
Labels in LDOCE6
Similar to OALD9, LDOCE6 also lists labels on the very first page of the dictionary. Here, the labels are subdivided into three categories: Words which are used only or mainly in one region or country (BrE, AmE, AusE), Words which are used in a particular situation, or show a particular attitude (approving, disapproving, formal, informal, humorous) and Words which are used in a particular context or type of language (biblical, law, literary, medical, not polite, old-fashioned, old use, spoken, taboo, technical, trademark, written). The first category of labels is homogeneous, since all the labels express diatopic information; this cannot be claimed for the second and third categories. The second category includes labels providing diaphasic (formal, informal) and diaevaluative information (approving, disapproving, humorous). The third category is the most heterogeneous one, since the labels law, medical and technical provide diatechnical information; literary and biblical, diatextual information; not polite, diaevaluative information; old-fashioned and old use, diachronic information; spoken and written, diamedial information; and taboo, diastratic information. For the classification of trademark, see Section 1.1. The total number of labels included in LDOCE6 is twenty, which makes it the dictionary with the fewest labels among all the five dictionaries under consideration. Among the labels expressing diatechnical information, the generic label technical and two more specific labels, i.e., law and medical, are used. In the online version offered by the access code, on the other hand, all labels are listed in the Advanced Search option under Register. The labels that are listed in the print dictionary under "Words which are used in a particular situation, or show a particular attitude" and "Words which are used in a particular context or type of language" coincide with the labels under Register in the online version except for the labels approving and disapproving, the labels which were added in the print edition of LDOCE6 but were not included in the previous (i.e., the 5th) edition. The labels expressing diatopic information provided in the print edition under "Words which are used only or mainly in one region or country" cannot be found among the labels in the online version, which does not, however, imply that they are not used in this version. On the contrary, they are used in the same way as in the print dictionary. The free online dictionary seems promising at first sight, since it offers the tab "How to use the LONGMAN DICTIONARY OF CONTEMPORARY ENGLISH ONLINE". Here, the user would most certainly expect to find instructions as to the use of all features found in the dictionary, including labels. Disappointingly, the labels cannot be found, nor are they explained, but they are used in the dictionary in the same way as in the print dictionary and in the online version offered by the access code.
Labels in COBUILD7
The first thing one notices when searching for metalinguistic information in COBUILD7 is that the online dictionary accessed by a unique code provides no information on diasystematic information or any other kind of metalinguistic information.The consequence is that no comparison can be made, but at the same time, it can be claimed with a high degree of certainty that labelling is done in much the same way in both versions of the COBUILD dictionary.The labelling described in this contribution is therefore based on the print edition.
If compared to the other four MLDs, COBUILD7 is clearly the only dictionary with quite extensive front matter. In the other four dictionaries, front matter is reduced to a list of labels and/or other abbreviations used in the dictionary and a graphic presentation of the dictionary entry taken from each individual dictionary. COBUILD7, however, differs greatly in this respect. On pages xiii-xv of the front matter, the user gets precise information on Style and Usage (p. xiii-xiv) and Pragmatics (p. xiv-xv). The Style and Usage section has two subsections: Geographical labels and Style labels. The Geographical labels subsection lists two labels, i.e., Brit and Am, and provides brief explanations of them. Then follows a list of other geographical labels used to mark a lexical item typically used in other varieties of English, such as Australian, Irish, Northern English and Scottish. Interestingly, these labels are introduced by 'e.g.', implying that not all the labels used in the A-Z section are listed here. Nevertheless, all these labels provide diatopic information. The Style labels section gives twenty labels in total (business, computing, dialect, formal, humorous, informal, journalism, legal, literary, medical, military, offensive, old-fashioned, rude, spoken, technical, trademark, very offensive, very rude, written), but a close inspection of these labels shows that not all the labels listed belong to "style labels" as they are referred to in the title of this section. Among these labels, we can find business, computing, legal, medical, military and technical, which are obviously subject-field labels, i.e., labels expressing diatechnical information. The label dialect belongs to the same group as Brit and Am mentioned above; the labels formal and informal express diaphasic information; humorous, offensive and very offensive, diaevaluative information; journalism and literary, diatextual information; rude and very rude, diastratic information; spoken and written, diamedial information and old-fashioned, diachronic information. For the classification of trademark, see Section 1.1.
The Pragmatics section starts by explaining what pragmatics is and expands on how pragmatic information is included in the dictionary.The "pragmatics labels" include approval, disapproval, emphasis, feelings, formulae, politeness and vagueness, and it is obvious that some of these coincide with "style labels" as used in other dictionaries, i.e., approval with approving, disapproval with disapproving, politeness with polite.
Labels in CALD4
In the print edition of CALD4, the labels are found under the title "Style and usage labels used in the dictionary".The list includes 31 labels (abbreviation, approving, Australian English, child's word/expression, disapproving, female, figurative, formal, humorous, informal, Indian English, Irish English, legal, literary, male, Northern English, not standard, offensive, old-fashioned, old use, polite word/expression, saying, Scottish English, slang, South African English, specialized, trademark, UK, US, written abbreviation, A1, A2, B1, B2, C1, C2).The labels express different types of diasystematic information: diatopic information (Australian English, Indian English, Irish English, Northern English, Scottish English, South African English, UK, US); diaevaluative information (approving, disapproving, humorous, offensive, polite word/expression); diachronic information (old-fashioned, old use); diaphasic information (formal, informal, child's word/expression); diastratic information (slang); diatextual information (literary); diatechnical information (legal, specialized); dianormative information (not standard), and diafrequent information (A1, A2, B1, B2, C1, C2).For the classification of trademark and figurative, see Section 1.1, and for the classification of saying, see Section 2.1.Several labels stand out as they are not labels proper, meaning that they do not fall into any of the categories of labels expressing diasystematic information.Two of the labels peculiar to CALD4 are female and male, which are listed but not explained in the front matter.This means that the user cannot find any information about what they mark.It should be stressed that female and male are actually not labels proper, since they introduce the feminine or the masculine form of the lemma: for example, female lioness in sense 1 of the lemma lion.On the CD-ROM, we are faced with the opposite situation.The two labels are not enumerated in the list of labels, but if the user comes across them in the A-Z section, s/he gets a brief explanation about their meaning by simply placing the cursor over the label: for example, female is explained as 'only applies to women'.The labels abbreviation and written abbreviation should also not go unmentioned, since they indicate the type of lemma rather than functioning as labels proper and are usually treated as part-of-speech indicators.
As in OALD9, there are discrepancies between the print edition and the CD-ROM version of CALD4 in that the print edition gives two labels not included in the list of labels on the CD-ROM, i.e., female and male. An obvious discrepancy concerns the labels expressing diatopic information. In the print dictionary, this category is represented by the eight labels mentioned above, while the CD-ROM version offers two labels (British English only and American English only) and a third choice 'other regions'. The search results for British English only show that the lemmata or their senses marked with the label UK are obtained, and similarly the search for American English only yields those lexical items labelled US. As is to be expected, the possibility 'other regions' yields all other regional varieties, which are more precisely listed in the print dictionary. With respect to labels expressing diatopic information, it can be established that the A-Z section shows certain inconsistencies when compared with the lists of labels: labels not found in any of the lists are used in the A-Z section. If we compare the two lemmata "anyroad" (NORTHERN FOR anyway) and "bairn" (SCOTTISH ENGLISH OR NORTHERN ENGLISH, a child), we can see that "anyroad" is marked Northern, which is a label not found in any of the lists, while "bairn" is marked Northern English, which is a label included in the list in the print edition; besides that, it is self-explanatory; thus any explication seems to be redundant.
A "label" found on the CD-ROM but not in the print edition is short forms used to indicate the contracted forms (e.g., could've SHORT FORM OF could have).Needless to say, this is not a label, neither is it used in the plural form as it appears in the list of labels.Typographically, however, it belongs to labels, as it appears in block capitals.In the print edition, short forms is not to be found among the labels and their typography also differs from that used for labels proper.
As far as the online version of CALD is concerned, there is no list of labels with their explications, but the user learns what a label denotes by positioning the cursor over the label.In this way, the online version resembles the CD-ROM of this dictionary, and the explications in both electronic versions are the same as in the front matter of the print dictionary.
Labels in MED2
The print edition of MED2 lists the labels under three titles: -Regional labels (three labels, i.e., British, American, mainly American, are enumerated and explained; in contrast, eleven labels, i.e., Australian, Irish, Scottish, Canadian, Caribbean, East African, Indian, New Zealand, South African, Welsh, West African, are listed but no explanation is provided).
These titles are to be found on the inside front cover, as is the case in most of its competitor dictionaries.The labels formal, informal, very formal and very informal express diaphasic information; showing approval, showing disapproval, offensive, humorous and impolite, diaevaluative information; old-fashioned, diachronic information; spoken, diamedial information; literary, diatextual information, whereas diatechnical information is provided by all the labels listed under Subject labels and diatopic information by all the labels enumerated under Regional labels.In the print edition, the labels under Style and attitude labels are explained, whereas all others are considered to be obvious and self-explanatory.The same holds true of the treatment of labels on the CD-ROM accompanying the print edition, where the labels can be found under Study pages, About the dictionary, Style and attitude labels.The online version, on the other hand, lacks lists of labels as well as explanations of labels.
It is important to point out that this is the only dictionary which gives a list of subject-field labels and does not use the generic label technical/specialized/ specialist.A comparison of the lists in the print dictionary and the lists in the Super Search under the Advanced Search option on the CD-ROM shows only minor divergences.Apart from the labels listed in the print edition under Style and attitude labels, the electronic version offers the following additional labels: often humorous, journalism, mainly journalism and mainly literary.As regards labels expressing diatopic information, two labels enumerated in the print dictionary are absent from the list on the CD-ROM, i.e., East African and Welsh.Apart from that, one and the same label is given as Indian English on the CD-ROM and as Indian in the print dictionary.The variant given on the CD-ROM, i.e., Indian English, is also used in the A-Z section of the print dictionary.The subject-field labels listed in the print dictionary and on the CD-ROM mostly coincide, the only difference being that the label trademark is listed only on the CD-ROM but is lacking in the print edition.
The online dictionary does not provide any guidance whatsoever as to the labels used in the dictionary.
Labels across the dictionaries studied
In this section, we would like to make a survey of labels included in our research in relation to whether they can be found in all five dictionaries under investigation or whether they appear in one or two dictionaries, but are not used by the lexicographers of other dictionaries.It seems logical to start with labels that can be found in all five dictionaries.There are seven such labels: As far as the subject-field labels are concerned, a more extensive list can be found only in MED2 (see Section 2.5), whereas other dictionaries give just a few subject-field labels, a situation which does not facilitate an apt comparison: OALD9: specialist (the A-Z section, however, includes several subject-field labels, see Section 2.1); LDOCE6: law, medical, technical; COBUILD7: business, computing, legal, medical, military, technical; CALD4: legal, specialized.
If we compare the labels expressing diatopic information, we can see that there is a great deal of overlapping, since the labels British English, American English and Australian English are used in all five dictionaries; Irish English and Scottish English in four (OALD9, COBUILD7, CALD4, MED2); South African English (OALD9, MED2, CALD4) as well as Northern English (OALD9, COBUILD7, CALD4) appear in three dictionaries; whereas New Zealand English, West African English and Welsh English are used in two dictionaries (OALD9, MED2) and the same holds true for Canadian English and East African English (OALD9, MED2); South-East Asian English and English from the United States (OALD9), Indian English (CALD4) as well as Caribbean English (MED2) are used in only one dictionary under consideration.
Interpretation and intelligibility of labels easily confused
If we refer back to Section 3, we can see that five labels (formal, informal, literary, humorous and old-fashioned) are used to mark lexical items of various kinds in all dictionaries under consideration. Among these labels, the labels formal and literary may pose some problems for the intended dictionary users, i.e., advanced learners of English, especially if they are used together, which is often the case. This only adds to the complexity of the problem, since the connotation they mark may not be obvious to every user. The reason can also be sought in the users' mother tongue or more precisely, in the method of labelling that is familiar to the user from monolingual dictionaries written in his/her native language. In Slovene lexicography, for example, the label knjižno at least roughly corresponds to the English label formal. The front matter of the Slovar slovenskega knjižnega jezika, 2nd edition (Dictionary of Standard Slovene 2014: §133) explains that the label knjižno is assigned to 'words, senses or phraseological units used particularly in fiction or in scientific texts'. As a matter of fact, knjižno can also be used to mark the same connotation as the English label literary, which means that dictionary users familiar with a system of labelling similar to that for the native speakers of Slovene may be faced with the problem of the correct interpretation of the difference between formal and literary. A comparison of the explications of these two labels in the front matter of MLDs reveals that these are very simple and sometimes even overlap. In CALD4, for instance, the distinction between these two labels is explained as follows: formal - "used in serious or official language or when trying to impress other people"; literary - "formal and descriptive language used in literature". It is questionable whether these two explications are sufficiently clear for a foreign learner to distinguish accurately between these two labels, because literary is explained as labelling formal language. The addition of 'in literature' to the explication of literary may also puzzle the user who may not know the connotation of this restriction. Does this label mark expressions that the user is not supposed to use without sounding poetic? Do such expressions express metaphorical and metonymic transfer? Another problem that should be mentioned in connection with literary is that in some dictionaries (though not in the MLDs under investigation), there is the seemingly similar label literature, which is a domain rather than a style label. Users familiar with the label literature are likely to confuse these two labels, thus interpreting the label literary as being a domain label indicating terms from the field of literature instead of words found in literary and poetic texts. This confusion is also touched upon by Atkins and Rundell (2008: 229).
Another label used only in LDOCE6 and COBUILD7 is written. The explanation provided in the print edition of LDOCE6 says that it refers to "a word or phrase that is used only, or nearly always, in written English". If we compare this with the explanation given for the label formal, which states that it refers to "a word that is suitable for formal speech or writing, but would not normally be used in ordinary conversation", we can see that they are both explained in a similar way. Therefore, the question can be posed whether a learner of English would spot the difference between these two labels. Apart from the difference between written and formal, another pair is used in LDOCE6 and COBUILD7 as well as in MED2, which presents the same difficulty in interpretation as written and formal, namely, informal and spoken. A comparison of the description of these two labels offers no solution to this problem, since in MED2, the explanations provided are extremely close: informal - "more common in speech than in writing and not used on a formal occasion"; spoken - "used in speech rather than writing". Slightly more precise, but still not sufficiently clear are the explanations in LDOCE6: informal - "a word or phrase that is used in normal conversation, but may not be suitable for use in more formal contexts, for example in writing essays or business letters"; spoken - "a word or phrase used only, or nearly always, in conversation". The same holds true for the explanations found in COBUILD7: informal - "used mainly in informal situations, conversations, and personal letters, e.g. pep talk"; spoken - "used mainly in speech rather than in writing, e.g. pardon". Another problem concerns the interpretation of the labels old-fashioned (used in all five dictionaries) and old use (used in OALD9, LDOCE6 and CALD4). The difference is that old-fashioned marks lexical items that are not often used nowadays but are used by older people or have been used by people in the recent past, while old use implies that the lexical item marked with this label is no longer in current use. This distinction may be recognized by a native speaker of English, but it may not be so obvious to a foreign learner, who is the target user of MLDs. COBUILD7 and MED2 use only the label old-fashioned, and considering the fact that the intended user may not be able to understand the difference between the two labels, this seems a sensible decision. The labels old-fashioned and old use are definitely labels that should be used with care in dictionaries if both labels are used in one and the same dictionary.
Labels expressing opposing connotative values
Sets of labels expressing opposing connotative values (formal and informal, polite and impolite, spoken and written) are also worth considering.The labels formal and informal are used in all dictionaries, whereas polite and impolite, spoken and written are not.Impolite (or not polite as used in MED2) is a label found in LDOCE6 and MED2, but interestingly, neither of these dictionaries has the opposing label polite.Polite, on the other hand, is used in CALD4 as well as in COBUILD7 (here, the label politeness is used), but neither of them has the label impolite.This means that the entire pair polite/impolite is not used in any of the dictionaries included in our study.From the point of view of dictionary users, this may indicate inconsistency or a failure to mark the opposing value of the lexical item in question.It seems sensible to label lexical items that imply politeness as well as those that imply impoliteness.This is especially true of dictionaries for foreign learners, who need precise guidance on lexical items expressing a certain degree of politeness/impoliteness, since labelling such words is aimed at warning those unfamiliar with them that they may be either polite or impolite.Doubtlessly, pragmatic information supplied by labels is as vital as the semantic information supplied by the definition (cf.also Norri 2000: 93).Similarly, the label spoken can be found in MED2, and one would rightly expect the label written to be used in the same dictionary, but this is not the case.LDOCE6 and COBUILD7, on the other hand, also use the label spoken, but the label written can also be found.It seems just as illogical as in the case of the pair polite/impolite to find spoken without its opposing label written, which is in line with Fedorova (2004: 269).What is more, MED2, which lacks the label written, has a very precise labelling of spoken connotation, since it uses two labels to mark that a lexical item or one of its senses belongs to spoken language: spoken and mainly spoken.Obviously, the question can be posed why it is necessary to label one concept so precisely while omitting the other concept completely.
Labels expressing diatechnical information
Another interesting feature is that MED2 is the only dictionary that gives a list of subject-field labels (cf. Section 2.5). In OALD9, for example, the only subject-field label enumerated in the list of labels is the generic label specialist, but as mentioned in Section 2.1, other subject-field labels are also used in the dictionary. Here, we are faced with a problem of higher-level domain markers (technical/specialized/specialist), on the one hand, and lower-level domain markers (mathematics, physics, biology, business, phonetics, etc.), on the other. This means that some lexical items are marked with a generic label without giving further details as to the specific subject field of a particular lexical item, while others are labelled with very specific subject-field labels giving the user precise information about the subject field where the lexical item functions as a term (cf. Vrbinc and Vrbinc 2013: 449, 454). The question is firstly, what is the criterion for deciding which label to use (a generic or a specific one) and secondly, what level of precision would benefit the end user. Is it enough to simply use the label technical/specialized/specialist just to indicate that the lemma or one of its senses thus labelled belongs to terminology? Or would the user expect to find the information about the precise subject field or sub-field? These are questions that cannot be answered without carrying out a user survey, which is beyond the scope of this study. The opposite extreme is the use of labels denoting sub-fields (e.g., business vs. economy, medical vs. anatomy, mathematics vs. geometry). A general dictionary user cannot be expected to recognize the subtle differences between such closely related labels. If such labels are used, one would expect an explanation of the distinction between them, but taking account of the type of dictionary and the target audience, it can be claimed with a high degree of certainty that this is an unnecessary complication. In MLDs, one would expect that the subject-field labels would refer to fields of science only and would disregard the sub-fields. That means it is advisable to use generic subject-field labels rather than more specific subject-field labels.
Labels expressing diatopic information
The labels indicating diatopic information are numerous, which is to be expected, given that the dictionaries claim to cover the vocabulary of the entire English-speaking world. The print edition of OALD9 gives three labels that need to be discussed in more detail: NAmE (North American English), US (English from the United States) and Canadian English. Although the difference between them might be obvious to a linguist or a native speaker of English, it is questionable whether a learner draws any real distinction between these labels. The labels US and Canadian English may be understandable, but the label NAmE is most certainly not, because it can be interpreted as a label encompassing both US and Canadian English. We dare to assume that an average user of a learner's dictionary would equate the label NAmE with the label US or AmE (as used in other dictionaries); thus, the label NAmE would not do the job it intends to do. Similarly, the CD-ROM version of CALD4 offers two labels (British English only and American English only) and a third choice 'other regions'. The search results for British English only show that the lemmata or their senses marked with the label UK are obtained; similarly, the search for American English only gives those lexical items labelled US. The use of 'other regions' instead of giving a full range of labels found in the dictionary is also a feature typical of the CD-ROM accompanying the print edition of CALD4. A similar situation can be observed in MED2, which lists two variants of the same label, one in the print dictionary, i.e., Indian, and one on the CD-ROM, i.e., Indian English. In the A-Z section, however, the label Indian English is used in both versions of the dictionary. Such a discrepancy between the labels used in any of the lists (either in the front matter of the print dictionary or in the list provided within the advanced search options on the CD-ROM) and those actually used in the A-Z section should by all means be avoided. Interestingly, COBUILD7 does not list all the labels indicating the varieties of English but uses 'eg' to indicate that the list is incomplete. This policy is far from ideal, since a user cannot be supposed to know which other labels apart from those listed s/he can encounter in the A-Z section of the dictionary.
Register or style labels?
A general observation is that some MLDs list very different labels among what they call register or style labels, although they are far from being register or style labels, e.g., biblical, law, medical, technical, trademark (LDOCE6); abbreviation, female, male, saying, specialized, trademark (CALD4); business, computing, dialect, legal, medical, military (COBUILD7). This practice should be avoided and two possible solutions can be put forward. The first one is that labels should be grouped according to the connotative value they express, where the theoretical considerations proposed by metalexicographers within the framework of the classification of diasystematic labelling could be made use of. It is, however, true that considering the very specific target audience of MLDs, lexicographers try to simplify all dictionary components in order to make them more approachable and user-friendly. Taking this into consideration, all the labels used in a particular dictionary could be listed together in alphabetical order without any further division, and the title could simply be "Labels used in the dictionary".
Labels or not?
In COBUILD7, some "pragmatics labels" coincide with labels proper as used in other dictionaries, while some of them show no parallel with labels proper.Among the latter, three labels should be highlighted: emphasis, feelings and vagueness.The question is whether these convey sufficient information for the dictionary user to make full use of them.The print edition offers explanations for them, explanations which are lacking in the online dictionary.The following are the explications found in the front matter of COBUILD7 (xv): emphasis -"The label EMPHASIS indicates that you use the word or expres- The labels saying, TM/trademark, abbreviation and written abbreviation also deserve attention, since they cannot be considered labels proper; consequently, they do not fall into any of the categories for the classification of diasystematic information.These labels give information on the type of lexical item (saying, abbreviation, written abbreviation) or the origin of the lexical item (TM/trademark), but they themselves do not give any information on the connotative aspect of the lexical item they are used to mark.The question can therefore be posed why some dictionaries (print dictionaries or their CD-ROMs) list these labels under style or register labels.It is misleading for the user to find labels that cannot be regarded as style or register labels among other labels that address the connotative aspect of lexical items.Apart from that, phraseological units classified as sayings are included in the special idioms sections in CALD4.The usability of the label saying, however, remains questionable, since it is doubtful whether the user needs the information about the type of phraseological units.Besides that, it seems somehow illogical to label sayings and omit the labelling of formulae, catchphrases, collocations and other types of phraseological units.
The precise classification of phraseological units affects neither the decoding nor the encoding process; it can thus be regarded as superfluous for the learner who is the intended user of a learner's dictionary.
Too precise labelling of the same connotation
If we address the issue concerning the labels used in one dictionary only, we can see that MED2 and COBUILD7 stand out in that they both include as many as six labels that occur in no other dictionary (MED2: often humorous, very informal, journalism, mainly journalism, mainly literary and mainly spoken; COBUILD7: very offensive, very rude, emphasis, feelings, vagueness and formulae). It should, however, be emphasized that the majority of these labels appear in pairs: humorous vs. often humorous, informal vs. very informal, literary vs. mainly literary and spoken vs. mainly spoken, offensive vs. very offensive, rude vs. very rude. Mainly, which precedes journalism, literary and spoken, implies that the lexical item thus marked is chiefly used in journalistic language, in literary works or in spoken language. Very, modifying informal, expresses intensification and is close to the label slang, which is not used in this dictionary. Intensification is also expressed by very rude and very offensive in COBUILD7. Often, however, restricts the meaning of humorous, implying that the lexical item with this label is not necessarily used with humorous connotation in all contexts. Such labels, therefore, provide more detailed information on the connotative value of the lexical item in question, but do not in any way contribute to clearer labelling and easier interpretation by the intended dictionary user.
Labels included in the dictionary front matter but absent from the A-Z section
In LDOCE6, the labels approving and disapproving are listed in the print version but are lacking in the online version. This means that the Advanced search option yields no results as regards these two labels. A manual search in the print edition was rather unsuccessful, which means that we were unable to locate a lemma labelled approving or disapproving. Several lemmata, however, were found where the definition of a lemma or one of its senses said: "use this to show approval" (e.g., the lemma succinct is defined in the following way: clearly expressed in a few words - use this to show approval) or "used to show disapproval" (e.g., the lemma agitator is defined in the following way: someone who encourages people to work towards changing something in society - used to show disapproval). This means that the labels proper are not used, since labelling is integrated into the definition part. The question can therefore be raised why the dictionary front matter includes the labels even though they are most probably not used to label the (dis)approving connotative values of the lemmata or their senses.
Conclusion
Compared with print dictionaries, online dictionaries (the only exceptions being OALD and CALD) are the most deficient as regards information on diasystematic labelling as well as other metalexicographic information, a finding that cannot be viewed with much optimism. Nowadays, in the age of modern technology, one can logically expect an increasing number of users for online dictionaries. It is true that currently not everyone in the world enjoys good internet connectivity, but it is also true that access to the internet is becoming increasingly widespread, and this is expected to improve in the future. Also, the future of print dictionaries seems to be questionable (Macmillan, for example, announced in November 2012 that, in future, only online dictionaries will be available; http://www.macmillandictionaryblog.com/byeprint-dictionary), which means that the future of lexicography probably lies in online dictionaries. Consequently, users of online dictionaries need guidance as to certain aspects of dictionary use, and diasystematic information is doubtless an information category of the dictionary entry that needs to be explained in a comprehensive, clearly formulated, understandable and consistent way. Not all labels are self-explanatory; thus, it is essential for every type of dictionary, regardless of the medium, to provide its users with the necessary information. This is also in line with Norri (2000: 93), who says that "[i]n many cases greater care could be taken when explaining the method of labelling to the reader. All too often, the preface provides scant guidance in this important matter".
Another very important aspect is that all labels listed in the front matter of print dictionaries or in the CD-ROM or online versions should be used in the same form in the A-Z section. To have one form of a label in a list of labels and another form in the A-Z section is undesirable and most user-unfriendly, and should certainly be avoided by compilers of all dictionaries, regardless of the target audience. Apart from that, all labels should be followed by a detailed explanation, one that the intended user will understand. In cases where two different labels express similar connotative values, special care should be taken to explain the subtle differences as precisely as possible. The explanations should be short, concise and to the point, since a user should understand them immediately. Lexicographers should also avoid listing labels that are not used in the A-Z section (e.g., the labels approving and disapproving in LDOCE6), and should omit labels that are used only sparsely in the entire A-Z section (e.g., the label biblical, which is used only seven times in LDOCE6).
Information about the restrictions and limitations provided by diasystematic information is vital for decoding and even more so for encoding. It should therefore be tailored to the needs and skills of the intended dictionary users, so that they can use this type of information correctly and efficiently. We should bear in mind that the intended user is a foreign learner who should receive clear guidance as to the use of the lexical items s/he looks up in a dictionary. Consequently, dictionaries would benefit from a theoretical classification of labels for their presentation in the front matter or in the electronic versions. To conclude, the possible problems that deficient (treatment of) diasystematic information could cause for users should be regarded as a set of hypotheses that still have to be tested empirically, which should be a logical follow-up to this study.
Table 1 below summarizes these four classifications.
Table 1: Summary of classifications.
| 2017-09-13T10:15:45.603Z | 2015-11-20T00:00:00.000 | {
"year": 2015,
"sha1": "08b9c945184c067cd8d821c4884a4d8856eac343",
"oa_license": "CCBY",
"oa_url": "http://lexikos.journals.ac.za/pub/article/download/1306/813",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "08b9c945184c067cd8d821c4884a4d8856eac343",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
213176673 | pes2o/s2orc | v3-fos-license | Atom-Photon Spin-Exchange Collisions Mediated by Rydberg Dressing
We show that photons propagating through a Rydberg-dressed atomic ensemble can exchange their spin state with a single atom. Such a spin-exchange collision exhibits both dissipative and coherent features, depending on the interaction strength. For strong interaction, the collision dissipatively drives the system into an entangled dark state of the photon with an atom. In the weak interaction regime, the scattering coherently flips the spin of a single photon in the multi-photon input pulse, demonstrating a generic single-photon subtracting process. An analytical treatment of this process reveals a universal trade-off between efficiency and purity of the extracted photon, which applies to a wide class of single-photon subtractors. We show that such a trade-off can be optimized by adjusting the scattering rate under a novel phase-matching condition.
In this Letter, we establish a different type of atom-photon interaction in the Rydberg EIT system, with which a single photon can exchange its spin state with a single atom. It is achieved by coupling photons to an atomic ensemble that interacts with a single control atom via Rydberg dressing [32,33]. We show that under suitable conditions, the scattering dynamics can be tuned from dissipative to coherent. In the dissipative regime, the system evolves robustly into an entangled dark state of a photon and the control atom. For coherent scattering, the dynamics maps to a model of generic single-photon subtraction, whose solution reveals a universal trade-off between efficiency and purity of the subtracted single photon, and yields a phase-matching condition for optimizing its performance.
The system we study is illustrated in Fig. 1(a), where the input photon carries photonic spin (polarization) and can exchange its state with the pseudo-spin (internal state) of the control atom. This atom-photon spin-exchange interaction is mediated by an atomic ensemble, which strongly interacts with both the photon and the control atom [34][35][36]. The level structure shown in Fig. 1(b) helps to realize such an interaction. A photon propagates in an atomic ensemble via two distinct EIT processes [37,38]: the left circularly polarized (pseudo-spin up) photonic field Ê↑(r) forms a Rydberg EIT involving the ground state |g⟩, the intermediate state |e−⟩, and the Rydberg state |r⟩; while the right circularly polarized (pseudo-spin down) photonic field Ê↓(r) participates in a Λ-type EIT formed by |g⟩, |e+⟩, and another ground state |s⟩. In addition, state |s⟩ is dressed to Rydberg state |r⟩ for both the control atom and ensemble atoms.
It is shown in Ref. [33] that the above dressing scheme induces an effective spin-exchange interaction V_ex between atoms in |s⟩ and |r⟩. At low photon density, interactions between ensemble atoms are negligible, such that V_ex = Σ_i U(r_i) σ_rs σ^i_sr + H.c. just describes the spin-exchange between the control atom (σ_µν = |µ⟩⟨ν|) and each i-th atom (σ^i_µν) in the ensemble. Since most ensemble atoms are in the ground state |g⟩, V_ex actually describes the spin-exchange between the control atom and a collective excitation (spin-wave) in the atomic ensemble [Fig. 1(c)], i.e., V_ex = ∫dr U(r) σ_rs Σ†_gs(r) Σ_gr(r) + H.c., where Σ_gµ(r) denotes the spin-wave field operator for the collective excitation in state |µ⟩ [39]. With the above EIT configuration, the spin-wave field Σ_gr(r) is coupled to the photonic field Ê↑(r) to form a dark state polariton (DSP), while Σ_gs(r) is coupled to Ê↓(r) to form another DSP. In this way, V_ex maps to the exchange interaction between the control atom and the photonic field.
The exchange interaction takes the form U(r) = U_0/[1 + (|r|/R_c)⁶], where the strength U_0 = Ω²/∆ is determined by the Rabi frequency Ω and the detuning ∆ of the dressing field (Ω ≪ ∆), and the effective range is R_c = (C_6/∆)^(1/6), with C_6 the van der Waals (vdW) interaction coefficient between atoms in state |r⟩ [33]. The scheme does not require tuning near a Förster resonance and can be conveniently controlled by the dressing field. Furthermore, the dressing scheme adopted here suppresses the unwanted direct interaction (∼ Ω⁴/∆³) between input photons in mode Ê↓. These desirable features as well as other details are compared to the off-diagonal vdW interaction scheme in the Supplemental Material [39].
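For a feel of how the dressed potential behaves, the short sketch below evaluates U(r) = U_0/[1 + (|r|/R_c)⁶] with U_0 = Ω²/∆ and R_c = (C_6/∆)^(1/6); the numerical values of Ω, ∆ and C_6 are illustrative placeholders rather than parameters taken from the text.

```python
import numpy as np

# Dressed soft-core potential from the text:
#   U(r) = U0 / (1 + (r / Rc)**6),  U0 = Omega**2 / Delta,  Rc = (C6 / Delta)**(1/6)
# All numerical values below are assumed for illustration (angular frequencies in rad/s).

Omega = 2 * np.pi * 1.0e6                # dressing Rabi frequency (assumed)
Delta = 2 * np.pi * 10.0e6               # dressing detuning, Omega << Delta (assumed)
C6 = 2 * np.pi * 500e9 * (1e-6) ** 6     # vdW coefficient, rad/s * m^6 (assumed)

U0 = Omega ** 2 / Delta                  # interaction strength on the plateau
Rc = (C6 / Delta) ** (1 / 6)             # effective range of the soft core

def U(r):
    """Exchange potential (rad/s) at control-atom/photon separation r (m)."""
    return U0 / (1 + (r / Rc) ** 6)

print(f"U0/2pi = {U0 / (2 * np.pi) / 1e3:.0f} kHz, Rc = {Rc * 1e6:.1f} um")
for x in (0.0, 0.5, 1.0, 2.0):
    # flat plateau inside Rc, falls off roughly as r^-6 outside
    print(f"r = {x:.1f} Rc -> U/U0 = {U(x * Rc) / U0:.3f}")
```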
Single-photon scattering.-First, we consider the interaction between the control atom and a single photon propagating along z-direction. Neglecting the decoherence of the Rydberg state, the input/output state in the one-dimensional (1D) case can be expressed as where |0 denotes the vacuum state for photons, and |↑ a = |r , |↓ a = |s represent two internal states of the control atom. The spatiotemporal feature of the photon is described by the wavefunction E µν (z, t) = ν| a 0|Ê µ (z)|ψ(t) . The output state of the system is determined by the dynamics inside the atomic ensemble z ∈ [0, L], where the spin-wave field needs to be taken into consideration. Let P ↓↑ , S ↓↑ , P ↑↓ , and S ↑↓ describe the collective excitation in state |e + , |s , |e − , and |r , respectively [40]. Then the evolution of the wavefunction ψ(z, t) = (E ↓↑ , P ↓↑ , S ↓↑ , E ↑↓ , P ↑↓ , S ↑↓ ) T is governed by i∂ t ψ = Hψ [39] with where g p and 2γ are the collective atom-photon coupling constant and the linewidth of the |g −|e ± transition, re-spectively, Ω ↑ (Ω ↓ ) denotes the control field for the Rydberg (Λ-type) EIT, and U (z) = U 0 /[1+( z 2 + r 2 ⊥ /R c ) 6 ] is the potential. In the frequency (ω) domain, we have where the susceptibilities χ ↓ and χ ↑ come from the dressing induced diagonal interaction, while κ describes the spin-exchange coupling between states |↓ p ⊗ |↑ a and |↑ p ⊗ |↓ a . For the input state |↓ p ⊗ |↑ a , the solution to Eq. (3) can be written as E ↓↑ (L, ω) = T (ω)E ↓↑ (0, ω), and E ↑↓ (L, ω) = R(ω)E ↓↑ (0, ω). At steady state (ω = 0), we find χ µ (z, 0) = V(z)/v µ , and κ(z, where v µ = cΩ 2 µ /g 2 p (µ =↑, ↓) is the photon group velocity in the slow-light regime, and is the effective potential. In this case, the scattering coefficients For L > 4R c and r ⊥ < R c , the complex phase factor is simply given by p R c /γc denotes the effective optical depth. When the interaction strength U 0 is comparable to the EIT linewidth Ω 2 ↑ /γ or Ω 2 ↓ /γ, the ratio ξ is large. Consequently, both V(z) and φ have a large imaginary part. In this dissipative interacting regime, as OD c increases, the photon loss probability rapidly grows. However, Eq. (3) possesses an eigen state free from dissipation, such that as OD c increases further, the loss rate saturates, and the system eventually evolves into this dark state (Ω ↓ |↓ p |↑ a − Ω ↑ |↑ p |↓ a )/(Ω 2 ↓ + Ω 2 ↑ ) 1/2 with a prob- . Thus, such a dissipative spin-exchange collision can be used for robust generation of atom-photon entanglement. If the interaction strength is much smaller than the EIT linewidth, i.e., ξ 1, the effective potential V(z) ≈ U (z) is essentially real and the imaginary part of φ is largely suppressed. In this case, as OD c increases, the system undergoes a coherent oscillation between |↓ p ⊗ |↑ a and |↑ p ⊗ |↓ a [ Fig. 2(b)]. We calculate the scattering coefficients for a finite beam width w < R c [31] [see Figs. 2(a) and 2(b)], and find nice agreement with the results predicted by the 1D model.
We now focus on the coherent scattering process. When dissipations are negligible as in this case, the propagation of photons inside the atomic ensemble can be described by DSP fields [41]Ψ ↑ (z) = cos θ ↑Ê↑ (z) − sin θ ↑Σgr (z) andΨ ↓ (z) = cos θ ↓Ê↓ (z) − sin θ ↓Σgs (z) with tan θ µ = g p /Ω µ . As verified by Figs. 2(c)-2(f), for frequency components well within the EIT bandwidth, the dynamics inside the medium can be described by the following Hamiltonian whose first line denotes the photon kinetic energy, and the second (third) line represents the density (spinexchange) interaction between the photon and the atom. The single-photon scattering elucidated above can be used as a building block in quantum networks. At small OD c , the induced atom-photon entanglement can be further purified to establish quality entanglement between distant atoms [39]. Unlike the DLCZ protocol [42], the entanglement we discuss here refers to polarization bases instead of Fock space, so that photon-number resolved detectors are not required, and the system is insensitive to interferometric instabilities [8,43]. At large OD c that gives φ = π/2, the spin-exchange collision leads to a direct mapping between atomic and photonic states if Ω ↑ = Ω ↓ , i.e., |↓ p ⊗(α|↓ +β|↑ ) a ↔ (α|↓ −β|↑ ) p ⊗|↓ a , which facilitates quantum state transfer in a network.
Multi-photon scattering.-Next, we consider coherent spin-exchange collisions [governed by Eq. (5)] between the control atom and an input pulse containing n identical photons. Here, we focus on the limit of a long input pulse with a duration ∆t nR c /v µ . In this low-photondensity regime, photons rarely interact with the control atom at the same time, which allows us to obtain an analytical form for the output state based on single-photon scattering coefficients, without numerically solving the multi-photon Schrödinger equation based on Eq. (5).
Assuming the n incoming photons are in the spin-down state with a real temporal wavefunction h(t) normalized as dth 2 (t) = 1 and the control atom is initially spin-up, the input state of the system is given by (taking c = 1) where time ordering for the input photons is introduced [44]. For coherent spin-exchange collisions governed by Eq. (5), the total magnetizationσ ↑↑ + dzΨ † ↑ (z)Ψ ↑ (z) = 1 is conserved, which implies that at most one of the photons can be scattered to flip its spin state. At low photon density, photons interact with the atom one after the other, i.e., if a photon propagates through the medium without exchanging its state with the atom, the next photon still has a probability to do so; but once the exchange occurs, the remaining photons will keep their spin states. In this way, the output state is given by where |ψ in (t − τ ) corresponds to the situation in which no spin-exchange occurs, while |ψ m (t) denotes the event that the spin-exchange is between the control atom and the m-th photon in the pulse, given by with τ and τ the EIT-induced delay time for spin-up and spin-down photons in the atomic ensemble, respectively. In fact, the spin-exchange collision here can be viewed as a heralded single-photon subtractor: a single-photon is subtracted from modeÊ ↓ and added to modeÊ ↑ , conditioned on the spin-flip of the control atom. In contrast to previous schemes [45,46], the single-photon is coherently extracted from the multi-photon pulse here, so it simultaneously behaves as a single-photon source [47,48].
Since the extracted single-photon and the remaining n − 1 spin-down photons together constitute a pure state, the performance of such a single-photon subtractor can be measured by either part of the system. Tracing out n − 1 photons in modeÊ ↓ , the reduced density matrix operator for the spin-up single-photon iŝ ρ = dxdyρ(x, y)Ê † ↑ (x)|0 0|Ê ↑ (y), with the density matrix element ρ(x, y) =ρ(t − τ − x, t − τ − y) and [49] The efficiency for scattering a single-photon to spin-up state is found to be η = tr[ρ] = 1−|T | 2n , and the purity of this extracted single-photon is given by P = tr[ρ 2 ]/tr[ρ] 2 , which has an analytical expression if T = 1 − |R| 2 e iθ is real (i.e., θ = 0, π). For θ = 0, this result proves the fundamental trade-off between efficiency and purity of the single-photon subtraction observed in Ref. [24]: while the decrease of the singlephoton exchange rate |R| 2 reduces the efficiency η, it yields a larger single-photon purity P. The physical origin of this trade-off comes from entanglement between the subtracted single-photon and the remaining n − 1 photons. For a perfect exchange |R| = 1, only |ψ 1 survives in Eq. (6), so the timing for the first photon in modeÊ ↓ carries correlated information about the photon in modeÊ ↑ . This entanglement results in an impure spin-up photon with P = n/(2n − 1), exactly the case discussed in Ref. [44]. In contrast, for |R| 1 (T ≈ 1), each |ψ m in Eq. (6) is almost equally weighted, so the timing of the spin-up photon is uncorrelated with the timings of the n − 1 spin-down photons, i.e., they are not entangled. Therefore, the subtracted photon is almost pure withρ(x, y) ∼ h(x)h(y) and P ≈ 1. To verify the above analysis, we perform numerical simulations for n = 2 based on Eq. (5). As shown in Fig. 3, the existence of this trade-off is largely confirmed and good agreement with analytical predictions is observed. We note the above discussed trade-off is universal for a wide class of single photon subtractors in the literature [24,[45][46][47][48], where the arrival order of incoming identical photons is crucial to the output state. Although it prevents the implementation of a perfect single-photon subtraction with η = P = 1 for arbitrary incoming states, it remains possible to achieve high efficiency and purity simultaneously for a large input photon number. To demonstrate this, we consider the scattering of a coherent input state e −|α| 2 /2 n (α n / √ n!)|n with an optimal scattering rate |R opt | 2 that gives η = P. As shown in Fig. 4(a), both purity and efficiency approach unity as the mean photon number |α| 2 increases.
Finally, we emphasize that to achieve the optimal purity, the phase of T = |T|e^{iθ} needs to be zero, i.e., photons remaining in mode Ê↓ should acquire the same phase irrespective of whether the spin-exchange happens or not. The monotonic decrease of purity [P ≈ (1 − |T|²)/[2(1 − |T| cos θ)] for |α|² ≫ 1] with the phase mismatch θ [Fig. 4(b)] can be understood as follows: the phase (m − 1)θ imprinted on |ψ_m⟩ in Eq. (6) causes the phase distribution of the spin-up photon to be strongly correlated with the timing of the remaining photons. In the limit of |T| ≈ 1 and θ = π, the purity P ≈ 1/(2n − 1) is even worse than for a perfect exchange, although the probability distribution ρ̃(x, x) ∼ h²(x) remains unaltered. Such a phase-matching condition highlights the coherent feature of the single-photon subtraction, which cannot be captured by the Monte Carlo simulation used in Ref. [24].
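To make the trade-off and the phase-matching condition concrete, the sketch below evaluates the approximate expressions quoted in the text for a coherent input, η = 1 − exp[−|α|²(1 − |T|²)] and P ≈ (1 − |T|²)/[2(1 − |T| cos θ)] (valid for |α|² ≫ 1); the chosen mean photon number and the sampled |T| values are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Approximate efficiency and purity of the subtracted photon for a coherent input:
#   eta(|T|)       = 1 - exp(-|alpha|^2 * (1 - |T|^2))
#   P(|T|, theta) ~= (1 - |T|^2) / (2 * (1 - |T| * cos(theta)))   for |alpha|^2 >> 1
# T = |T| e^{i theta} is the single-photon transmission coefficient.

def efficiency(T_abs, alpha_sq):
    return 1.0 - np.exp(-alpha_sq * (1.0 - T_abs ** 2))

def purity(T_abs, theta):
    return (1.0 - T_abs ** 2) / (2.0 * (1.0 - T_abs * np.cos(theta)))

alpha_sq = 10.0                          # assumed mean photon number |alpha|^2
T_vals = np.linspace(0.05, 0.999, 400)

# Phase-matched case (theta = 0): efficiency falls as purity rises.
eta = efficiency(T_vals, alpha_sq)
pur = purity(T_vals, 0.0)
i_opt = np.argmin(np.abs(eta - pur))     # operating point where eta = P
print(f"theta = 0: eta = P = {eta[i_opt]:.3f} at |T| = {T_vals[i_opt]:.3f}")

# A phase mismatch leaves the efficiency unchanged but degrades the purity.
for theta in (0.0, np.pi / 2, np.pi):
    print(f"theta = {theta:.2f} rad: P(|T| = 0.95) = {purity(0.95, theta):.3f}")
```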
In conclusion, we present a scheme to engineer spin-exchange interactions between photons and a single atom, and discuss the scattering dynamics for a single-photon as well as a multi-photon input. Further studies can use some recently developed techniques [50][51][52][53] to address the interesting multi-photon scattering problem beyond the low-photon-density regime, where collective effects will come into play. The system can also be used to perform quantum logic operations, such as single-photon optical switching [54]. Besides facilitating quantum information processing, the spin-exchange collision discussed here opens a new avenue for the study of strong light-atom interactions.
We acknowledge valuable discussions with Alexey Gorshkov, Ron Belyansky, and Lin Li.
I. DERIVATION OF THE ATOM-PHOTON INTERACTION HAMILTONIAN
In this section, we derive the effective Hamiltonian that governs the atom-photon interacting dynamics. We first consider the interaction between the control atom and ensemble atoms. For the level structure shown in Fig. 1(b) where V mn = C 6 /|r m − r n | 6 (C 6 > 0) is the van der Walls (vdW) interaction between atoms in Rydberg state |r . In the large detuning regime Ω ∆ = E r − E s − ω L , the dynamics of a single Rydberg excitation can be described bŷ where δ = Ω 2 /∆ is the linear light shift, and U mn = Ω 2 V mn /[∆(∆ + V mn )] denotes the Rydberg dressing indcued interaction [1]. In the low-photon-density regime, the number of ensemble atoms excited to state |r , |s is much smaller than one, so we can neglect the interaction between ensemble atom themselves and decompose Eq. (2) intô H eff =Ĥ c +Ĥ e +Ĥ ce , whereĤ c ,Ĥ e , andĤ ce describe the Hamiltonian of the control atom, the ensemble atoms, and their mutual interactions, respectively. Dropping the superscript of the control atom and setting its location as the origin of the coordinate, we then obtainĤ c = (E r + δ)σ rr + (E s − δ)σ ss ,Ĥ e = n [(E r + δ)σ n rr + (E s − δ)σ n ss ], andĤ ce = n U (r n ) σ rrσ n ss +σ ssσ n rr + (e −ik·rnσ rsσ n sr + H.c.) . To describe collective excitations of ensemble atoms into state |r , |s , we introduce the collective spin operator Σ µν (r n ) = n∈∆(rn)σ n µν / ρ(r n )∆(r n ) for atoms located within a small volume ∆(r n ) around coarse grained r n with a density ρ(r n ). Then, the state of ensemble atoms in r n piece can be expressed as with N = ρ(r n )∆(r n ) and |G = |g 1 g 2 · · · g N . In the linear regime where the input photon number per r n cell is much smaller than the corresponding atom N , or most atoms reside in the ground state |g , i.e., j, k N . This makes collective excitations behave as bosonic quasi particles with [Σ gµ (r m ),Σ † gν (r n )] ≈ δ mn δ µν /∆(r n ), [Σ † gµ (r m ),Σ † gν (r n )] = [Σ gµ (r m ),Σ gν (r n )] = 0, andΣ µν (r n ) ≈Σ † gµ (r n )Σ gν (r n )/ ρ(r n ) (µ, ν = r, s). The Hamiltonian involving ensemble atoms can then be expressed in terms of these collective bosonic operators aŝ H ce = rn ∆(r n )U (r n ) σ rrΣ † gs (r n )Σ gs (r n ) +σ ssΣ † gr (r n )Σ gr (r n ) + e −ik·rnσ rsΣ † gs (r n )Σ gr (r n ) + H.c. .
With similar procedures, we obtain the HamiltonianĤ EIT describing the double EIT couplingŝ whereÊ µ (r) denotes the slowly varying operator for the quantized photonic field, g p is the collective atom-photon coupling constant for a uniform atomic density, and Ω µ denotes the Rabi frequency of the classical control field. We then transform the dynamics into the slowly varying and rotating frame according toÛ =Û ↓Û↑Ûc witĥ we finally arrive at the total HamiltonianĤ =Ĥ ph +Ĥ EIT +Ĥ ce witĥ where we consider the two quantized photonic fieldsÊ ↓ andÊ ↑ copropagating along z-direction, neglect the diffraction in x, y directions, and assume the momentum-matching condition k ↑ + q ↑ = k ↓ − q ↓ + k is satisfied. The following analysis of atom-photon collisional dynamics is based on Eqs. (9)-(11).
II. ANALYSIS OF SINGLE-PHOTON SCATTERING
We first consider the scattering dynamics of a single photon. The quantum state for the single control atom and a single photon (or a single spin-wave excitation) can be written as where |↑ a = |r and |↓ a = |s denote the state of the control atom. The evolution of the wavefunctions in Schrödinger picture can be determined by studying the Heisenberg equations for the field operator ∂ tΣgr (r) = −iΩ ↑Σge− (r) − iU (r)σ ssΣgr (r) − iU (r)σ srΣgs (r), The above derivation holds for scatterings in 3D, while in the main text we focus on the scattering in 1D case. For 3D case, the scattering coefficients T (ω) and R(ω) are defined as where r ⊥ = {x, y} denotes transverse coordinates, and E ↓↑ (r ⊥ , z = 0, ω) = exp[−|r ⊥ | 2 /2w − (ω∆t) 2 /2] 4∆t/c √ π/w denotes the (frequency-domain) wavefunction of a Gaussian input beam with a waist w and a temporal width ∆t.
A. Comparison with the scheme using off-diagonal vdW interactions In addition to Rydberg dressing, the spin-exchange interaction can also be induced by exploiting off-diagonal vdW interaction between two Rydberg states. As shown in Fig. 1(a), state |s now denotes another Rydberg state (e.g., |s = |n S 1/2 , J = 1/2, m J = −1/2 of 87 Rb atom). In this configuration, the Hamiltonian describing the interaction between the control atom and ensemble atoms isĤ ce = n U (r n ) [σ rrσ n ss +σ ssσ n rr + λ(σ rsσ n sr + H.c.)] [2]. Near a Förster resonance, λ ≈ 1, which gives a total Hamiltonian of the same form as Eqs. (9)- (11). In this case, the interaction potential is replaced by U (r) = U 0 /[ z 2 + r 2 ⊥ /d ⊥ ] 6 , where d ⊥ denotes the transverse separation between the control atom and the center of the input beam, and U 0 = C 6 /d 6 ⊥ denotes the maximum interaction strength. Correspondingly, the interaction-induced phase is approximately given by φ ≈ (3π/8)ξ[1 − i(21/32)ξ] × OD c , where the effective optical depth becomes OD c = g 2 p d ⊥ /γc, and ξ = U 0 /γ EIT also determines the scattering property. From the perspective of experimental realization, the scheme using off-diagonal vdW interaction is simpler than the dressing scheme discussed in the main text, as it only requires two control beams. Furthermore, since the direct off-diagonal vdW interaction is stronger than the dressing induced interaction, the separation d ⊥ can be made larger in this scheme, which reduces the crosstalk between the control atom and ensemble atoms [to be discussed in Sec. IV]. However, the performance of this scheme is found to be not as good as the dressing scheme.
First, the off-diagonal vdW interaction is relatively sensitive to the variation of the transverse separation ∆r ⊥ = r ⊥ − d ⊥ between the control atom and the input photon [ Fig. 1(b)]. This will distort the transverse profile of the photonic wavefunction and result in a reduced fidelity compared with the prediction of the 1D model [ Fig. 1(d)]. In contrast, the plateau of the dressed interaction potential makes it highly insensitive to the variation ∆r ⊥ [Fig. 1(c)], and can keep the transverse mode profile of the transmitted photon unaltered [ Fig. 1(e)].
Second, in this scheme, the strong interaction between Rydberg atoms in state |s will induce a strong dissipative nonlinearity between input photons themselves for multi-photon scattering, which increases the nonlinear loss of the system. For the dressing scheme, such an unwanted direct interaction between input photons is suppressed by a factor of (Ω/∆) 2 compared with the interaction strength between the control atom and the input photon. B. Long-distance entanglement with single-photon scatterings As discussed above, the spin-exchange collision between the control atom in state |↑ a and a single input photon in modeâ conditioned on the survival of the input photon, with a success probability p = |T | 2 + |R| 2 . We now discuss the implementation of the quantum repeater protocol using our system. First, we describe how to create elementary entanglement. As shown in Fig. 2(a), two independent spin-exchange collisions at node A and node B will produce a product state |ψ a ⊗ |ψ b . The output photons from node A and node B are then combined at a beam splitter (BS), which transforms optical modes asâ µ → (â µ + e iϕb µ )/ √ 2 and b µ → (b µ − e −iϕâ µ )/ √ 2 with µ =↑, ↓, and yields an output state where |Φ ± = (| ↑ a | ↓ b ± | ↓ a | ↑ b )/ √ 2 denotes the Bell states of atoms, |ψ dis represents the state where output photons carry the same spins and will be discarded after postselection. With additional polarization beam splitters (PBS), we can detect photons in different modes, e.g., D 1 , D 2 , D 3 , D 4 for registering photons in modeb ↓ ,b ↑ ,â ↓ ,â ↑ respectively. The click of detectors {D 1 , D 2 } or {D 3 , D 4 } then heralds the generation of maximally entangled state |Φ + , while the click of {D 1 , D 4 } or {D 2 , D 3 } heralds the Bell state |Φ − .
Once the elementary entanglement is established, we can proceed to extend it to longer distance via entanglement connections. Suppose that we have two entangled pairs |Φ + ab and |Φ + cd distributed at {A, B} and {C, D}, respectively. Then, we inject a spin-down single-photonb † ↓ |0 interacting with the atom at node B and measure the spin state of the atom. Conditioned on the survival of the input photon and the measurement result |↓ b , we can swap the atom-atom entanglement for the atom-photon entanglement, i.e., creating an entangled state with a success probability (1 + |R| 2 )/2. Similarly, we can exchange the state of the atom at node C with a single photon probabilistically, and transform the atom entangled pair |Φ + cd into the atom-photon entangled state |φ dc . Then, by using the same linear optics setup described previously [see Fig. 2(b)], we can herald that atoms at nodes A and D are entangled in Bell basis |Φ + cd or |Φ − cd , depending on the outcomes of the photon detections.
There are several advantages of our scheme compared with the popular DLCZ protocol [3]. First, the entanglement here is encoded in atomic spin basis rather than in the Fock space, such that photon-number resolved detectors are not required [4]. Second, the entangled state produced here is independent of the phase ϕ. Such an interferometric phase is always unstable over long communication time scales, and can severely limit the performance of quantum repeaters based on single photon detections [5]. Further, the quantum information at each node is encoded in the internal state of a single atom, which is much easier to manipulate and to detect compared with an atomic ensemble, and does not require the quantum storage and retrieval.
The above scheme remains workable in the presence of photon loss and imperfect detection efficiency. For a more realistic consideration, dark counts of photon detectors and decoherence of atomic state will reduce the fidelity of the final Bell state, which can be further optimized by using error-correction procedures [4].
III. ANALYSIS OF MULTI-PHOTON SCATTERING
We consider the multi-photon scattering in the coherent regime (ξ 1 and |T | 2 + |R| 2 ≈ 1). As verified in the main text, the dynamics in this case can be described by a Hermitian Hamiltonian The symmetry of this Hamiltonian and the low-photon-density assumption allows us to obtain an analytical form of the output state |ψ , as given by Eq. (6) in the main text. To analyze such a many-body entangled state, we trace out remaining photons in modeÊ ↓ to obtain a reduced density matrixρ = dxdyρ(x, y)Ê † ↑ (x)|0 0|Ê ↑ (y) for the single photon in modeÊ ↑ , where the matrix element ρ(x, y) is determined by The density matrix element ρ(x, y) for the n-photon Fock input state is given in the main text. For a coherent input state e −|α| 2 /2 n (α n / √ n!)|n , we find ρ(x, y) when x > y andρ(x, y) =ρ * (y, x) for x ≤ y. The efficiency and the purity for the extracted single photon are respectively given by η = tr[ρ] = 1 − e −|α| 2 (1−|T | 2 ) and To verify the analytical results, we perform numerical calculations for the n = 2 Fock input state. The output state can be obtained by solving equations of motion for DSP wavefunctions derived from Eq. (23), together with the boundary condition (α, λ, µ =↑, ↓) The accuracy of the approximate analytical solution depends on the effective number of photons n eff = nR c /v ↓ ∆t that can be simultaneously interacting with the control atom. As shown in Figs. 3(a) and 3(b), as n eff becomes smaller to validate the low-photon-density assumption n eff 1, the analytical predictions for purities and efficiencies show better agreement with the numerical results. In the main text, we take n eff = 0.1, with which the density matrix ρ(x, y) can be well described by the analytical solution [ Fig. 3(c)]. For a large n eff , the analytical solution is not as accurate, but can still provide a qualitative prediction of the output state as verified by Fig. 3. . 3. Purity and efficiency of the extracted photon for (a) θ = 0 (by taking φ = π/2 and Ω ↓ > Ω ↑ ) and (b) θ = π (by taking φ = π/2 and Ω ↓ < Ω ↑ ). (c) Normalized density matrix |ρ(x, y)|/η for the indicated values of T . The left and the right columns show numerical calculations (with n eff = 0.1) and analytical results, respectively.
IV. EXPERIMENTAL CONSIDERATIONS
There are several aspects that need to be carefully considered for the experimental realization of our scheme. First, the validity of the 1D treatment requires the effective range R c of the potential to be larger than the waist and smaller than the Rayleigh range of the input beam, i.e., w < R c < πw 2 /λ 0 . In Fig. 2(b) of the main text, we take w = 2 µm, R c = 12 µm, and λ 0 = 0.78 µm, which satisfies the above condition. It is worth pointing out that the deviation from this condition will not influence the mode profile of the output photon in the dissipative interacting regime at large OD c , as the entangled dark state does not depend on the detail of the interaction potential.
Second, the tail of the EIT pumping beams Ω ↑ and Ω ↓ can influence the control atom. To suppress such crosstalks, the waists of these pumping beams need to be smaller than the distance d ⊥ between the control atom and the center of the atomic ensemble. The crosstalks can be minimized by using different species of atoms for the control one and the ensemble one, where the dressing fields Ω for these two species of atoms also need to be different.
Third, the finite dressing parameter (Ω/∆) puts a limitation on the maximum photon number n max ≈ (∆/Ω) 2 . For a typical dressing parameter Ω/∆ = 0.1, the input photon number should be smaller than 100, otherwise there would on average be one photon whose spin is directly flipped without interacting with the control atom.
Finally, we discuss the necessary condition for neglecting the decay of the Rydberg state. First, the time duration for a photon inside the medium should be much smaller than the inverse decay rate 1/γ s of the Rydberg collective excitation in the atomic ensemble. Second, the time duration for a photon completely passing through the medium needs to be much smaller than the inverse decay rate 1/γ c of the Rydberg state for the control atom. These requirements lead to the conditions (i) 4γ s R c /v ↓ 1; and (ii) γ c (∆t + 4R c /v ↓ ) 1, where ∆t denotes the time duration of the input pulse, and the length of the medium is taken to be L = 4R c . After the interaction, one can transfer the Rydberg excitation of the control atom to another long-lived ground state to preserve its coherence. For realistic decay rates γ s /2π = 0.1 MHz and γ c /2π = 5 kHz in current experimental setups, the parameters used in Fig. 2(c) of the main text give 4γ s R c /v ↓ ≈ 0.66 and γ c (∆t + 4R c /v ↓ ) ≈ 0.06. Although condition (i) is not strictly satisfied for these experimental parameter choices, it only introduces certain extra photon loss, and one can make use of post-selection to explore the underlying physics of the system and some possible applications discussed in Sec. II B. | 2020-03-20T01:00:54.992Z | 2020-03-19T00:00:00.000 | {
"year": 2020,
"sha1": "4375f3e06375d05eaa7c871ac6d051e3ccbd4913",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2003.08709",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4375f3e06375d05eaa7c871ac6d051e3ccbd4913",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
256426032 | pes2o/s2orc | v3-fos-license | Current tidal power technologies and their suitability for applications in coastal and marine areas
A considerable body of research is currently being performed to quantify available tidal energy resources and to develop efficient devices with which to harness them. This work is naturally focussed on maximising power generation from the most promising sites, and a review of the literature suggests that the potential for smaller scale, local tidal power generation from shallow near-shore sites has not yet been investigated. If such generation is feasible, it could have the potential to provide sustainable electricity for coastal homes and communities as part of a distributed generation strategy, and would benefit from easier installation and maintenance, lower cabling and infrastructure requirements and reduced capital costs when compared with larger scale projects. This article reviews tidal barrages and lagoons, tidal turbines, oscillating hydrofoils and tidal kites to assess their suitability for smaller scale electricity generation in the shallower waters of coastal areas at the design stage. This is achieved by discussing the power density, scalability, durability, maintainability, economic potential and environmental impacts of each concept. The discussion suggests that tidal kites and range devices are not well suited toward small-scale shallow water applications due to depth and size requirements, respectively. Cross-flow turbines appear to be the most suitable technology, as they have high power densities and a maximum size that is not constrained by water depth. Oscillating hydrofoils would also be appropriate, provided comparable levels of efficiency can be achieved.
Nomenclature
A    Device swept area (m²)
A_b  Basin surface area (m²)
A_c  Channel cross-sectional area (m²)
B    Channel blockage ratio (dimensionless)
b    Hydrofoil blade span (m)
C_p  Turbine power coefficient (dimensionless)
d    Oscillating hydrofoil vertical motion extent (m)
E_p  Impounded water potential energy (J)
g    Acceleration due to gravity (taken as 9.81 m/s²) (m/s²)
H    Hydraulic head (m)

Introduction
The EU Renewable Energy Directive sets targets for all EU member states so that 20 % of EU energy will come from renewable sources by 2020 (European Parliament and Council 2009). Meanwhile, the UK Government has committed to legally binding targets to reduce carbon emissions by 34 % by 2020 and 80 % by 2050, as set out in the 2008 Climate Change Act (HM Government 2008). To achieve these targets, it is predicted that 30 % of UK electricity will need to be generated from renewable sources by 2020 (HM Government 2009), while the sector will need to be almost entirely carbon free by 2050 (HM Government 2011).
Of the many varieties of renewable energy, tidal power is one of the few that is almost perfectly predictable over long timescales (Denny 2009). As a marine renewable, tidal power deployments can be located in under-utilised locations (Fraenkel 2006), and so positioned out of sight and out of mind much more readily than large onshore devices, overcoming the so-called not in my backyard (NIMBY) problem that particularly affects wind power (Premalatha et al. 2014).
Since the tides are out of phase around the coast, power is likely to be available at one tidal installation while there is slack water and no tidal power available in another part of the country (Fraenkel 2002). However, energy availability during neap tides is significantly less than that during spring tides, regardless of location.
There are two methods of generating electricity from the tides:
1. tidal range devices, which utilise the difference in water level between high and low tide;
2. tidal stream devices, which utilise the energy of flowing water in tidal currents to generate electricity directly.
Range devices are the most well-known form of tidal energy, thanks in no small part to the 220 MW Rance River Tidal Power Station in Brittany, France, which opened in 1966 and has been operating at full capacity since 1968 (Lebarbier 1975). However, tidal range schemes are currently receiving less research attention from government, industry and academia than the second method of tidal stream, which is perceived to benefit from lower installation costs and reduced environmental and ecological impacts (Fairley et al. 2013). Tidal stream is often considered to be analogous to wind energy, as in both methods energy is extracted from a moving fluid. However, while the wind industry has seemingly settled on three-bladed axial-flow turbines as the energy convertors of choice [due to a general acceptance that they are more efficient at larger scales (Howell et al. 2010)], there are currently many different technologies in development in the tidal stream sector. These include a multitude of different turbine designs, as well as more unusual concepts such as oscillating hydrofoils and tidal kites. The bulk of current tidal energy research and development appears to be naturally focussed on developing larger scale schemes and devices to harness the greatest resources. As an example, Marine Current Turbines' (MCT) SeaGen device, arguably the most fully developed tidal stream turbine, is considered viable in 20-40-m-deep waters with peak spring tidal current velocities that are greater than 2.25 m/s (Denny 2009), while proposed tidal range schemes typically enclose tens or hundreds of square kilometres of water (Rourke et al. 2010a).
Many of the locations that meet these size and speed requirements, for instance Pentland Firth off the northeast coast of Scotland, are located in sparsely populated regions many kilometres from areas of peak electricity demand. A review of the literature suggests relatively little consideration has thus far been given to exploring the potential of smaller scale tidal power from more diverse locations that are closer to populated areas. For example, the 2025 UK tidal stream resource estimates provided by The Crown Estate (2012) were predicted assuming tidal turbine rotor diameters of 40 m, operating in waters up to 60 m deep (The Crown Estate 2013).
There would be benefits to developing devices that could harness any potential resources in such locations. Aside from the obvious contribution to renewable energy targets (either through directly supplying renewable electricity to the grid or by reducing demand from other resources), efficiency savings in terms of reduced transmission losses would be made by generating power much closer to where it is needed. Additionally, connecting devices to the grid would be a more straightforward endeavour; near-shore sites would reduce the length of expensive undersea cabling required to transmit power back to land-a definite benefit given that the price of installing the cable can sometimes exceed the costs of the cable itself (de Alegría et al. 2009).
Small devices would by definition be cheaper to build and install than their larger counterparts (Bryden et al. 1998), and devices in shallow near-shore waters would also be more accessible, allowing easier installation and maintenance. These benefits could make small-scale tidal technology an appealing option for coastal communities as part of a distributed generation strategy, helping to overcome the variability of other renewables and fitting in to a wider context of sustainability and efficiency.
However, there are drawbacks that will need to be overcome if such sites are to be utilised. Chief amongst these is that these sites are likely to be less resource rich than those already under consideration. For example, in shallow waters the slower flow of the tidal current boundary layer will occupy a greater portion of the water depth, reducing the resource available to tidal stream devices (Polagye et al. 2010). Additionally shallow channels, which are already highly stressed due to bottom friction, produce proportionally less power and suffer a diminishing return as new devices are added due to increasing levels of drag (Vennell 2012). Lone devices may, therefore, be more cost-effective in such areas than larger arrays. Submerged devices in shallow waters will also be closer to the photic zone and hence subjected to a greater risk of bio-fouling; being closer to shore they are also likely to more readily impact highly complex and inter-dependent coastal ecosystems. Devices in shallow waters will also pose more of a navigational hazard to commercial and recreational marine traffic and, positioned close to shore, they may also impact other water users such as swimmers, and be more visible on land, potentially leading back to issues with NIM-BYism.
The objectives of this article are to provide a general overview of the current major tidal technologies and to identify key criteria that will govern the effectiveness of a shallow water tidal energy deployment. In Sect. 3 we then discuss the potential performance of an isolated device from each discussed technology class for the case of an open coastline (i.e. where tidal currents are not naturally constricted by bathymetry) for near-shore waters of less than 10 m depth, as shown in Fig. 1. With careful consideration, the tidal resources of such areas could be utilised for the benefit of local communities that live within close proximity to them.
Tidal turbines
Tidal turbines extract energy from a moving fluid; consequently they are somewhat analogous to wind turbines. Like wind turbines, most tidal turbines feature blades with aerofoil cross sections and operate according to the principles of aerodynamic lift, since this is more efficient than utilising aerodynamic drag (Hau and von Renouard 2013). However, there are major differences between the two technologies. The most immediately obvious are physical differences between the fluids; the density of seawater is approximately 1025 kg/m 3 , compared with around 1.25 kg/m 3 for 1 atm. of air at room temperature.
Tidal currents are typically much slower than the wind, though the much greater density of water compensates for this in terms of power, allowing tidal stream devices to generate similar levels of output to wind turbines (Bahaj and Myers 2003). In contrast to wind power, there are no extreme flow speeds underwater that could potentially damage devices or force them to shut down (Blunden and Bahaj 2006); however, tidal stream devices must still be durable to withstand the greater loading forces generated by water. Further distinctions between the two technologies include differences in Reynolds number, the occurrence of cavitation and the bi-directionality of tidal currents (Batten et al. 2006); however, the most major differences are arguably the effects of blockage (Garrett and Cummins 2004) and the free surface (Whelan et al. 2009).
Tidal turbines can be broadly classified according to their design as either axial flow or cross flow, as illustrated in Fig. 2. Axial-flow turbines sweep through a circular area of water by rotating about an axis that is parallel to the flow direction. Cross-flow devices sweep through a rectangular area by rotating about an axis that is perpendicular to the flow, with water flowing across each blade twice.
Fig. 2 Tidal turbine rotor types. Adapted from Entec UK Ltd (2007)
The amount of power a turbine can extract from an unbounded fluid flow can be described mathematically using momentum theory (also known as actuator disc theory):
P = ½ C_p ρ A U³
where ρ is the fluid density and U the flow velocity. This allows for a simple calculation of turbine power density (the power produced per m² of total device area):
P/A = ½ C_p ρ U³
There is a theoretical limit to the amount of energy that can be extracted from an unbounded fluid. For a single actuator disc (e.g. an axial-flow turbine) this limit is 59.3 % (16/27) (Lanchester 1915; Betz 1966). The corresponding limit for a double actuator disc (e.g. a cross-flow turbine) has been shown to be slightly higher at 64.0 % (16/25) (Newman 1983); a general limit of 66.6 % (2/3) occurs for a large number of discs (Newman 1986). Although these theoretical limits generally hold true for wind turbines, for underwater turbines the assumption of an unbounded flow is rather unrealistic (Garrett and Cummins 2004). This is due to the proximity of the seabed and the sea surface, which act to constrain the flow (Bryden et al. 2007; Whelan et al. 2009). As a result, tidal turbines that are of significant size relative to their surroundings can exhibit C_p values that greatly exceed the Lanchester-Betz and Newman limits. As an example of this, the cross-flow turbine tested by McAdam et al. (2010) attained C_p values of up to 160 %.
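As a quick worked example of the momentum-theory expressions above, the sketch below evaluates P = ½C_pρAU³ and the power density P/A = ½C_pρU³ at the theoretical limits quoted in the text; the 2 m/s flow speed and 10 m² device area are illustrative assumptions, not values from the article.

```python
# Actuator disc (momentum theory) power and power density:
#   P = 0.5 * Cp * rho * A * U**3,   P / A = 0.5 * Cp * rho * U**3
# The flow speed and device area below are assumed for illustration only.

RHO_SEAWATER = 1025.0  # kg/m^3, as quoted in the text

def power_w(cp, area_m2, speed_ms, rho=RHO_SEAWATER):
    """Power (W) extracted by a device of swept area A in a flow of speed U."""
    return 0.5 * cp * rho * area_m2 * speed_ms ** 3

def power_density_w_m2(cp, speed_ms, rho=RHO_SEAWATER):
    """Power per square metre of device area (W/m^2)."""
    return 0.5 * cp * rho * speed_ms ** 3

limits = [("single actuator disc, Cp = 16/27", 16 / 27),
          ("double actuator disc, Cp = 16/25", 16 / 25),
          ("theoretical 100 % efficient     ", 1.0)]
for label, cp in limits:
    pd = power_density_w_m2(cp, speed_ms=2.0)    # 2 m/s unbounded flow (assumed)
    p = power_w(cp, area_m2=10.0, speed_ms=2.0)  # 10 m^2 device area (assumed)
    print(f"{label}: {pd:7.0f} W/m^2 -> {p / 1e3:5.1f} kW for a 10 m^2 device")
```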
This effect, known as blockage, is most noticeable in tidal channels where the flow is bounded not only by the seabed and surface, but also by the channel walls. As a result of this constriction, the power coefficient of an unducted axial turbine situated between two rigid surfaces (such as the walls of a tidal channel) is increased by a factor of E (Garrett and Cummins 2007):
E = (1 − B)⁻²
with the blockage ratio B defined as:
B = A / A_c
For the single small-scale device located along the open coastline that we consider in Sect. 3, the blockage ratio is likely to be small (since A_c ≫ A), meaning the efficiency increase (1 − A/A_c)⁻² → 1. As a result we do not consider blockage effects on power density here. Instead we simply note that tidal stream designs that are capable of being stretched horizontally through the water column will be able to take advantage of blockage effects, should a particular site feature regions of constrained flow (e.g. an underwater channel).
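Continuing the same illustrative assumptions, a small companion sketch shows how the blockage correction E = (1 − B)⁻² with B = A/A_c behaves, and why it tends to 1 for an isolated device on an open coastline where A_c ≫ A; the channel cross-sectional areas are made-up examples.

```python
# Blockage correction factor from Garrett and Cummins (2007):
#   E = (1 - B)**-2, with blockage ratio B = A / A_c.
# The device and channel cross-sectional areas below are illustrative assumptions.

def blockage_factor(device_area_m2, channel_area_m2):
    """Multiplier on the achievable power coefficient due to flow confinement."""
    B = device_area_m2 / channel_area_m2
    return (1.0 - B) ** -2

A = 10.0  # assumed device swept area (m^2)
for A_c in (20.0, 50.0, 200.0, 10_000.0):
    B = A / A_c
    print(f"A_c = {A_c:8.0f} m^2 (B = {B:.4f}): E = {blockage_factor(A, A_c):.3f}")
# For the open-coastline case considered here, A_c >> A, so B -> 0 and E -> 1,
# i.e. blockage effects can reasonably be neglected for a lone small device.
```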
The power densities of ideal axial and cross-flow turbines in unbounded flows of up to 2 m/s are plotted in Fig. 3, along with that of a theoretical 100 % efficient turbine. Although cross-flow devices have a slightly higher ideal power density than axial turbines due to their slightly higher theoretical efficiency, practical device efficiency is heavily influenced by turbine design and performance. This is governed by a variety of factors such as rotor solidity (Consul et al. 2009), blade profile (Kadlec 1978) and Reynolds number (Roh and Kang 2013), blade pitch (Myers and Bahaj 2006;Kirke and Lazauskas 2008), blade shape (Baker 1983) and blade surface roughness (Howell et al. 2010;Walker et al. 2014).
Axial-flow turbines
Axial turbines currently appear to be the most popular tidal stream design; at the time of writing the European Marine Energy Centre (EMEC) lists 45 horizontal-axis turbines in commercial development. Of these, 15 are cross-flow turbines (since their axis of rotation is perpendicular to the flow) operating in a horizontal configuration; the remaining 30 are true axial-flow turbines (with an axis of rotation parallel to the flow direction) (European Marine Energy Centre 2014a).
The MCT SeaGen device, which has been undergoing testing in Strangford Lough, Northern Ireland, since 2008 (Bahaj 2011), is the first commercial-scale tidal turbine to generate electricity for the grid outside of a test centre. The device has a reported design life of 20 years (Douglas et al. 2008). A final version with 20-m 3-bladed rotors and a generating capacity of 2 MW is currently in development.
The performance characteristics of the 3-bladed MCT device (MCT Ltd 2014) and four axial turbines: Atlantis Resource Corporation's AR1000 (European Marine Energy Centre 2014b); Bourne Energy's RiverStar (Bourne Energy); [...] These figures provide a possible indication of the potential of commercial axial-flow turbines; however, it is difficult to accurately verify performance data quoted by developers. Available experimental data on academically developed axial-flow turbines are presented in Table 2.
Comparing these tables shows that the commercially developed tested turbines appear to achieve similar levels of efficiency to those tested in the literature, despite the large differences in scale. The dependence of power output on flow speed and device size is also apparent.
Although tidal turbines are not predicted to have significant impacts on water levels, they have been predicted to noticeably impact water quality by reducing both upstream and downstream current velocities while increasing those along the side of an array. This has implications for sediment transport, with a reduction in suspended sediment concentrations occurring upstream and downstream of an array while increasing significantly along its sides. In the region immediately around a tidal stream turbine the flow will be accelerated, which could lead to scouring of the seabed around the structure (Shields et al. 2011). Other models have also indicated that the extraction of energy from a tidal system will impact the sediment dynamics of a region, depending on the tidal asymmetry of the region (Neill et al. 2009). This can affect erosion and deposition a considerable distance from the point of energy extraction, and reduce the overall magnitude of bed-level change in comparison with non-extraction cases.
Marine energy devices such as tidal turbines can have many other impacts on their surrounding environment (Gill 2005). These include alteration of habitats for benthic organisms, noise pollution, the generation of electromagnetic fields and the striking of marine animals with rotor blades or other moving parts.
Cross-flow turbines
Cross-flow turbines (CFTs) rotate about an axis that is perpendicular to the flow direction. This axis can be positioned in either the vertical or horizontal plane relative to the flow, resulting in vertical-axis CFTs or horizontal-axis CFTs (also known as transverse horizontal-axis turbines (THATs)), as illustrated in Fig. 4.
Although less popular with wind developers, cross-flow turbines are not without their advantages. These are discussed by Eriksson et al. (2008) for vertical-axis wind turbines, and include omni-directionality, no need for pitch regulation or a gearbox, fewer moving parts, lower maintenance requirements, quieter operation and better performance in severe wind climates.
It is reasonable to suggest that many of these advantages also apply to cross-flow tidal turbines, which are the second most popular class of tidal stream technology with 26 different devices listed by EMEC at the time of writing (European Marine Energy Centre 2014a). There are various subcategories of cross-flow turbines, including straight-bladed Darrieus rotors, helical designs and devices designed to exploit channel blockage effects, all of which have differing levels of performance.
Performance data on commercially developed cross-flow turbines are less readily available than for axial turbines, perhaps indicating that the technology is currently at a lower technological readiness level (TRL). Table 3 details the performance characteristics of several academically developed cross-flow turbines.
The two lower-efficiency devices (Shiono et al. 2000; Coiro et al. 2005) are traditional Darrieus rotor designs featuring straight blades. Although relatively simple to build, one drawback of this design is that at low speeds straight blades can experience wildly varying angles of attack (Kirke and Lazauskas 2011). This makes fixed-pitch straight-bladed devices liable to stall, reduces overall device performance, hampers their ability to self-start and imposes additional stresses on the device in the form of rapidly fluctuating torque pulsations.
In principle, it is possible to minimise these issues through the use of a pitch-control system (Kirke and Lazauskas 2011). Active pitch-control systems, where blade pitch is forced to follow a predetermined regime, are mechanically complex, while passive pitch-control systems, where the forces acting on the blades control the pitch, are of dubious effectiveness. As a result of these problems, variable-pitch Darrieus wind turbines have never reached commercial production (Kirke and Lazauskas 2011). Still, it does appear possible to make efficient hydrokinetic Darrieus rotors, as demonstrated by Kyozuka (2008), although it should be noted that the Reynolds number and blockage ratio at which this design was tested do not appear to be defined.
The more efficient design of Yang and Shu (2012) features a helical rotor, which is claimed to have "all the advantages of a traditional Darrieus turbine without any of the disadvantages" (Gorban et al. 2001). Helical blades can even out torque pulsations with changing azimuthal angle (Kirke and Lazauskas 2008), while their inclined nature means they also stall less abruptly, improving performance and starting torque (Baker 1983); however, such blades are more difficult and costly to construct. The most efficient device (McAdam et al. 2010) also features somewhat inclined blades; however, it is primarily designed to be large enough to take advantage of the blockage effects discussed in Sect. 2.1.
Ducted turbines
Ducts or diffusers can be added to both axial and cross-flow tidal turbines to increase the mass flow rate over the rotor, allowing a given power output to be achieved with a smaller diameter turbine, as shown numerically by Shives and Crawford (2010). However, this definition of turbine diameter typically excludes the additional area of the duct itself. Although many ducted devices are claimed to be more efficient than their counterparts, a numerical study comparing devices of equal total size found that ducted turbines were less efficient overall than their unducted counterparts (Belloni 2013). The same study found that the performance of ducted and open-centre turbines increased in yawed flow, in contrast to unducted devices. Given the numerical nature of these studies, several assumptions were made, including simulating only a minimal turbine structure. Belloni (2013) primarily used an actuator disc model, which does not capture real turbine behaviour as accurately as a Reynolds-averaged Navier-Stokes blade element momentum (RANS-BEM) model, though performance trends remained consistent in both.
The additional structure of the duct and the stalled flow generated by diffusers will also increase the drag coefficient of a ducted turbine, which will in turn reduce the overall efficiency of ducted turbine arrays, particularly in regions of constrained flow (Shives and Crawford 2010). The effect of a diffuser on device performance is dependent upon its size and angle, with each having an optimum value (Gaden and Bibeau 2010).
EMEC currently lists eight different commercial developers working on ducted devices. Although there is little experimental data on ducted device performance available in the literature, details on two axial-flow commercial devices, a 200 kW OpenHydro demonstrator device (Polagye et al. 2010) and the small Clean Current CC035B (Clean Current Power Systems Inc 2012), are provided in Table 4 alongside a ducted cross-flow device, the 170 kW Davidson-Hill Venturi turbine (Tidal Energy Pty Ltd 2015). Two values of C_p are provided, one calculated using the rotor diameter and a second assuming a total device diameter approximately 1.45 times that of the rotor. This figure is in line with the total/rotor area ratios for the devices reported in Table 2; however, this value can and will vary depending upon device design. It can be seen that the device C_p values drop significantly if the additional area of a duct is considered.
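The rescaling behind those two C_p values is straightforward: for the same extracted power, the coefficient falls in proportion to the increase in reference area. The sketch below illustrates this, assuming the 1.45 diameter ratio used in the text; the rotor-referenced C_p value is purely illustrative.

```python
# Sketch of how a rotor-referenced power coefficient drops when it is
# re-referenced to the total device area of a ducted turbine.
# The 1.45 diameter ratio follows the assumption stated in the text;
# the rotor Cp value is a hypothetical example.
def cp_total_device(cp_rotor: float, diameter_ratio: float = 1.45) -> float:
    """Rescale Cp from rotor area to total device area.

    Extracted power is unchanged, so Cp scales inversely with swept area,
    i.e. with the square of the diameter ratio.
    """
    return cp_rotor / diameter_ratio ** 2

if __name__ == "__main__":
    cp_rotor = 0.60  # hypothetical rotor-referenced value
    print(f"Rotor-based Cp:  {cp_rotor:.2f}")
    print(f"Device-based Cp: {cp_total_device(cp_rotor):.2f}")  # roughly halved
```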
Oscillating hydrofoils
An oscillating hydrofoil consists of a hydrofoil wing attached to a lever arm, as shown in Fig. 5. As a tidal current flows over the hydrofoil it generates lift, causing the lever to rise. At the peak of the rise the hydrofoil's angle of attack changes so that lift is generated on the underside, reversing the direction of motion. The resulting oscillations can be used to drive fluids in a hydraulic system to power a generator. With this range of motion, the blades of an oscillating hydrofoil require a simpler geometry than those of an axial turbine. This is because the flow speed, and therefore angle of attack, over the blades will be the same along their entire length, meaning they do not require twisting like axial-flow turbine blades do. Consequently they are likely to be easier and cheaper to produce.
An oscillating hydrofoil extracts energy from the tidal current in a similar manner to a turbine (Kinsey et al. 2011). The only difference is the definition of swept area, which for a hydrofoil is the product of the vertical extent of the motion, d, and the wingspan, b, i.e. A = bd. Power density is, therefore, the same as in Eq. 2. Rourke et al. (2010b) mention that oscillating hydrofoil efficiency can be poor due to the time required to reverse the direction of oscillation; the Stingray device described in Table 5 was shelved due to poor performance. Pulse Tidal (2014) are currently developing another commercial oscillating hydrofoil device, although there appears to be little published performance data available at the time of writing. With so little information available, it is difficult to draw any firm conclusions on the efficiency and power output of oscillating hydrofoils.
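A minimal sketch of this estimate is given below, assuming the rectangular swept area A = bd and the same turbine-style power relation P = ½ C_p ρ A U³; the span, stroke, current speed and efficiency are hypothetical values, not data for any of the devices discussed.

```python
# Minimal sketch of oscillating-hydrofoil power, assuming the swept area
# A = b * d (wingspan times vertical stroke) and the same power relation used
# for turbines, P = 0.5 * Cp * rho * A * U^3. All numbers are hypothetical.
RHO = 1025.0  # kg/m^3, seawater

def hydrofoil_power(cp: float, span_m: float, stroke_m: float, speed_ms: float) -> float:
    area = span_m * stroke_m          # rectangular swept area
    return 0.5 * cp * RHO * area * speed_ms ** 3

if __name__ == "__main__":
    # e.g. a 10 m span with a 5 m vertical stroke in a 2 m/s current at Cp = 0.35
    p = hydrofoil_power(0.35, 10.0, 5.0, 2.0)
    print(f"Estimated power: {p/1e3:.0f} kW")   # ~72 kW
```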
Tidal kites
A tidal kite consists of a relatively small turbine attached to a hydrofoil wing, with the entire arrangement tethered to the seabed. The motion of the tidal currents over the wing creates a lift force that pushes the kite forwards through the water. Through a combination of the tension in the tether and the use of a rudder, the tidal kite can be directed to "fly" along a given trajectory through the water column, as illustrated for the figure-of-eight trajectory of the Minesto Deep Green tidal kite in Fig. 6.
This movement increases the speed of the flow passing through an axial-flow turbine, allowing greater amounts of power to be generated from lower free-stream speeds using a smaller turbine. The mechanical energy of the turbine is converted to electrical energy using a direct-drive generator attached to the kite, which is then transmitted through a cable in the tether to the seabed and then to the shore via a subsea cable. Table 6 summarises the specifications of several Deep Green devices. The power coefficient estimates were calculated using Eq. 1, the rotor diameters and Minesto's claim that the device moves at a speed 10 times greater than the water current. For the calculated power densities, total device swept area was estimated as a semi-circle with the minimum tether length as its radius, resulting in the very low values shown. In practice, the swept area is likely to be much smaller than this. In water depths appropriate for the technology, tidal kites certainly appear capable of producing large amounts of power for the size of their rotors.
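The effect of the speed multiplier can be illustrated with the short sketch below. It assumes the rotor sees an effective flow of k times the free-stream current (k = 10 per the developer's claim cited above) and applies the standard power relation to the rotor area; the rotor diameter, C_p and current speed are illustrative assumptions only, not Deep Green specifications.

```python
# Rough estimate of tidal-kite rotor power, assuming the kite's motion raises
# the flow speed seen by the rotor to k times the free-stream current.
# Rotor size, Cp and current speed are illustrative assumptions.
import math

RHO = 1025.0  # kg/m^3

def kite_rotor_power(cp: float, rotor_diameter_m: float,
                     current_ms: float, speed_multiplier: float = 10.0) -> float:
    area = math.pi * (rotor_diameter_m / 2.0) ** 2
    effective_speed = speed_multiplier * current_ms
    return 0.5 * cp * RHO * area * effective_speed ** 3

if __name__ == "__main__":
    # a 1 m rotor in a modest 1.2 m/s free-stream current
    p = kite_rotor_power(cp=0.4, rotor_diameter_m=1.0, current_ms=1.2)
    print(f"Estimated rotor power: {p/1e3:.0f} kW")
```

Because power scales with the cube of the effective speed, even a modest free-stream current yields a large output from a very small rotor, which is precisely the attraction of the concept.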
Tidal range
Tidal range devices make use of the difference in water level between high and low tide by impounding water within a basin before releasing it across turbines, as illustrated in Fig. 7. By closing sluice gates, water is trapped on one side of the device, creating a static head across it due to the movement of the tides. When the head is suitably large, the gates are opened and the excess water is directed across turbines for electricity generation.
Tidal range devices can use various generation schemes: ebb generation, flood generation and two-way generation (Xia et al. 2010a). These different schemes allow some flexibility in how the plant operates, depending upon the strength of the tide and grid requirements (Frau 1993).
• Ebb generation: power is generated when the flow is discharged across the turbines in the direction of the outgoing ebb tide, i.e. from the basin towards the sea. A full ebb generation cycle consists of four stages: filling, holding, generating and holding once again.
• Flood generation: essentially the mirror image of ebb generation, though generally less efficient due to the shape of the seabed. Since the volume of water in the upper half of the basin (utilised first by ebb generation) is greater than the volume in the lower half (filled first during flood generation), the water level difference across the barrage reduces more quickly.
• Two-way generation: an amalgamation of both ebb and flood schemes. Consequently the tidal range within the basin is closer to its natural range, which is thought to reduce environmental impacts in comparison to one-way generation.
By considering the volume of water impounded within the basin, a crude estimate of the potential energy (and subsequently power) available from a tidal range device can be obtained as follows (Lamb 1994):

E = (1/2) ρ g A R²,  (7)

where A is the basin surface area and R is the tidal range. The factor of 1/2 arises due to the assumption of a linear reduction in hydraulic head as the basin empties (i.e. over half the tidal cycle). For a device utilising an ebb generation scheme, the average power generated by a range device is simply

P = E/t,  (8)

where t is the length of the tidal period in seconds. The power density in terms of power per unit of basin area, for a scheme of overall efficiency ε, can then be calculated as

P/A = ε ρ g R²/(2t).  (9)

The maximum power density values for four theoretical range schemes of different efficiencies, calculated using Eq. 9, are shown in Fig. 8 for tidal ranges between 0 and 20 m. It can be seen from this simple analysis that barrages and lagoon devices only produce a small amount of power for their surface area. For the maximum calculated 20 m tidal range, the available power density for a 100 % efficient range device is approximately 45 W/m², while for the 16 m tidal range found in Canada's Bay of Fundy [which has the highest tides in the world (Archer and Hubbard 2003)] the power density is approximately 30 W/m².
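The sketch below reproduces this crude estimate using the relations above, taking t as the semi-diurnal tidal period of roughly 12.42 hours (an assumption, as the exact value used for Fig. 8 is not stated).

```python
# Reproduction of the crude tidal-range estimate of Eqs. 7-9, assuming
# E = 0.5 * rho * g * A * R^2, P = E / t, and
# power density = eps * rho * g * R^2 / (2 * t),
# with t taken as the semi-diurnal tidal period (~12.42 h).
RHO = 1025.0           # kg/m^3
G = 9.81               # m/s^2
TIDAL_PERIOD_S = 12.42 * 3600.0

def range_power_density(tidal_range_m: float, efficiency: float = 1.0) -> float:
    """Average power per unit basin area (W/m^2)."""
    return efficiency * RHO * G * tidal_range_m ** 2 / (2.0 * TIDAL_PERIOD_S)

if __name__ == "__main__":
    print(f"20 m range, 100% efficient: {range_power_density(20.0):.0f} W/m^2")   # ~45
    print(f"16 m range, 100% efficient: {range_power_density(16.0):.0f} W/m^2")   # ~29
    print(f"10 m range,  33% efficient: {range_power_density(10.0, 0.33):.1f} W/m^2")
```

Running this recovers the approximate 45 W/m² and 30 W/m² figures quoted above for the 20 m and 16 m ranges.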
There are relatively few locations in the world with tidal ranges approaching this value, however, meaning range devices are unlikely to reach this figure in all but a handful of places. Tidal range projects are also typically characterised by low values of ε (Xia et al. 2012), with values ranging from 20 to 40 % and an average of 33 % often used in theoretical energy output estimations. At these efficiencies, the power density of a range scheme is clearly going to be lower. As stated above, however, the values provided by Eqs. 7-9 are only a crude estimate. For a specific location, integrating over the tidal curve, rather than averaging, will provide a more realistic estimate of power. Given that the shape of the basin and turbine positioning can also affect range scheme performance, numerical models, such as those of Xia et al. (2010b), provide an even more accurate method of assessing potential power output. It is also worthwhile to consider the power output of currently operational tidal range devices, which are presented in Table 7.
The economics of a range scheme scale according to the amount of water in a basin between mean high and low tide, which is known as the volume of the tidal prism (Frid et al. 2012). As a result, optimal tidal range structures are situated in estuaries with high tidal ranges, and feature large basins for water impoundment (Kadiri et al. 2012).
A particular advantage of tidal range structures is their very long lifespans; for example, the design lifetime of the proposed Severn Barrage is 120 years, with turbine replacement occurring every 40 years (Kelly et al. 2012). Range structures have such long operational lifetimes due to the fact that the turbines are enclosed within a sturdy concrete structure and are consequently less exposed to the marine environment, with relatively easy access provided for maintenance work.
Tidal barrages
Tidal barrages are long structures built across bays or estuaries, making use of the surrounding land to create a basin in which water can be impounded.
Barrages are mature and reliable; however, they come with significant capital and environmental costs that can be difficult to overcome (Blunden and Bahaj 2006). This is evidenced by the fact that there are only five tidal barrages currently in operation globally, despite there being several sites across the world considered suitable for development (Charlier 2007). The largest example of a barrage is the Sihwa Lake Tidal Power Station in South Korea, which is rated at 254 MW and makes use of a seawall originally constructed in 1994 for flood defence (Bae et al. 2010). A list of operational tidal barrages, such as the Rance River Barrage shown in Fig. 9, is provided in Table 7.
In regions of high tidal range, such as the Severn Estuary in the UK, large-scale range schemes would be capable of producing colossal amounts of power. For example, the proposed Cardiff-Weston barrage in the Severn Estuary, UK, would have an installed capacity of 8.64 GW and provide an estimated 17 TWh of electrical energy per year (Xia et al. 2010a). This power capacity is significantly greater than the maximum of 154 MW predicted for tidal stream turbine arrays at that location. Indeed, the predicted 17 TWh of energy per year is greater even than the estimated 12.7 TWh/year resource available in the entire Pentland Firth, arguably the UK's top site for tidal stream power (Sustainable Development Commission 2007).
Such schemes can also have significant environmental impacts, both positive and negative. For example, Kirby and Shaw (2005) argue that a Severn barrage would reduce the strength of the tidal currents and thus reduce the suspended sediment load while providing greater bed stability, encouraging the colonisation of an otherwise highly suppressed ecosystem. A numerical study confirmed that suspended sediment levels in peak spring tide conditions would reduce from 1200 to 200 mg/l upstream of a Severn barrage. More generally, the change in suspended sediment loads is related to the local geology of the estuary bed (Xia et al. 2010b); consequently it is difficult to say whether this reduction would occur in all locations. Barrages and lagoons are also likely to increase sediment deposition in certain areas, the location and magnitude of which will depend upon the specific design and the prevailing source of the sediment (Kadiri et al. 2012).
Tidal barrages have also been found to affect surrounding water levels. For example, Xia et al. (2010c) predicted that maximum water levels upstream of a Severn barrage would decrease by 0.5-1.5 m, reducing the risk of flooding along the estuary. Barrages, therefore, have additional use as flood defences, particularly in the context of climate change and rising sea levels. This may be a huge benefit for certain locations, given that it would also avoid the need to spend money on conventional flood defences (Department of Energy and Climate Change 2010). In a similar vein they may also be used as bridges across bodies of water, leading to regeneration opportunities, as is the case at La Rance in France.
For ebb generation schemes, the reduction of water levels caused by a barrage also has negative consequences, particularly in terms of habitat loss. For example, Zhou et al. (2014) predicted through numerical modelling that siting an ebb-only barrage across the Severn Estuary between Cardiff and Weston-Super-Mare would result in the loss of 80.5 km² of intertidal habitats. Zhou et al. (2014) also predicted that salinity levels would be reduced by up to 5 parts per thousand at high water upstream of the device. This would reduce dissolved metal and nutrient concentrations in the water column (Kadiri et al. 2012), and would also affect dissolved oxygen concentrations, again impacting on habitats within the area.
The local wave climate may also be affected; for example, Fairley et al. (2014) found that the construction of a Severn barrage would increase net downstream wave heights over a tidal cycle by almost 20 % compared with pre-barrage conditions, which could have implications for coastal erosion and sediment transport.
These impacts on water levels and habitats can be reduced through the use of two-way generation, which allows the tidal range of the basin to remain closer to its natural cycle. The impacts may not be completely mitigated however; the numerical model of Xia et al. (2010c) indicated that water levels upstream of a two-way barrage are still reduced, which will lead to habitat loss.
Other methods for minimising the environmental impacts of barrages can be seen in the Evans Engineering REEF system (Evans Engineering and Power Company 2011), which maintains a small but constant head difference rather than delaying the tides to generate a larger head. This is also thought to reduce the effect on the tidal range of the basin, in turn reducing impacts on intertidal habitats. The proposed scheme also features low-head, low-speed turbines, with a small number of widely spaced blades, to assist the passage of migratory fish. The two-way Hafren barrage scheme for the Severn Estuary, featuring 1026 Very Low Head bidirectional turbines, has also been proposed as a way of mitigating impacts on wildlife, with up to 60 % less habitat loss in comparison to an ebb-only scheme. However, the case for the Hafren barrage is currently unproven (House of Commons Energy 2013). In addition to a failure to demonstrate value for money for consumers, this is still partly due to environmental concerns. These include the loss of some intertidal habitat due to a reduction in tidal range and apparent contradictions regarding impacts on flooding. These issues highlight the lack of empirical data surrounding the environmental impacts of tidal power in general, and demonstrate the need for further research before such impacts can be assessed with certainty.
Tidal lagoons
A tidal lagoon is similar to a barrage; however, instead of building a single structure across the width of a bay or estuary and using surrounding land to form the basin walls, a tidal lagoon consists of an entirely man-made basin with turbines embedded within the constructed walls. Consequently, since a larger structure is required to create a basin of comparable area to that of a barrage, tidal lagoons require higher capital expenditure than barrages of comparable power (Baker et al. 2006;Entec UK Ltd 2007).
Without blocking off an entire estuary or bay, the impacts of a lagoon on water flow, sediment transport, fish migration and shipping, both outside the basin area and on the region as a whole, may be smaller than those of barrages (Kadiri et al. 2012). For example, the construction of a tidal lagoon on the Welsh coast of the Severn Estuary has been predicted to have little impact on the hydrodynamics of the region overall (Xia et al. 2010b), though the hydrodynamics within the lagoon itself can still be changed dramatically. Additionally, impacts in the area immediately surrounding a lagoon, particularly on sediment transport, may be more significant than for barrages. For example, the proposed Swansea Bay Tidal Lagoon is predicted to interrupt the transport of sand, creating a build-up of material along the outside of the structure's eastern wall (Tidal Lagoon Swansea Bay plc 2015). Consequently a monitoring plan is being developed to implement measures such as beach nourishment and dredging where necessary.
In the same region, it has also been predicted that coastally attached lagoons will cause a smaller reduction in upstream water levels in comparison to barrages (Kadiri et al. 2012). While this may be of benefit in terms of minimising habitat loss, it also indicates that lagoons are likely to be less effective at reducing the risk of flooding. Even so, studies of proposed tidal lagoons off the coast of North Wales have shown that coastally attached impoundments can offer some level of reduced flood risk, by reducing peak tidal levels and wave heights.
Inside the basin, sediment deposition is likely to occur, and periodic dredging may be required to preserve energy yield (Department of Energy and Climate Change 2010). This is due to the generation of strong recirculating currents within the impoundment area as a result of turbines being located in only part of the wall, inducing tidal eddies or large vortices. Spreading the turbines around the perimeter of the device, as is done along the length of barrages, can reduce the likelihood of such eddies; for lagoons, however, this is likely to reduce their commercial viability.
At the time of writing no tidal lagoons have been built. However, negotiations for the construction of a tidal lagoon in Swansea Bay, which could produce 500 GWh/year of electricity, are underway (HM Government 2015). Figure 10 provides an image of the proposed lagoon area.
Methodology
To assess the suitability of the discussed technologies for small-scale shallow water applications, we identified from the literature six criteria that will govern the success of tidal energy deployments. The majority of these come from Bahaj and Myers (2003), who describe the fundamentals applicable to tidal energy deployments.
• Power density: a measure of how much power a concept can generate for its size. At this stage, only hydrodynamic efficiency and power are considered, i.e. the effects of gearboxes and electrical generators on efficiency and output are ignored.
• Scalability: the ability to increase the size of a technology given the constraints of a shallow water site. Even at small scales, the ability to make a device large enough to produce a desired power output will be important.
• Durability: the ability of a concept to survive in the harsh marine environment, where it will have to overcome corrosion, debris impacts, high loading forces and bio-fouling among other issues (Bahaj and Myers 2003). Some concepts will be more susceptible to these problems than others, depending upon their complexity, the number of moving parts they have and how exposed these parts are.
• Maintainability: the ease with which a concept can be maintained and repaired. Measures can be undertaken during the design stage to reduce and simplify maintenance (Bahaj and Myers 2003), including the use of good-quality lubricants, seals and strong materials, incorporating all the moving parts of the device into one specific feature, and an ability to raise the device out of the water.
• Economic potential: for a concept to be viable it must generate electricity at a competitive cost. However, given the many variables involved it is difficult to quantify the economic potential of each concept precisely; therefore, a more qualitative assessment based on the complexity, scalability and power density of each device is used.
• Environmental impacts: many coastal regions are ecologically diverse areas, and are used for a wide number of commercial and recreational activities. Consequently, it is vital to consider the magnitude of the impacts each concept has on its surroundings, given that even relatively small interventions can cause significant impacts.
We perform a design-stage examination of the performance of each tidal technology class by comparing their performance against that of a reference concept, in this case the axial-flow tidal stream turbine. Given the site-specific nature of tidal power and the absence of large amounts of data for many of the technologies, we limit our evaluation to discussion only, using the particular benefits and drawbacks of each technology to highlight those most suited to small-scale shallow water generation.
Power density
Given the dependence of power output on device efficiency and site bathymetry, along with either flow speed or tidal range, it is difficult to directly compare the power density of technologies in the absence of specific site data. However, some broad points can be drawn from the data presented in Sect. 2. As shown in Fig. 3, axial-flow turbines are theoretically capable of generating over 1 kW/m² in unbounded flows greater than 1.5 m/s, while theoretical cross-flow devices should be capable of producing slightly more power for their size in those same flows. Given that the reported efficiencies of three of the experimental cross-flow devices presented in Table 3 are similar to or greater than those of the axial-flow devices in Tables 1 and 2, it seems reasonable to suggest that cross-flow devices can produce more power for their size than axial turbines, which will be of benefit in small-scale deployments, particularly in shallow waters. Although oscillating hydrofoils may have issues with efficiency if the Stingray device is indicative of their performance, the device tested by Kinsey et al. (2011) suggests their efficiencies can also be comparable to those of unducted turbines. This means they should also be capable of producing large amounts of power for their size.
Other tidal stream concepts do not appear to compare so favourably, however. Ducted turbines, both axial and cross-flow, are capable of producing a given amount of power from a smaller rotor than an unducted device, while the duct also assists power generation in yawed flow. Once the extra area of the duct is taken into account, however, the power produced by the total device is likely to be comparatively less than that of an unducted turbine. Tidal kites, which are again capable of producing great power for the size of their rotors, presently require large installation depths, effectively making them unfeasible in shallow waters.
The power density of tidal range devices, though appearing to be much lower than that of axial-flow turbines, is defined differently, in that it is the power per unit of basin area, rather than power per unit of frontal area. Consequently, provided a location has a large enough tidal range to create sufficient head for generation, barrages and lagoons will be perfectly capable of operating in shallow waters. Their low power density in terms of surface area does, however, mean that their basins will likely need to be at least hundreds of square metres to generate comparable levels of power. This would somewhat defeat the purpose of a small-scale device, since the amount of investment and materials required to create such a basin, particularly for a lagoon, will be significantly higher than that required to install a turbine of similar output (assuming the resources are there for both technologies).
Scalability
In shallow waters, the maximum size of an axial-flow turbine will be constrained by water depth due to the circular swept area of its rotor, which would limit maximum power output. For both vertical and horizontal-axis cross-flow turbines, however, size would not be limited due to their rectangular area; consequently they could be scaled up to reach a specified output. Oscillating hydrofoils also benefit from a rectangular swept area, and consequently could be scaled according to a specific demand as well.
For a ducted device, the extra space required by the duct itself would limit the overall scalability in comparison to an unducted turbine. Tidal kites would be even more limited in shallow waters, given that they require long tether lengths so they can sail through their figure-of-eight motion.
Given that their power output depends on the difference in water level between high and low tide but is independent of actual water depth, the maximum size of a tidal range device would not be constrained by water depth. While a barrage would require a natural basin, such as a bay or estuary, to serve as an impoundment area, in theory lagoons can be constructed on or off any open coastline with a tidal range large enough to provide a sufficient head. This would be a definite advantage in terms of reaching a desired power output.
Durability
The rotor blades of unducted axial-flow turbines are completely exposed to the marine environment, making them susceptible to damage and bio-fouling, which would be an issue in shallow waters due to the proximity of the photic zone. To produce power from both flood and ebb tides, axial-flow turbines also require some sort of yawing mechanism to ensure they are facing the prevailing current direction.
Vertical cross-flow turbines are omni-directional while horizontal designs are bi-directional, negating the need for yawing mechanisms and so reducing complexity in comparison to axial turbines. Blade tips will also be less exposed than those of an axial turbine, although the blades will remain susceptible to damage and bio-fouling. Torque pulsation can be an issue for straight-bladed Darrieus rotor designs, which require pitch-control mechanisms to limit the blade angle of attack and minimise stalling at low speeds. Alternatively, the use of helical blades can prevent this problem without increasing mechanical complexity.
A duct would provide some protection for the blades of both axial and cross-flow turbines against debris and bio-fouling, while the need for a smaller rotor to produce a given amount of power would also help improve durability. In contrast, the fully exposed blades of an oscillating hydrofoil are likely to suffer similar effects to those of unducted axial-flow turbines. Given their oscillatory motion, fatigue may also be more of an issue for hydrofoils than it is for axial-flow turbines, and such problems would be exacerbated with increasing blade length. The additional complexity of multiple moving parts means tidal kites may also suffer in terms of durability.
Both barrages and lagoons will be much more durable than axial-flow turbines, given that their turbines are less exposed to the marine environment. This can be seen in a comparison of the lifespans of the structures: while the MCT SeaGen has a design life of 20 years, a proposed Severn Barrage has a design life of 120 years (with the turbines themselves needing replacing at 40-year intervals).
Maintainability
Maintenance of axial-flow turbines would be challenging, since all moving parts, including the generator, tend to be located underwater. This can be overcome in design, through the use of stronger materials, a geared transmission system to position the generator above the surface and lifting rotors such as those on the MCT SeaGen device. A vertically oriented cross-flow device would permit the generator to be located above the water line without the need for anything more complex than a drive shaft, which would permit easier maintenance. Axial-flow turbines and horizontal CFTs require a more complex transmission system to achieve this; however, it is eminently possible through the use of gearing or universal joints. In all cases, however, maintenance of the turbine itself will be challenging, given that it is located underwater. For ducted turbines this will be complicated further due to the duct potentially restricting access to certain parts, otherwise their maintenance needs would be broadly similar to those of unducted turbines.
The maintenance needs of oscillating hydrofoils will differ from those of turbines, given that they have a different power take-off mechanism. The presence of additional hydraulic fluids underwater and the potential for leaks into the environment may make maintenance more difficult. As with turbines, however, the generator itself could be located above the surface for easy access. Meanwhile, tidal kites would be relatively easy to maintain in comparison to turbines, since the kite itself can be detached from the tether and floated to the surface for repair work.
The turbines of a barrage or lagoon would be relatively easy to reach for maintenance work; however, depending on the site and generation scheme employed, dredging of the basin may be required to counter sedimentation and preserve energy yield.
Economic potential
Despite producing good amounts of power for the size of their rotors, the limitations on the maximum size of axial-flow turbines in shallow waters may hamper their economic potential. In contrast, the power density and scalability of cross-flow turbines suggest that an efficient design of either vertical or horizontal orientation is likely to have good economic potential. Ducted turbines, due to their more limited scalability and lower total power density, are likely to generate less power over the course of their lifespans, and therefore less of a return on investment.
Given their scalability in shallow waters, an efficient oscillating hydrofoil device is also likely to have good economic potential, comparable to that of a cross-flow turbine. Tidal kites, however, given their apparent requirement for deep waters, would not be capable of generating as much electricity, and consequently their economic potential is likely to be lower.
The comparatively large amount of materials and construction work required to build a barrage, which by definition is even greater for a lagoon, means that the initial capital required for a range scheme will likely be significantly higher than for a tidal stream device. If a given amount of electricity is to be generated, this would significantly hamper the economic potential of a range scheme in comparison, at least at small scales. The longer lifespan of range devices (120 years with turbine replacement occurring every 40 years, in comparison to the 20 years for the MCT SeaGen) would go some way towards mitigating this, however. Whether it does so completely would depend upon the specifics of a project.
Environmental impacts
Given the similarities in how they extract energy from the flow, the environmental impacts of axial-flow and cross-flow turbines of both orientations are likely to be broadly the same. As discussed in Sect. 2.1.1, these will include impacts on water quality, such as reducing upstream and downstream flow speeds, potentially affecting sediment transport, and accelerating the flow immediately around the device, potentially leading to scouring. Other risks include altering the habitats of benthic organisms, noise pollution, the generation of electromagnetic fields and the striking of marine animals with rotor blades or other moving parts. For ducted devices, it is possible that the higher drag coefficient caused by the extra structure of the duct may result in greater impacts on water quality, given its greater impact on basin efficiency. The magnitude of the environmental impacts of an oscillating hydrofoil will likely be similar to those of turbines, though the potential for hydraulic fluids to leak into the environment will perhaps be a greater concern.
The small turbines of tidal kites would produce small wakes located higher in the water column, while the lack of a large superstructure would likely reduce scouring effects on the seabed compared with conventional axial-flow turbines. Depending on the range and speed of motion, however, tidal kites could present a greater danger of colliding with marine animals in comparison with static technologies. This may also result in a need for slightly larger exclusion zones around the device, to prevent collisions with marine traffic.
As discussed in Sect. 2.4, the environmental impacts of tidal range schemes can be positive as well as negative. For example, barrages may have uses in certain areas as flood defence schemes; however, changes in water level can adversely affect intertidal habitats, while the presence of large offshore structures also affects sediment transport and the routes of migratory fish. From the literature, the regional impacts of barrages tend to be more significant than those of lagoons; however, lagoons may suffer more from recirculation currents within their basins due to their shape. Lagoons may also have more significant environmental impacts on the area directly surrounding them, particularly with regard to sediment transport and deposition. These impacts all tend to be proportional to the size of the scheme, and consequently will be significantly reduced at smaller scales. The generation scheme used, whether ebb, flood or two-way, will also affect their magnitude; however, further research is required for these differences in impact to be assessed with certainty, due to a lack of empirical data. This is equally true of all tidal technologies.
Overall suitability
From the above discussion, tidal kites do not appear suitable for use in shallow water areas simply because they require deep waters to function correctly. Barrages and lagoons also appear unsuitable for small-scale generation, since they would require greater financial and material investment to install in comparison to a tidal stream device. Of the two, a lagoon would suffer more in terms of investment, while a barrage is likely to have greater impacts on the regional environment.
Ducted turbines do not appear as well suited as other technologies either. While their rotors have higher power densities than those of axial turbines, the additional area required for the duct means the total device power density is likely to be lower in practice. This additional structure also impedes the overall scalability of a device, which combined with lower overall power density would hamper economic potential. Axial-flow turbines also do not appear to be as suitable as some other technologies. Although they have higher power densities than the majority of the other concepts, their size is completely limited by depth, limiting the overall power output of a single device.
Oscillating hydrofoils have been described as being suitable for shallow water use (Kinsey et al. 2011;Pulse Tidal 2014), and certainly appear more suitable for shallow water applications than axial turbines. Although the mothballed Stingray device struggled with poor efficiency, the device tested by Kinsey et al. (2011) has a comparable efficiency to unducted turbines. This combined with their rectangular swept area means overall power output would not be constrained by depth, giving them good economic potential.
Both vertically and horizontally oriented cross-flow turbines appear to be very suitable for shallow water applications, due to their scalability and relatively high power densities. This means they should have greater generating capacity in shallow waters compared with axial-flow turbines, and consequently good economic potential. A vertical-axis configuration would be capable of operating in any flow direction, while the bi-directionality of a horizontally configured device would cope with ebb and flood tidal flows without the need for a yawing mechanism. The use of helical blades would also avoid the need for mechanically complex pitch-control systems.
Conclusions
The suitability of eight different tidal power technologies for small-scale power generation in shallow near-shore waters has been discussed by examining device performance against a number of key criteria. Our discussion suggests that both vertically and horizontally oriented cross-flow turbines appear to be well suited to this application, since they have relatively high power densities and a maximum device size that is unconstrained by depth. Oscillating hydrofoils also appear suited to shallow water applications for similar reasons, provided comparable levels of efficiency can be reached. Meanwhile, tidal kites, which require deep waters, and tidal barrages and lagoons, which will require significantly more investment to generate comparable levels of power at small scale in comparison to turbines, appear to be the least appropriate for this scenario.
It is important to note that this examination is a simple design-stage assessment. Due to the developing nature of the tidal energy sector, several of the assessment criteria (e.g. environmental impacts, economic potential) are qualitative in nature and have been discussed in the absence of large amounts of reliable data. Additionally, given the influence various device-specific factors (e.g. hydrofoil blade profile) may have on device performance, certain comments may not apply to every device within a given technology class.
Although we have provided a general technology review for one type of tidal energy deployment in the absence of large amounts of field data, for a given project at a specific site there will be further complicating factors that will affect the performance of a given technology (e.g. blockage). The influences of these factors on performance may vary even throughout the site itself, and consequently as much data on the site as possible should be gathered before attempting to identify the most appropriate technology for that area.
Despite the eight discussed concepts appearing to comprise the bulk of tidal technology currently in development, there are also other devices used for micro hydropower applications, such as Archimedes screws and gravitational water vortices, which have not been investigated here and could be suitable for use in shallow water tidal applications. Furthermore, it is possible that with suitable adaptation the ideas behind some of the concepts (the tidal kite, for instance) could prove very effective for shallow water generation, and there is also scope for completely novel ideas to be suitable. Consequently this discussion is by no means exhaustive or definitive. Future work will look to explore the possibilities of adapting both the discussed technologies and others to develop an optimised device for shallow water tidal power applications.
"year": 2016,
"sha1": "81196ba51751a8727b811716fc49d75371beaa39",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40722-016-0044-8.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "81196ba51751a8727b811716fc49d75371beaa39",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": []
} |
Growth of Human Colorectal Cancer SW1116 Cells Is Inhibited by Cytokine-Induced Killer Cells
Previous reports have suggested that treatment with cytokine-induced killer (CIK) cells may benefit patients with various types of tumor. The aim of this study was to evaluate the antitumor effects of CIK cells against the colorectal cancer cell line SW1116 in vitro and in vivo. CIK cells were generated routinely from peripheral blood mononuclear cells of healthy human donors, and the number of CD3+CD56+ cells was expanded more than 1300-fold after 14-day culture. At an effector : target cell ratio of 50 : 1, the percentage lysis of SW1116 cells reached 68% in the presence of CIK cells. Experimental mice injected with SW1116 cells subcutaneously were divided randomly into four groups: untreated, 5-fluorouracil (5-FU)-treated, CIK-consecutive treated (injected once/day) and CIK-interval treated (injected once every 5 days). CIK cells were injected abdominally five times in total. Compared with the untreated group, xenograft growth was inhibited greatly by CIK treatment, to nearly the same extent as with 5-FU treatment. We demonstrated that the necrotic area in the tumor xenograft was markedly larger in the CIK-treated groups than in the other groups. These findings suggest that CIK-based immunotherapy may represent an effective choice for patients with colorectal cancer.
Introduction
Colorectal cancer is the third most common cause of death due to cancer in the Western world [1]. In 2009, it was estimated that 75,590 men and 71,380 women were diagnosed with colorectal cancer in the United States [2]. Despite major advances in medical technology and therapy, colorectal cancer still has an overall 5-year survival rate of only 20%-50%. The disease is characterized by the development of a tumor in the large bowel that then spreads throughout the body. Although the primary tumor can be treated by surgery alone, treatment of metastases requires some form of adjuvant therapy, such as radioimmunotherapy or chemotherapy. New therapeutic methods are needed to prolong survival.
Adoptive cellular immunotherapy involves the transfer of immune cells that have been expanded and activated ex vivo into patients to eliminate cancer cells. This approach is becoming an important and effective method for cancer therapy. In recent years, the application of cytokine-induced killer (CIK) cells has evolved from experimental observations into early clinical studies. These cells have shown encouraging preliminary efficacy towards susceptible autologous and allogeneic tumor cells in both therapeutic and adjuvant settings. CIK cells have a high rate of proliferation; they are derived from peripheral blood mononuclear cells (PBMCs) and are cultured with interferon-γ (IFN-γ), anti-CD3 antibodies, and interleukin (IL)-2 [3,4]. Among CIK cells, CD3+CD56+ cells are the main effector cells and demonstrate the most potent cytolytic activity [3,5]. They have been described as highly efficient cytotoxic effector cells that are capable of recognizing and lysing tumor cell targets in a non-major histocompatibility complex- (MHC-) restricted fashion [6,7]. CIK cells have been shown to target a variety of types of tumor and can exert their cytotoxic effects following systemic delivery [8].
CIK cells have been found to be highly effective at purging autologous bone marrow in patients with chronic myelogenous leukemia [9]. The antitumor effect of CIK cells has also been observed on many solid tumors, such as hepatoma, lung, and gastric cancers [10][11][12]. Furthermore, CIK cells can improve the immune function and clinical symptoms of cancer patients. Importantly, the toxicity of CIK cells is minimal, and there is no graft-versus-host reaction associated with their use [5]. In spite of their beneficial features, the cytotoxic activity of CIK cells against human colorectal cancer cells has not been clearly defined. In the study reported herein, we evaluated the antitumor activity of CIK cells in vitro against the human colorectal cancer cell line SW1116 and in vivo in a nude mouse xenograft model.
Materials and Methods
2.1. Cell Culture. Human colorectal cancer cells (SW1116) and human glioblastoma cells (U251) were originally obtained from the American Type Culture Collection (ATCC, Rockville, MD, USA) and cultured in high-glucose Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum, 100 U/ml penicillin and 100 mg/ml streptomycin in a humidified 5% CO₂ incubator at 37 °C.
Generation of CIK Cells.
After the healthy blood donors had given informed consent, 10 ml of blood was collected from each in evacuated tubes containing heparin. Human PBMCs were isolated from fresh blood by Ficoll-Hypaque density gradient centrifugation. The PBMCs were washed three times, adjusted to a final concentration of 2 × 10⁶ cells/ml with CIK medium (Takara, Japan) supplemented with 0.6% autologous serum, and then cultured in 75 cm² culture flasks that had been coated overnight at 4 °C with 8 ml of PBS containing 5 μg/ml anti-human CD3 monoclonal antibody (Takara, Japan). On day 0 of culture, we added 1000 U/ml recombinant human IFN-γ (PeproTech, USA) and 1000 U/ml recombinant human IL-2 (rhIL-2, PeproTech, USA) to the culture medium. The cells were cultured in a humidified 5% CO₂ incubator at 37 °C. The cells were transferred from the coated flasks to fresh flasks after four days. Every three days, fresh CIK medium and 1000 U/ml rhIL-2 were added. After culture for 14 days, approximately 1 × 10⁹ CIK cells were harvested per flask, with a survival rate of >95%.
Phenotypic Analysis of CIK Cells.
A total of 5 × 10⁵ CIK cells were harvested and washed twice with PBS. The cells were resuspended in 100 μl of PBS, labeled with 15 μl of antibodies against CD4/8/3 (FITC-conjugated anti-CD4, PE-conjugated anti-CD8, and PerCP-conjugated anti-CD3; BD, USA) and 5 μl of anti-CD56 antibody (APC-conjugated anti-CD56; BD, USA) in the dark for 30 min at 4 °C, and then washed twice. Fluorescence-activated cell sorting (FACS) was then performed. The phenotype of PBMCs was analyzed as a control.
MTT Cytotoxicity Test of CIK Cells In Vitro.
SW1116 cells were plated in 96-well plates in triplicate at a density of 4 × 10⁴ cells/well. After the tumor cells had adhered completely, CIK cells were added at different effector : target ratios (1 : 1, 5 : 1, 10 : 1, 20 : 1, and 50 : 1) in 200 μl of serum-free medium. After incubation for 24 hours, the supernatant in each well was removed and the cells were washed three times. Aliquots of 100 μl of serum-free medium and 10 μl of MTT were mixed and added to each well. After incubation at 37 °C for 4 h, the supernatant was removed carefully, and 150 μl of DMSO was added to each well. The plates were then shaken for 10 min in the dark. The OD was measured by spectrophotometry at a wavelength of 492 nm. As a control, PBMCs were subjected to the same procedure. The amount of cell death was calculated according to the following equation: death rate = (OD_control − OD_sample)/OD_control × 100%.
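A minimal sketch of this calculation is given below, using the stated formula; the OD readings are hypothetical triplicate values, not data from the study.

```python
# Sketch of the MTT-based lysis calculation described above, using the stated
# formula: death rate = (OD_control - OD_sample) / OD_control * 100%.
# The OD readings below are hypothetical triplicate values.
from statistics import mean

def percent_lysis(od_control: list[float], od_sample: list[float]) -> float:
    ctrl, samp = mean(od_control), mean(od_sample)
    return (ctrl - samp) / ctrl * 100.0

if __name__ == "__main__":
    od_tumor_alone = [0.82, 0.80, 0.84]   # target cells without effectors
    od_with_cik    = [0.27, 0.25, 0.29]   # target cells after CIK co-culture
    print(f"Percentage lysis: {percent_lysis(od_tumor_alone, od_with_cik):.1f}%")
```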
Pathological Observation.
SW1116 cells were seeded onto a slide placed in a culture dish. When the cells reached 80% confluence, CIK cells were added at an effector : target ratio of 1 : 20. After incubation for 24 h, the slides were washed twice, stained with hematoxylin and eosin (HE), and sealed with neutral gum. The shape and aggregation of the cells were observed by light microscopy. Tumor specimens were fixed with 10% neutral formaldehyde solution for 24 h, dehydrated in an ethanol gradient, made transparent with dimethylbenzene, embedded in paraffin, sectioned at a thickness of 3-4 μm, and stained with HE.
Nude Mouse Xenograft Assay.
Nude mice were obtained from the Chinese People's Liberation Army Academy of Military Medical Science. The biologic license number was SCXK-(Jun)2007-004. Nude mice were bred in an animal institute that complied with good laboratory practice (Chinese PLA General Hospital Animal Experiment Centre). On day 0, 5 × 10⁶ SW1116 cells were injected subcutaneously into the nude mice. The nude mice were found to have developed 0.2 cm³ tumor nodules after 5 days. They were then randomly divided into four groups: the untreated group, 5-fluorouracil (5-FU)-treated group, CIK-consecutive-treated group, and CIK-interval-treated group. In the 5-FU-treated group, 5-FU was injected intravenously at 50 mg/kg every day, for 5 days in total. In the CIK-consecutive-treated group, the mice were injected abdominally with CIK cells (5 × 10⁷ cells/day) for 5 days. In the CIK-interval-treated group, 5 × 10⁷ CIK cells were injected abdominally into the mice once every 5 days, that is, 5 times in 3 weeks. Tumor volumes and body weights were measured every 2 days. Tumor volumes were calculated using the formula: length (mm) × width (mm) × height (mm). On day 30, the mice were sacrificed, and the tumors were weighed. To detect toxicity to the animals, their body weights were measured. After fixing in formalin, the tumor tissues were stained with HE. Pictures were taken randomly in 10 fields of vision, and image processing software (Image-Pro Plus Version 4.5, USA) was used to calculate the necrotic area.
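The sketch below illustrates the tumor-volume formula and a growth-inhibition calculation based on mean tumor weights relative to the untreated group. The example measurements are hypothetical; only the 1.448 g control mean weight reported later in the text is used as a reference value.

```python
# Sketch of the tumor-volume and growth-inhibition calculations used in the
# xenograft assay: volume = length * width * height (mm^3), and inhibition is
# computed from mean tumor weights relative to the untreated group.
# Example dimensions and treated-group weight are hypothetical.
from statistics import mean

def tumor_volume(length_mm: float, width_mm: float, height_mm: float) -> float:
    return length_mm * width_mm * height_mm

def inhibition_rate(mean_weight_treated_g: float, mean_weight_control_g: float) -> float:
    return (1.0 - mean_weight_treated_g / mean_weight_control_g) * 100.0

if __name__ == "__main__":
    volumes = [tumor_volume(*dims) for dims in [(9.0, 7.5, 6.0), (10.0, 8.0, 7.0)]]
    print(f"Mean tumor volume: {mean(volumes):.0f} mm^3")
    print(f"Inhibition rate:   {inhibition_rate(0.55, 1.448):.0f}%")  # ~62%
```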
Statistical Analysis.
The results are shown as the mean ± standard error of the mean (SEM) of triplicate determinations (wells). Data were plotted using GraphPad Prism version 5.00. Two-way analysis of variance (ANOVA) was used to determine the significance of the differences between the means of all experiments. A P value of less than 0.05 was considered statistically significant.
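For readers who prefer a scriptable alternative to GraphPad Prism, the hedged sketch below runs an analogous two-way ANOVA on tumor volumes over time in Python with statsmodels. The column names, data layout and toy values are assumptions for illustration only, not the study's data or its exact analysis.

```python
# Hedged sketch of a two-way ANOVA on tumor volumes over time, analogous to
# the analysis described above but using statsmodels instead of GraphPad.
# Column names and the toy data are assumptions.
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

def two_way_anova(df: pd.DataFrame) -> pd.DataFrame:
    """df needs columns: 'volume', 'group' (treatment) and 'day'."""
    model = ols("volume ~ C(group) + C(day) + C(group):C(day)", data=df).fit()
    return anova_lm(model, typ=2)

if __name__ == "__main__":
    # toy example: two groups, two time points, three mice each
    df = pd.DataFrame({
        "group":  ["control"] * 6 + ["CIK"] * 6,
        "day":    [10, 10, 10, 30, 30, 30] * 2,
        "volume": [150, 160, 155, 900, 950, 920,
                   140, 150, 145, 420, 450, 430],
    })
    print(two_way_anova(df))
```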
Phenotype of the CIK Cells.
Firstly, we established a stable system for the expansion of CIK cells in vitro. PBMCs from 15 individuals were cultured to generate CIK cells. The phenotypes of the PBMCs and CIK cells were examined by FACS. The PBMC population contained 50% CD3+ cells, 4% CD3+CD56+ cells, 27% CD3+CD8+ cells, 22% CD3+CD4+ cells, and 3% CD8+CD56+ cells (Figure 1(a)). After culture for 14 days, the CIK cell population contained 98% CD3+ cells, 41% CD3+CD56+ cells, 77% CD3+CD8+ cells, and 20% CD3+CD4+ cells (Figure 1(b)). After 14 days, the total number of cells had increased by 130-fold. The number of CD3+CD56+ cells had increased by 1300-fold, whereas the number of killer T cells (CD3+CD8+) had increased by 390-fold. The counts of these two cell types were clearly different between the PBMC and CIK cells (Figure 1(c)). The proportion of CD3+CD56+ cells was <5% before culture, but 35% after culture. CD3+CD56+ cells are the main effector cells; therefore the harvested suspension cells were mature CIK cells. The phenotypes of the CIK cells from the 15 individuals did not differ markedly, which demonstrates that the method tested for the culture of CIK cells is reproducible.
Cytotoxicity of CIK Cells In Vitro.
Next, we examined the antitumor effect of CIK cells in vitro. CIK cells have been shown to demonstrate cytotoxicity in a non-MHC-restricted manner. CIK cells show strong antitumor activity against lung cancer, ovarian cancer, cervical cancer, and other types of tumor cells in vitro [11][12][13][14]. In this study, at an effector : target ratio of 100 : 1, the mean percentage lysis of SW1116 cells was 9% after the addition of fresh PBMCs (Figure 2(a)). At effector : target ratios of 1 : 1, 5 : 1, 10 : 1, 20 : 1, and 50 : 1, the mean percentage lysis after the addition of CIK cells was 3%, 23%, 42%, 62%, and 68%, respectively, for SW1116 cells and 2%, 13%, 32%, 48%, and 54%, respectively, for U251 cells (Figures 2(b) and 2(c)). The CIK cells were suspension cells and therefore could not adhere to the slide on their own. The cells were observed by HE staining after co-culture of the CIK and SW1116 cells for 24 h. The CIK cells were round and had a high nuclear-to-cytoplasmic ratio, whereas the SW1116 cells were irregular and had a low nuclear-to-cytoplasmic ratio. The CIK cells adsorbed to and aggregated around the SW1116 cells (Figure 2(d)). Cytotoxicity tests showed that the CIK cells had a strong ability to kill SW1116 cells compared with normal lymphocytes. HE staining showed that when CIK cells and tumor cells were cultured together, the CIK cells gathered around the tumor cells without MHC restriction or specificity.
Antitumor Effects of CIK Cells In Vivo.
Finally, we evaluated the inhibition of growth of colorectal cancer xenografts by CIK cells. The two groups of mice treated with CIK cells showed no signs of panic, irritability, weakness, or other symptoms after CIK cells were injected abdominally. Throughout the treatment period, there was no significant decline in the weight of the mice in these groups, whereas 5-FU showed evident toxicity. After treatment with 5-FU, symptoms such as slow movement and urinary and fecal incontinence were observed in the nude mice. On day 3, the weight of the mice decreased significantly, and two mice died within the treatment period (Figure 3(a)). Preliminary experiments showed that nude mice had significant side effects in the abdominal cavity after injection of 5-FU, whereas injection of 5 × 10⁷ CIK cells did not result in any toxicity.
Measurements of tumor volume and tumor weight demonstrated the powerful antitumor activity of CIK cells. The consecutive-treated and interval-treated groups showed reductions in tumor volume of 41% and 52%, respectively, whereas the 5-FU group showed a decrease in tumor volume of 43% (Figure 3(b)). On day 30, the mice were sacrificed and the tumors isolated. In the control group, the mean tumor weight had increased to 1.448 g at 30 days after injection. In the consecutive-treated and interval-treated groups, tumor growth was inhibited by 53% and 62%, respectively, whereas in the 5-FU-treated group tumor growth was inhibited by 54% (Figures 3(c) and 3(d)). In this experiment, we found a tendency towards better efficacy in the CIK-consecutive-treated group as compared with the CIK-interval-treated group, but the difference between the two groups was not significant. HE staining demonstrated that the necrotic area of the tumor tissues was greater in the groups that had been injected with CIK cells than in the other groups; the necrotic area in the former measured up to 60%, as compared with 24% for the control group (Figures 4(a) and 4(b)). Interestingly, although the CIK cells and 5-FU both significantly inhibited tumor growth, the area of tumor necrosis after treatment with CIK cells was significantly larger than that observed after treatment with 5-FU. Therefore, it will be of interest in future work to investigate the mechanisms by which CIK cells inhibit tumor growth.
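For readers who want to retrace the inhibition figures, the sketch below applies the conventional xenograft formulas. The treated-group weights are hypothetical values chosen only to reproduce the reported ~53% and ~62% inhibition against the reported 1.448 g control mean, and the volume formula is the widely used ellipsoid approximation rather than one stated in this excerpt.

```python
def tumor_volume(length_mm, width_mm):
    # common ellipsoid approximation for subcutaneous xenografts: V = L * W^2 / 2
    return length_mm * width_mm ** 2 / 2.0

def inhibition_rate(mean_treated, mean_control):
    # percentage reduction relative to the untreated control
    return 100.0 * (1.0 - mean_treated / mean_control)

control_mean_weight = 1.448          # g, reported mean tumor weight in the control group
print(round(inhibition_rate(0.68, control_mean_weight)))  # ~53% (consecutive-treated group, hypothetical mean)
print(round(inhibition_rate(0.55, control_mean_weight)))  # ~62% (interval-treated group, hypothetical mean)
```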
Discussion
Adoptive immunotherapy has now been available for nearly 30 years. One of the first prototypes was the lymphokine-activated killer (LAK) cell. In clinical studies, LAK cells demonstrated modest efficacy against metastatic cancers such as renal cell carcinoma and melanoma [15]. Subsequent studies confirmed that standard IL-2-stimulated LAK cells had low antitumor activity and were difficult to generate in large numbers [16][17][18]. The work that moved the field of cellular immunotherapy from LAK cells to CIK cells was performed at Stanford [19]. CIK cells possess a higher level of cytotoxic activity and a higher rate of proliferation than LAK cells [20]. Over the past 20 years, the development of CIK cell immunotherapy for the treatment of cancer has received considerable attention. CIK cells exhibit a high rate of proliferation [21]; during the culture period, CD3+CD56+ cells can expand by up to 1000-fold [3]. In this study, we used culture dishes that had been coated with a monoclonal antibody against CD3. The isolated PBMCs were plated onto the coated dishes, and IFN-γ and IL-2 were added to the medium. After four days, the cells were removed from the coated dishes and cultured further in the presence of IL-2. The cells were harvested after 14 days. The total cell number expanded by up to 130-fold, and the CD3+CD56+ cells expanded by up to 1300-fold. The phenotype, composition, and quantity of the cells were as described in other reports [11,14,22].
Currently, the mechanisms of the genesis and development of colorectal cancer remain unclear, but it is generally agreed that various factors within the tumor enable it to evade the immune system, resulting in the unlimited proliferation of tumor cells. Research has shown that antigen-specific cytotoxic T lymphocytes (CTL) are induced both in animal models of colorectal cancer and in patients with colorectal cancer [23]. Colorectal cancer is therefore immunogenic, but it is possible that it still develops for the following reasons: (1) in patients with colorectal cancer in whom expression of histocompatibility leukocyte antigen (HLA)-I was decreased or absent [24], the lack of antigen presentation by the tumor cells impairs the induction of CTL responses; (2) mutations in peptide-transporting molecules (TAP) may also affect the presentation of T cell epitopes [25]; (3) colorectal tumors express not only functional Fas ligand (FasL), which can induce apoptosis in tumor-infiltrating T cells that bear Fas, but also Fas itself, which, although expressed at lower levels than in normal colon epithelium, may make the tumor cells susceptible to apoptosis [26][27][28][29]. Several studies of the treatment of colorectal cancer by immunotherapy have been undertaken. For example, among 30 patients with colorectal cancer treated with LAK cells and IL-2, one complete antitumor immune response and four partial antitumor immune responses were seen [30]. In addition, 7 patients with metastatic colorectal carcinoma resistant to chemotherapy were treated by transfusing autologous IL-2-modified CIK cells [31]. This pilot study demonstrated an antitumor immune response to this approach in at least some of the patients.
Here, we provide evidence that CIK cells might be a good candidate for colorectal cancer therapy. The major effector cells among CIK cells, namely those that show the greatest cytotoxicity, are the subset of CD3+CD56+ cells [6], which express both the T-cell marker CD3 and the natural killer cell marker CD56 and are termed non-MHC-restricted T cells. These cells are capable of killing both autologous and allogeneic tumor targets. The cytotoxicity of CIK cells might be mediated via contact between the adhesion receptor lymphocyte function-associated antigen-1 (LFA-1), which is found on all T cells, and its counter-receptor, intercellular adhesion molecule-1 (ICAM-1), which is located on the surface of the target cells [7]. CIK cells possess cytoplasmic granules which contain the protein perforin (cytolysin). The mechanism of destruction of target cells includes the vectorial exocytosis of the contents of the cytoplasmic granules into the intercellular space by the effector cell at the site of target-cell contact [32,33]. In the in vitro study, HE staining showed that CIK cells adhered to the surface of SW1116 cells and possessed a powerful capacity to kill them, consistent with the killing of tumor cells by CIK cells. CIK cells are effective against FasL-positive malignant cells and cells with multidrug resistance (MDR), and it has been observed that a population of CIK cells migrated to tumor sites by the 7th hour after injection and remained detectable at these sites for an additional 9 days [8,34]. In agreement with this, we observed in vivo that abdominal injection of only 5 × 10⁷ CIK cells resulted in strong inhibition of colorectal tumor growth. Although a similar effect was observed with 5-FU, the CIK cells had low toxicity and few side effects as compared with the chemotherapy drug. In contrast to chemotherapy drugs, which have a suppressive effect on immune cells, CIK cells not only kill tumor cells directly but can also themselves secrete many cytokines, such as IL-2, TNF-α, and granulocyte macrophage colony-stimulating factor (GM-CSF), which enhance the systemic antitumor activity of the body. HE staining of tissue biopsies after treatment with CIK cells revealed the presence of a large area of necrosis around the tumor. We speculate that after CIK cells have been injected into the body, they gather first around the tumor, and then a large number of CIK cells infiltrate the tumor and cause necrosis. Further experiments are required to confirm this hypothesis.
Conclusions
There have been several studies of adoptive cellular immunotherapy in a range of solid tumors, but this is the first study to address the colorectal cancer SW1116 cell line.
In this paper, we observed the antitumor activity of CIK cells against human colorectal cancer in vitro and in vivo. Cells with the phenotype CD3+CD56+ are rare (1%-5%) in uncultured peripheral blood lymphocytes [6,19]. The CIK cells possessed strong proliferative capacity; CIK cells derived from 50 ml of peripheral blood can be expanded to roughly three times the quantity of normal human lymphocytes. The results of the nude mouse xenograft assay showed that the CIK-consecutive-treated and CIK-interval-treated groups experienced an inhibitory effect similar to that observed with 5-FU. These results show that immunotherapy with CIK cells is a suitable adjuvant therapy for colorectal cancer. We propose that different CIK cell treatment programs be used for different patients in clinical practice. | 2014-10-01T00:00:00.000Z | 2010-12-01T00:00:00.000 | {
"year": 2010,
"sha1": "e03cf391bcba04f0f0ac85e97b604e6e982f874d",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/jir/2011/621414.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dfe13c097e1f78c35476e58b6b6c8a20608afbf2",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
271336605 | pes2o/s2orc | v3-fos-license | Exploring the use of grammarly in assessing English academic writing
This study explores the extent to which Grammarly can be a reliable assessment tool for academic English writing. Ten articles published in high-status scholarly Q1 journals and written by specialist English native speakers were used to evaluate the accuracy of Grammarly's flagged issues. The results showed that Grammarly tends to over-flag many issues, resulting in many false positives; besides, it does not take into consideration optional usage in English. The study concluded that although Grammarly can identify many ambiguous instances of language use that writers would do well to review and consider for revision, it does not seem to be a reliable tool for assessing academic written English.
Introduction
Automated Writing Evaluation (AWE) tools are widely used nowadays by many students and teachers [1]. Grammarly is one of these tools, and it is used by 30 million people and 30,000 teams. Grammarly is claimed to be the world's most accurate grammar checker [2]. In fact, Grammarly claims to provide correct and reliable instant feedback for different aspects of writing, including correctness and clarity. Still, little research has been done to examine to what extent Grammarly is reliable in assessing academic writing. Earlier studies focused on non-native speakers' written work, where mistakes are expected to occur, e.g., Ref. [3]. Other studies compared the performance of Grammarly with other automated engines such as Word and Ginger, e.g., Refs. [4,5]. Still other studies compared the feedback given by Grammarly with that given by human raters, e.g., Refs. [6,7,8]. However, no study has examined how Grammarly deals with well-written English produced by competent native speakers. In this study, we attempt to establish the reliability of Grammarly's feedback by examining whether the flagged errors are actual errors or not (see Section 2).
Grammarly checks the language of written work and lists all issues in the order of occurrence within it, and it scores each written work on a 100-point scale after comparing it with other documents written with the same goals and in the same domain [9]. Scores are given depending on the number of words and the types and numbers of detected issues [9].
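Grammarly's actual scoring model is proprietary and is not described beyond this dependence on word count and detected issues; the toy heuristic below is only meant to illustrate the idea of a length-normalized, issue-driven score and makes no claim to match Grammarly's behaviour.

```python
def toy_writing_score(word_count, issue_count):
    """Toy illustration only -- NOT Grammarly's formula. A score out of 100
    that falls as the density of detected issues per 100 words rises."""
    issues_per_100_words = 100.0 * issue_count / max(word_count, 1)
    return max(0.0, round(100.0 - 5.0 * issues_per_100_words, 1))

print(toy_writing_score(word_count=8000, issue_count=100))  # 93.8
```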
Grammarly comes in two versions: a free one and a premium one. The difference between them lies in the number and types of issues they detect. The premium version offers over 400 types of checks and features, while the free version checks 150.
Upon submitting a file into Grammarly, users can select their goals, the intended audience of their writing and other features that fit their documents such as level of formality, academic domain, and the language preference (e.g., American/British) [9].
The suggestions/issues given by Grammarly fall into four types [3]. First, correctness issues relate to grammar, spelling, and punctuation. Second, clarity and conciseness issues (e.g., passive voice use and unclear/wordy sentences) relate to the degree to which the written material is smooth and easy to understand. Third, engagement issues offer synonyms for bland and overused words. Finally, delivery issues relate to formality, politeness, usage conventions, and friendliness [9]. Screenshots and examples explaining how Grammarly works are available on Grammarly's website (Grammarly.com).
Assessment of academic writing is not restricted to surface-level issues such as grammar, punctuation, and spelling; it involves other areas such as diction, coherence, and ideas/content [9]. Grammarly claims that it can help not only with surface-level issues but also with clarity, engagement, and delivery issues (but not content issues, which are still beyond the capabilities of any tool) [11].
Differentiating between errors and other issues relating to style or optional use is in order here. In this study, errors refer to "morphological, syntactic, and lexical forms that deviate from rules of the target language, violating the expectations of literate adult native speakers" [10]. This means that issues relating to style, such as the use of a serial comma (see Section 5.2), are not errors, and they cannot be considered as such in assessing a piece of writing. If Grammarly flags these cases as language issues/errors, this is considered a case of over-flagging, which refers to flagged errors that are not actual errors [11]. That said, we believe that teachers or AWE tools should not penalize a writer for using/not using an optional case. Giving a score (by a teacher or an AWE tool) that counts such cases is problematic (see Section 5.1). Finding out how reliable Grammarly can be as an assessment tool is crucial, as it has several repercussions in the field of English as a second/foreign language [3,6,12]. A learner of English using Grammarly to check the accuracy of their writing could probably assume that the feedback given by Grammarly is accurate and would accept all suggestions, which could result in negative consequences if the feedback was not totally correct [10]. A teacher of English, under the pressure of time constraints and especially with the wide use of online teaching, is very likely to use Grammarly to check their students' writing assignments. If Grammarly was not accurate, it would be confusing, and even harmful, to learners and teachers by giving them wrong feedback [9].
Moreover, Grammarly might be used to assess the writing of non-native English researchers and teachers [7,8,13,14]. It could be the case that journal reviewers, especially some non-native speakers of English, might be tempted to use Grammarly to assess the English language of articles (personal experience).¹ If Grammarly incorrectly flags many issues and therefore gives the article a misleadingly low score, reviewers might reject the paper assuming that Grammarly's feedback was accurate. Finally, students or even colleagues might use Grammarly to assess the accuracy of the language of English teachers. We believe that any score that is not close to 100 % might be interpreted by students or colleagues as poor competence on the part of teachers. To recapitulate, if Grammarly were not a reliable assessment tool with accurate feedback, it would result in serious problems for all stakeholders [6,13,7]. The current study thus aims at exploring the accuracy of Grammarly's feedback by analyzing Grammarly reports for ten articles written by specialist English native speakers and published in high-status scholarly linguistics journals. Assuming that these researchers use academic English professionally, and that the articles were peer-reviewed and proofread by the journals, we expect the articles to receive a very high score on Grammarly and to be without any noticeable writing issues. The two main questions that the current study attempts to answer are: 1) Is the overall score given to a piece of writing by Grammarly reliable? and 2) Are all flagged issues actual errors? Answering these two questions will show to what extent Grammarly is a reliable assessment tool for academic written English. More specifically, the main research objectives of this study are:
1. Find the overall score given by Grammarly to articles written by specialist English native speakers.
2. Determine the types of writing issues Grammarly flags and the extent to which these flagged issues are actual errors.
By achieving these objectives, our study aims to contribute to identifying the extent to which Grammarly can be a reliable assessment tool, and more importantly, we identify the areas where Grammarly needs to improve its performance.
Literature review
There seems to be a consensus that using Grammarly is beneficial to English language users in general, although it has some weaknesses. Grammarly can catch typos, grammatical errors, and punctuation mistakes that human eyes might miss, especially in long documents; in addition, Grammarly can suggest improvements to sentence structure, word choice, and clarity, leading to easier-to-understand writing [6,12,15,16,17]. For example, Ref. [3], exploring the reactions of three Indonesian postgraduate students, reported that the three students believed that Grammarly had several strengths, including valuable feedback with explanations and examples, high speed, and ease of access, but that it sometimes provided misleading feedback. It has also been found that Grammarly could help even low-level English learners to improve their writing, although it was not beneficial in avoiding errors related to word forms and word usage [15]. Grammarly may suggest unnecessary changes or miss genuine errors, requiring user judgment. Also, Grammarly primarily focuses on mechanics and may not address higher-level writing concerns like argument flow or logical fallacies [18].
Comparing the feedback given by Grammarly with the feedback given by writing centers and academic writing consultants, researchers argued that the feedback given by human raters was more useful and effective than that of Grammarly [7,19]. For instance, Ref. [19] compared Grammarly's feedback and that of 10 consultants given to three essays (about 700 words each) written by first-year students in a writing center. She argued that the feedback given by human raters was better than Grammarly's feedback due to the number of repeated comments and the complex and inaccurate terms given by Grammarly. She added that while Grammarly offers valuable feedback, human raters still outperform automated tools in terms of providing subtle feedback and understanding the context of the text. In addition, human raters can provide more tailored suggestions, taking into account the specific requirements of the writing task and the intended audience.
Likewise, Ref. [8] compared the accuracy and type of Grammarly's feedback with that given by human raters, using 56 essays written by first-year students majoring in English at an Armenian university. The results showed that Grammarly's feedback was mostly accurate with few inconsistencies, while some errors detected by human raters were undetected by Grammarly. The study recommended using the feedback given by both Grammarly and human raters.
Studies that compared Grammarly with other engines such as Word and Ginger concluded that Grammarly tended to outperform those engines [e.g., 18,19]. For example, Ref. [7] explored academic advisors' perceptions of Grammarly's feedback compared with the feedback given by Word. They reported that Grammarly's feedback was perceived to be more effective than that of Word; however, Grammarly was not without problems, and therefore the authors recommended that Grammarly be used together with academic advisors' feedback. They added that Grammarly offers a more advanced error-checking system compared to Word's basic spellchecker; Grammarly goes beyond basic grammar, suggesting improvements for sentence clarity and conciseness.
Likewise, Ref. [4] examined the efficiency of five free grammar checkers, namely Grammarly, ProWritingAid, Ginger, After the Deadline, and Language Tool. To check their efficiency, the study used a collection of 500 sentences containing grammatical errors, and the results showed that Grammarly did better than the other checkers in detecting errors. The researchers concluded that Grammarly was the best grammar checker, a finding that agrees with the results of [12], who reported that Grammarly was useful in detecting and reducing grammatical errors.
¹ One of the researchers of this study submitted a paper to a well-established linguistics journal. One of the reviewers (a non-native speaker of English) used Grammarly to check the accuracy of the language of the paper, and that reviewer rejected the paper claiming that it had received a low score on Grammarly (which was misleading information). Luckily, the paper received positive feedback from the other reviewer, and the researcher had the opportunity to defend the paper.
Finally, Ref. [5], comparing Grammarly with the Microsoft Word spelling and grammar checker, reported that Grammarly was generally better than Microsoft Word, with a precision rate of 0.88 (i.e., 88 % of the errors it identified were, in fact, errors), while its correction rate was slightly lower (0.83); that is, 17 % of the errors were not flagged.
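The precision and correction figures reported for [5] can be read as standard precision and recall; the worked example below uses hypothetical counts purely to unpack what 0.88 and 0.83 mean.

```python
# Hypothetical counts chosen only to unpack the 0.88 / 0.83 figures reported for [5].
flagged = 100                 # issues the checker flagged
true_positives = 88           # flagged issues that were actual errors
precision = true_positives / flagged            # 0.88 -> 88% of flags were real errors

actual_errors = 100           # errors actually present in the texts
detected_errors = 83          # of these, the checker flagged 83
recall = detected_errors / actual_errors        # 0.83 -> 17% of real errors went unflagged

print(f"precision={precision:.2f}, recall={recall:.2f}")
```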
To sum up, Grammarly tends to be a useful engine that helps improve its users' English, albeit it has some weaknesses.However, no study has been wholly devoted to examining the reliability of Grammarly as an assessment tool.
Bearing in mind that AWE tools are in constant development, this study is an attempt to establish the extent to which Grammarly is a reliable assessment tool for academic writing.Unlike previous studies, it uses academic English in use as written by specialist native speakers of English and published in academic top-tier refereed journals (cf.Methodology).
Methodology
Ten articles were used in this study.² The criteria for selecting these articles were as follows. First, they must be written by English native speaker researchers; second, they must be published in Q1 Scopus-indexed, well-established, high-status journals in the field of linguistics. We used Scopus as a database, and we selected Q1 journals in the field of linguistics. We selected papers written by potential native speakers (by name and affiliation). This was further verified by checking their profiles and CVs on the Internet. When it was not evident that the writer was a native speaker, s/he was not selected. Note that selecting native speakers rather than non-native speakers does not entail that all native speakers are better academic writers than non-native speakers. Rather, we assume that specialist native speakers are more likely to be competent academic writers due to their long exposure to the language [20]. More than 30 papers were first checked, all of them published between 2004 and 2021. Of these 30 papers, we randomly selected 10: five written by American native speakers and five by British ones.
We selected articles published in Q1 Scopus-indexed well-established high-status journals in the field of linguistics to ensure that the articles are of the highest quality.The 10 articles appeared in the following journals: Applied Linguistics, International Journal of Bilingualism, Journal of Writing Research, Language Learning and Technology, Language Teaching, Natural Language and Linguistic Theory, Sage journals, Studies in Second Language Acquisition, and The Modern Language Journal.Note that we limited our analysis to the field of linguistics for three reasons: first, the field of linguistics is concerned with language and therefore it is expected that researchers writing in this field pay more attention to language issues.Second, the paper is concerned with academic writing, which is closely related to the field of linguistics.The final reason relates to space limitations.Covering other disciplines will render the study rather long and therefore we leave this for future research.
The PDF articles were converted into Word format using online converting tools, as Grammarly does not accept PDF files. This was followed by careful cross-checking by the researchers for any conversion issues. The researchers uploaded the articles (excluding tables, footnotes, endnotes, and references) one by one to Grammarly Premium to receive a complete analysis covering correctness, clarity, delivery, and engagement. Recall that correctness covers grammatical, spelling, and punctuation mistakes. Clarity relates to the use of wordy and unclear sentences, including the use of the passive voice. Delivery issues refer to the appropriate use of English with respect to the level of formality, politeness, usage conventions, and friendliness. Engagement issues relate to the use of synonyms and the avoidance of bland and overused words.
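The conversion step was done with unspecified online tools; as an offline analogue, a minimal sketch using the pypdf and python-docx packages (an assumption about tooling, not the authors' actual workflow; file names are placeholders) could look like this:

```python
# Minimal PDF -> .docx sketch; assumes the pypdf and python-docx packages.
from pypdf import PdfReader
from docx import Document

reader = PdfReader("article.pdf")                      # placeholder input file
text = "\n".join((page.extract_text() or "") for page in reader.pages)

doc = Document()
for paragraph in text.split("\n"):
    doc.add_paragraph(paragraph)                       # one paragraph per extracted line
doc.save("article.docx")                               # file to upload to Grammarly
```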
The following goals were selected: knowledgeable audience, formal language, academic domain, and American or British English (according to the writer's choice).The authors selected formal language and academic domain as this study aimed to assess Grammarly in academic writing within the field of linguistics, where formal, academic language is used.Note that the references were not included as they contain proper names and might be written with different formatting styles.Few footnotes in the articles were not included for technical reasons.
All the issues in the reports were checked one by one to verify that no errors were caused by the conversion of the PDF files.A dataset of all issues/suggestions given by Grammarly for each writer was compiled and classified.The 10 writers are referred to in this study as W1, W2 ⋯ W10.No noticeable differences were found between the American and British writers; therefore, all issues were combined and analyzed for all writers.
No sophisticated statistical tools were needed as the paper's focus was on frequencies and percentages and no comparisons between writers were needed.We counted the number of mistakes and categorized them using an Excel sheet.The researchers then checked the flagged issues to find out whether they were correctly identified or not.False positives are used in this study to refer to flagged errors that are not actual errors and accepting Grammarly's suggestions results in an error.For example, spotting the use of 'in' as an error in the phrase 'our focus in teaching grammar should be on functions rather than on forms', is considered a false positive, and changing the preposition will yield a grammatical error.Optional usage refers to issues that are not true errors but represent cases where writers have more than one choice, e.g., spelling 'non-native' with or without a hyphen.Note that false negatives (cases where Grammarly fails to identify errors) were not dealt with in this study.This is because the articles were written by specialist English native speakers and proofread by the journals.In fact, we did not come across any undetected errors in these articles.Besides, false positives are more detrimental than false negatives in second/foreign language acquisition, and AWE developers give priority to false positives [5,21].
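The categorization itself amounts to a simple tally over manually reviewed issues; a sketch of that bookkeeping (field names and rows are hypothetical, standing in for the Excel sheet described above) is given below.

```python
from collections import Counter

# Each flagged issue is reviewed by the researchers and given a verdict:
# 'true_error', 'optional' (optional usage), or 'false_positive'.
flagged_issues = [
    {"writer": "W1",  "type": "punctuation", "verdict": "optional"},
    {"writer": "W4",  "type": "grammar",     "verdict": "false_positive"},
    {"writer": "W10", "type": "spelling",    "verdict": "true_error"},
]   # hypothetical rows standing in for the full dataset

by_verdict = Counter(issue["verdict"] for issue in flagged_issues)
total = sum(by_verdict.values())
for verdict, n in by_verdict.most_common():
    print(f"{verdict}: {n} ({100 * n / total:.1f}%)")
```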
Issues that were not easily identified were checked in grammar usage references and in online corpora. Google Scholar and the Corpus of British Academic Written English (BAWE) [22] were used to further explore the usage of articles, hyphenation, wordy phrases, and collocations/concordances. BAWE is a British corpus of eight million words representing academic English written at UK universities. All issues were checked on Google Scholar and BAWE on March 10 and 11, 2022. Note that the use of Google Scholar here does not mean that written work on Google Scholar is always correct. We resort to Google Scholar to show cases where we know that the flagged item is not an error and to show that it is commonly used on academic search engines. To further check such cases, we asked three English native speakers who teach academic writing at university level to judge whether the Google Scholar usage is correct in academic English, to ensure that these are not common mistakes. This was done without telling them that these cases come from Grammarly or Google Scholar.
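The frequency checks were run through the Google Scholar and BAWE web interfaces; as a rough offline analogue, counting a phrase in a locally stored corpus sample (the file name is a placeholder) is straightforward:

```python
import re

def phrase_frequency(phrase, corpus_path="corpus_sample.txt"):  # placeholder corpus file
    """Count case-insensitive occurrences of an exact phrase in a plain-text corpus."""
    with open(corpus_path, encoding="utf-8") as f:
        text = f.read().lower()
    return len(re.findall(re.escape(phrase.lower()), text))

print(phrase_frequency("a direct result of transfer of the native"))
```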
Results and discussion
In this section, we first present the overall results of all articles and then detail the results according to the type of issue.
Overall results
The scores given to each article and the number of writing issues in each article are shown in Table 1.
As is clear from Table 1, Grammarly reports show that all the writers had writing issues that affected their overall scores.The average score for all writers stood at 83, which is unexpected given that the writers represent well-qualified, accomplished English writers.It is very likely that such a score would be disappointing to non-native speakers, let alone specialist English native speakers.
Correctness issues
The total number of correctness issues was 1196. This number is alarming: had these issues been true errors, it would suggest that the writers of these articles have serious problems in their grammatical competence. We will show here that few of these issues were actual errors (57 errors, 4.8 %) and that Grammarly has a serious problem of over-flagging. For ease of exposition, we will group these issues according to their types.
Punctuation issues
Punctuation issues were the most common totaling 522.The following table presents the most common punctuation issues with their frequency for all writers.In the last three columns, we present the number of actual errors in the articles, the number of issues that relate to optional usage, and the number of false positives.
As is clear from Table 2, on closer inspection, we found that of all these 522 cases, only two were true errors, and one of them appeared in a participant's quotation, which means it was not an error made by the article's writer.All the other flagged issues related to optional uses of punctuation marks except for four cases that were false positives.A false positive appeared in 'with recent events regarding ICE and the camps immigrants are being put into, the mindset … ' (W4).Grammarly incorrectly suggested using a comma after 'camps', which will render the sentence ungrammatical.An example of a false positive with the semicolon appeared in '[a]n expository essay and a discussion essay are classified together in the Essay genre family; a book review and … in the Critique genre family; and an annotated bibliography … classified in the Literature Survey genre family' (W10).Here, the semi-colon is used correctly to link independent clauses and the use of 'and' is necessary to link the third clause to the other clauses.Grammarly incorrectly suggested using a comma before 'and' and a semicolon after it, which results in an error.The closing punctuation false positive related to a heading in a paper where Grammarly suggested using a period after the title thinking it was a sentence in the body of the article.
All the other 516 cases can be related to optional usage of punctuation marks.That is, these issues flagged by Grammarly are not true errors; however, accepting Grammarly suggestions will not result in errors.The following table presents illustrative examples of the most common optional cases.
As is clear from Table 3, in the first five examples, Grammarly did not take into consideration that using commas here is optional [23,24].[23] explained that using a comma after an introductory word/phrase is optional, using a comma with short prepositional phrases also depends on the writer's preference, and using a comma with sentence adverbs like 'therefore' relies on the writer's judgment if these adverbs are parenthetical insertions or well-integrated into the sentence.In example 3, although the comma is not needed, it cannot be considered an error especially if the dependent clause is not closely related to the proposition in the main clause [23].All these cases of using commas are optional and depend on 'the writer's judgment and aesthetic considerations' [23].Hence, they cannot be considered errors.In the other examples, the punctuation marks were correctly used by the writers, but Grammarly spotted them as errors without giving any suggestions.The optional use of these punctuation marks needs to be taken into consideration, and such cases should not be counted as errors that affect the overall score.This is because the rules governing the use of punctuation marks are much debated in English, even among professional editors and major style guides and dictionaries; very often using or leaving a punctuation mark is a matter of choice or style [23].
Grammatical issues
The total number of grammatical issues was 320.Table 4 gives more details on grammatical issues.
As can be seen from Table 4, only 40 issues were true errors. Of these, 22 were made by the writers (many of them were typos), and 18 appeared in participants' quotations or resulted from the word-by-word translation of example sentences from other languages.

Examples from Table 3 (punctuation optional cases), continued:
3. 'This is a puzzling pattern, since neither the' (W1) → 'pattern since'
4. 'Certain types of student writing, for example proposals and' (W10) → 'writing for example'
5. 'The hidden power of media discourse and the capacity of… power-holders to exercise this power' (W7): Grammarly spotted the three dots used by the writer for ellipsis as an issue. This use is the norm in English to show ellipsis and cannot be considered an error. We classify this as an optional case rather than a false positive because Grammarly did not provide a suggestion that could result in an error.
6. 'In class, participants were instructed to: (a)' (W9): the use of the colon was spotted as an issue. The writers correctly used the colon to introduce a list, but Grammarly flagged it as an error without providing any suggestions.
7. 'Comparative forms ansab 'more appropriate',' (W2) → appropriate,': Grammarly considered the quotation marks around words given as translations of words from other languages to be errors. These are not errors, and they follow the standard way of using words from other languages in academic journals. Note that Grammarly sometimes flagged such cases as 'improper formatting' (cf. Section 5.2.4).

For errors appearing in participants' quotes, these cannot be counted as errors made by the writers, and they should not affect the overall score. The writers almost always indicated that the sentence had an error using [sic], as in '[i]t contributed to stress but resulted [sic] a simple rough draft' (W10). Grammarly identified this as an error but failed to take into consideration that this was a quotation given by a study participant and cannot be changed in research. Similarly, the word-by-word translations cannot be considered errors because the writers used them on purpose, following journals' guidelines that require example sentences from other languages to be reported in this way. For example, writer 2 used the following sentence from Arabic: 'Ahmed taller from Basem'. This sentence is not grammatically correct in English; nevertheless, counting it as an error affecting the writer's score is not justified; the writer did not make an error here and did not mean to write an English sentence; in fact, the writer gave the correct English translation next to the word-by-word translation. If this was checked by a human rater, it is very likely that it would not be marked as an error.
The remaining 81 issues were false positives.Grammarly incorrectly flagged these as errors and offered suggestions to fix them, which will result in explicit grammatical errors.For example, Grammarly incorrectly flagged the use of 'who' as a pronoun use error in 'if he were teaching students who all spoke English … ' (W5).Grammarly suggested using 'whom', which will render the sentence ungrammatical.More illustrative examples of false positives are given in Table 5.
As is clear from Table 5, the above examples clearly show that accepting Grammarly's suggestions results in explicit grammatical errors. Further examples from Table 5 include: means → mean (Grammarly incorrectly suggested using the past participle instead of the direct object, assuming that a verb is needed after 'have'); 4. 'A sudden transition from teaching in their native language to teaching in English' (W5): teaching → teach (Grammarly mistakenly assumed that 'to' must be followed by the infinitive, failing to realize that 'to' here is a preposition that needs a noun/gerund; in another case, Grammarly failed to realize that the verb is 'may be' and that 'occurring' begins a reduced relative clause); and 12. 'An initial methodological point is that the participants studied in these articles are mostly instructed students of an L2, for instance Tomczak and Ewert's (2015) Polish university students of English, rather than so-called natural L2 learners with little contact with language teaching, such as …' (W6): teaching → teachings (Grammarly incorrectly suggested 'teachings', which changes the meaning from 'the practice of language teaching' to 'language principles').

The rest of the grammatical issues can be classified as optional cases. The majority related to determiner use (119 cases), as in 'the thinking of L2 users is not interpreted as failure to achieve …' (W6). Here, using 'failure' to mean lack of success with or without an article is optional, as it can be countable or uncountable. In another example, Grammarly suggested adding the definite article 'the' before 'transfer' in 'a direct result of transfer of the native language' (W1). Such cases represent optional usage in English and therefore cannot be counted as true errors. To further explore the use of the zero article with singular nouns, the researchers randomly selected five nouns that were used without articles and flagged as errors in the ten articles, searched their frequency on Google Scholar and BAWE, and found that these nouns were frequently used without articles (Table 6). This was further confirmed by the three English native speaker consultants who were asked to judge the correctness of these cases.
As is clear from Table 6, the use of such nouns without articles, especially the definite article cannot be counted as errors, especially because there are no watertight rules about using the definite article in English; the question of definiteness in English is rather complicated and requires considerable semantic, pragmatic, and contextual information [25,26].More illustrative examples of optional cases are given in Table 7.
Spelling/word issues
The total number of issues here was 312.Only nine were true errors; many of them appeared in participants' quotes.More details are presented in Table 8.
From Table 8, we can see that 111 of the flagged words were false positives, and 48 words were examples from other languages such as Arabic waati 'low' and Mandarin gao 'tall'. Sometimes Grammarly flagged them as misspelled words; other times it flagged them as confused words. These words were used as examples in the articles, and Grammarly was unable to recognize them and considered them misspelled. The problem here is that Grammarly offered other words to replace these words, which results in explicit, off-context words. For example, Grammarly suggested replacing gao with 'ago/go' and waati 'low' with 'water'. Had Grammarly flagged these as unknown words, as it did sometimes, it would have been safer and better for its users.
The other 22 cases related to misspelling individual words or confusing them, as shown in Table 9.
The rest of the issues (192) were optional cases. Of these, 51 related to spelling compound words with a hyphen when used attributively, as in 'Additionally, journalistic features, for example, the order of the information, agenda setting and space allocation, in general, and quotation patterns, in particular, play an important role in implementing particular perspectives, and hence, ideologies.' (W7). Grammarly always required a hyphen in such cases (→ agenda-setting). Although the hyphenated form is correct, omitting the hyphen cannot be considered an error. It is well known in English that the spelling of compound words goes through three overlapping stages: spelled as two separate words, spelled with a hyphen, and finally spelled as one word [23]. It is very common to find the same compound word spelled with and without a hyphen interchangeably. All 51 flagged words were checked for their frequency on BAWE (Table 10), and most of them were used both with and without a hyphen, with a higher frequency for spellings without a hyphen. This is in line with the advice of [24] not to overuse hyphens in familiar phrases and when there is no risk of confusion. Note that Google Scholar was not used here as it lists words both with and without a hyphen simultaneously.
As is clear from Table 10, 58 optional issues related to unknown words.The majority were words from other languages such as Arabic, Spanish, and Mandarin; some were technical words such as 'templatic' 'bisyllabicity, 'genre', and 'ethnoracial'.Recall that Grammarly detected these words as confused words in some cases and as unknown words in others.Note that Grammarly did not provide suggestions to replace these words; it only flagged them as unknown words.Therefore, we classify them as optional cases rather than false positives (unlike confused words, where Grammarly provided suggestions to replace them).
Another common issue among optional confused words related to the writers' use of single quotation marks, as in "Mautner (2007) argues that 'what large-scale data are not well suited for, on the other hand, is making direct, text-by-text links between the linguistic evidence and the contextual framework it is embedded in'. Theses …" (W7). Strangely, Grammarly usually flags the use of one of the quotation marks. Finally, a few cases related to variant spellings of some words, cf. 'focussing', which is a less common variant of 'focusing' in British English and should not be considered an error.
Other issues
A small number of other issues related to improper formatting (21 issues), text inconsistencies (15 issues), and mixed dialects of English (6 issues).
Improper formatting issues were not errors; all of them related to word-by-word translations from other languages or to the use of numerals rather than spelled-out numbers, as in 'patients with diabetes and who require long-term (at least 1 month)' (W10). Grammarly flagged '1' as an error; however, this use is not wrong, as it is accepted to use numerals when the numbers precede a unit of measurement, with statistical functions, percentages, and ratios [28].
Table 6
Nouns without articles (phrases checked are underlined).
Phrase (checked portion): Google Scholar; BAWE
their courses and percentage of time: 751,000; 6
arise only after confrontation with input data: 1,160; 12
as a direct result of transfer of the native: 679,000; 22
in which a monosyllabic input is augmented by addition of a vowel: 3,480,000; 12
mental representations … must be captured in language before they are lost: 2,340,000; 111
Of the 15 text inconsistency issues, six were flagged correctly by Grammarly, as the writers used both American and British spellings or used compound words with and without a hyphen (e.g., nonnative and non-native). The other nine cases were not errors. Some words came at the beginning of new sentences, so they were capitalized, and some words came at the end of the line, so the writer hyphenated them. In other cases, some words were used technically, e.g., 'condition' in 'for each dependent variable, two nested models were built … and the model adding a fixed effect for Condition (control vs. intervention)' (W6). The six mixed-dialect issues are not errors either, as they relate to variant spellings of words in British English, e.g., 'percent/per cent'.
To conclude our discussion of correctness issues, the large number of false positives and optional issues flagged by Grammarly shows that Grammarly tends to over-flag, and its reliability as an assessment tool seems to be questionable.
Clarity (wordy sentences) issues
Clarity issues flagged by Grammarly were 578, ranging from 37 to 81. All these issues related to using phrases where single words would suffice, e.g.,
using 'a number of', as in 'the literature on second language acquisition contains a number of reports', and 'with respect to', as in 'we can compare the rankings of the faithfulness constraints with respect to the two markedness constraints'. Grammarly flagged these phrases and suggested replacing them with 'some/many' and 'concerning', respectively. Another example is using 'so as to', as in '[i]t was explained to the participant that the goal of this manipulation was to provide feedback … and encourage them to write fluently so as to avoid losing sight of their text' (W6), where Grammarly suggested 'to' instead of 'so as to'. Although 'to' can be used here, it does not express the same level of formality, and it will result in many repetitions of 'to'. In fact, this use of 'so as to' in English is very common (see Table 11 below). Even though shorter words are better in academic English, using their longer counterparts should not be regarded as 'errors', especially when they are used to add variety and avoid repetition. To find out whether Grammarly's suggestions to avoid wordiness reflect actual use in academic English, we further checked the frequency of the most common flagged phrases on BAWE and Google Scholar (Table 11).

Table 7
Optional grammatical issues (writer's use → Grammarly's suggestion; remarks).
1. 'There is general consensus' (W9) → 'a general': both are correct and used in English, as reflected by the high frequency of both on Google Scholar: 43,200 (without 'a'); 89,300 (with 'a').
2. 'Structures which violate' (W1): which → that: although the use of 'that' in restrictive clauses is recommended (cf. APA Publication Manual), using 'which' is optional and cannot be considered an error.
3. 'The articles included in this issue divide between' (W6): divide → are divided: although the suggestion is grammatically correct, the use of 'divide' as an ergative verb is perfectly acceptable.
4. 'I argued that this ranking can be seen' (W1): can → could: using the modals 'can, will, may' in past-tense sentences should not be considered an error, as mixing of tenses here is accepted to convey different levels of tentativeness and hedging [27]. The phrases 'I argued that this can', 'explained how they will', and 'explained how he can' appeared 54, 136, and 68 times, respectively, on Google Scholar.
5. 'Patients with diabetes and who require' (W10): 'and': this phrase appeared 96 times on Google Scholar.

Table 9
Spelling/word issues: false positives (writer's use → Grammarly's suggestion).
1. 'Emotional arousal, and not just valence as more commonly investigated, may be a telling facilitator of vocabulary learning' (W4): may be → maybe
2. 'This sequence to minimize any one subtest' (W4): any one → anyone
3. 'A higher total score is predicted on the immediate posttest' (W4): predicted → predicated
4. 'Though few studies explore questions specifically surrounding affect within the context' (W4): affect → effect
5. 'Are marked in boxes' (W2): in boxes → inboxes
6. 'The methodology was extended to investigate variation among university registers, where marked differences emerged' (W10): where → were
7. 'Extracts 2a-c are' (W10): c are → Care
8. 'Should contain a foot' (W1): a foot → afoot

Table 10
Words spelled with/without a hyphen on BAWE.
The frequencies in Table 11 show that these wordy phrases are common in academic English.Also, the three native speaker consultants confirmed that the use of these phrases in academic English is not wrong.Given that such phrases are frequent in academic English, and they are used by native English writers, it is not justified to adopt a too prescriptive attitude toward using them; they should not be considered as errors affecting the overall score of submitted writing to Grammarly.
Moreover, some of these wordy phrases are used purposefully by the writers to add emphasis or variety.Therefore, replacing these phrases as per Grammarly's suggestions may result in a loss of emphasis or variety.Consider the use of the underlined words to add emphasis in the following example: '[t]he idea of linguistic relativity put forward by Benjamin Lee Whorf often comes up in discussions of bilingual cognition, though he himself is actually cited in only two of the articles here' (W6).Likewise, some wordy phrases should be used to avoid repetition.For example, 'a number of' would be used to replace its one-word synonyms to add variety when the need arises.
Before closing this section, it is worth mentioning that Grammarly flagged 638 issues of using the passive voice.Grammarly flags any use of the passive voice and suggests rewriting it in the active voice.We will not analyze the legitimacy of using the passive voice in the articles in this study as it is a complicated issue that is beyond the scope of this paper; however, it suffices to say that using the passive voice in academic English is an integral part of academic written English [29], although its use has declined over the years [30,31].
Word choice/engagement issues
Grammarly flagged 289 word-choice issues, supposedly to avoid overused words and repetition. However, we found that in some cases accepting Grammarly's suggestion could change the meaning of the sentence, as in 'because two words of each type of emotion were included in each text, the maximum total score for each word type (positive, neutral, negative) within each text is 6' (W4). Grammarly suggested replacing 'negative' with 'harmful'. In another instance, Grammarly suggested using 'antagonistic' instead of 'negative' in 'the CDA notions described earlier enabled the assignation of more explicit and finer semantic/discourse prosody values than merely assigning a general positive/negative bias' (W7). A final example is Grammarly's suggestion to replace 'representative' with 'figurative' in 'which may not be a representative language for many reasons' (W6). These examples show that some of Grammarly's suggestions are misleading and could result in unnatural English phrases.
To further examine Grammarly's engagement suggestions, we compared the frequency of some phrases in the articles with the frequency of the phrases suggested by Grammarly on Google Scholar and BAWE (Table 12).
From Table 12, it is clear that four of the phrases used in the articles are more common than those suggested by Grammarly, while the other three phrases are in very common use. Moreover, the three English native speaker consultants confirmed that the use of the phrases in the articles is correct in academic English. This shows that the use of these words is not erroneous; hence, they cannot be regarded as issues that affect the overall score of written work.
Delivery issues
Grammarly flagged 238 delivery issues related to inappropriate colloquialisms (231 issues), tone suggestions (five issues), and sensitive language (two issues).
Most inappropriate colloquialism issues were about using first and second person pronouns such as 'I' and 'you'.Although using such pronouns is generally avoided in academic writing, many publishers and journals tolerate their use and therefore these should not be regarded as errors [32].More importantly, most issues flagged by Grammarly here are different as these first/second-person pronouns in the articles were used in quotations by the study participants, as in '[y]our wife also works at Jal' (W6).Sometimes, the writers used 'I' in example sentences to explain an issue as in '[t]he relationships between the elements of the scene are expressed in terms of physical motion, the manner in which it takes place, and the path it follows, say, in I walked along the road or I ran around the track' (W6).
Other flagged issues related to using the coordinating conjunctions 'and', 'yet', and 'but' at the beginning of sentences, where Grammarly suggested using other conjunctions such as 'however', 'nevertheless', and 'furthermore'. Grammarly explains that using these conjunctions at the beginning of sentences is not a grammatical error but a matter of style. This further confirms that counting these issues toward the overall score awarded to pieces of writing is unwarranted. Ref. [33] reports that using conjunctions to start sentences is quite common in written English when writers want to add something new to what they have just said. Other flagged issues were about using contractions such as 'don't', 'didn't', and 'isn't'. These words again appeared in participants' quotes.
For tone suggestions, Grammarly suggested that using 'sort of', 'simply', and 'just' weakens the writer's message. This is true; however, not all cases where such phrases/words appear are mistakes. Consider the use of 'sort of' in '[l]anguages frequently impose restrictions on what sorts of consonants may occur in syllable codas, ranging from a prohibition against any sort of coda to a prohibition' (W1). Here Grammarly could not differentiate between the noun 'sort', which means a type, and 'sort' in the adverb 'sort of', meaning 'somewhat'.
Finally, Grammarly flagged 'foreign' and 'freshman' in 'foreign students' and 'university freshman' as issues of sensitive language.Grammarly advised that such terms might be considered offensive, outdated, non-inclusive and disrespectful.In fact, these words were not used by the writers themselves; rather they appeared in participants' quotations.
Concluding remarks and implications
It has been shown that Grammarly over-flagged a considerable number of writing issues. The huge number of over-flagged issues is misleading, as it suggests that these writers have serious problems in their writing. Concerning correctness issues, only 57 (4.8 %) were true errors; most of the other cases were either optional cases (911, 76 %) or false positives (228, 19 %). Concerning clarity, Grammarly's performance was also not without problems; the flagged issues were not errors, and accepting some of Grammarly's suggestions would change the meaning or could affect variety or emphasis. Accordingly, these issues should not be counted toward the overall score of written work.
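The three category counts reconcile with the 1196 flagged correctness issues reported earlier; the quick check below reproduces the percentages.

```python
# Reconciling the reported correctness-issue categories with the total of 1196.
true_errors, optional_cases, false_positives = 57, 911, 228
total = true_errors + optional_cases + false_positives   # 1196

for label, n in (("true errors", true_errors),
                 ("optional cases", optional_cases),
                 ("false positives", false_positives)):
    print(f"{label}: {n} ({100 * n / total:.1f}%)")       # ~4.8%, ~76.2%, ~19.1%
```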
For engagement issues, most issues were suggestions that were supposed to avoid overused words and repetition.Again, these cases should not be regarded as issues that affect the overall score especially because the words used by the writers are in common use in academic English.Moreover, accepting some suggestions would result in unidiomatic language.Finally, the most common delivery issues flagged by Grammarly related to inappropriate colloquialisms that were used in participants' quotations and therefore they do not represent writing issues and should not be counted toward the overall score.
As an answer to the first research question (is the overall score given to a piece of writing by Grammarly reliable?), our results show that the score given by Grammarly does not seem to be reliable for assessing academic written English. Concerning the second question (are all flagged issues actual errors?), the results confirm that Grammarly over-flags many writing issues. Three factors might explain why Grammarly tends to over-flag such issues. First, Grammarly was misled by examples from other languages. In linguistics research, it is very common to use such words and examples, and Grammarly developers should find a way to deal with this case. Second, Grammarly failed to recognize technical terms and special uses of certain words in reporting research findings (e.g., Condition, Section 5.2.4). Third, many issues related to optional cases of usage. These optional cases should not be considered errors as they are more linked to style, which Grammarly does not seem to be able to account for.
Our results are similar to previous research on Grammarly in that they show that Grammarly gives instant feedback on many language issues but that this feedback can sometimes be wrong ([6,12,4]; see the literature review above). Moreover, our results lend support to Refs. [8,19] and [7], who advised that Grammarly should be used along with academic advisors' feedback as it has some inconsistencies. However, our findings contradict those of [5], who reported that Grammarly had a high precision rate of error detection. The results of our study also differ from all previous research in that they show precisely the extent to which Grammarly can be reliable and how much of its feedback is correct.
To recap, our results suggest that using Grammarly as a summative evaluation tool is questionable [13], but it can be used in conjunction with human raters to yield better results [8,34]. This does not mean that Grammarly cannot be a useful tool. Grammarly does a good job of identifying many ambiguous instances of language use that writers would do well to review and consider revising. However, Grammarly tends to over-flag, and therefore Grammarly users should be cautious about interpreting Grammarly's scores and accepting its suggestions. Accepting all suggestions blindly could result in ungrammatical or unidiomatic sentences and distortion of intended meaning. If Grammarly's developers take this into consideration and give more room to optional usage, Grammarly could be an invaluable tool. Furthermore, Grammarly should clearly point out that its suggestions do not mean that the words/structures in question are incorrect, unless they relate to explicit, undisputable grammatical errors. More importantly, these suggestions should not count toward the overall score given to a piece of writing, as this could result in undesirable, unjustified consequences. We suggest that Grammarly should give more than one score to a piece of writing, based on the type of issues involved. In this way, users will not be misled by the overall score given to submitted writing. These findings have pedagogical implications as well. Educators and users of Grammarly should deal with Grammarly's scores and suggestions cautiously. They need to double-check whether these suggestions are true errors before accepting and integrating Grammarly's suggestions into their written work.
One limitation of this study is that it restricted itself to false positives and optional cases. A future study that focuses on false negatives (undetected errors) and the reasons that lead to them is highly recommended. This can be achieved by submitting texts with different types of mistakes to find the types of mistakes that Grammarly may not detect. Another limitation relates to the field. This paper was concerned with writing in the linguistics field. It is recommended to explore the performance of Grammarly in other fields such as science, business, and medicine.
Fig. (2) shows how the disciplines of Food Science, Chemistry, Engineering, and Meteorology cluster at the positive end of Dimension 1, together with the Methodology Recount and Design Specification genre families, all of which have means greater than +8. (W10)
Table 1 Overall results.
Table 3 Punctuation optional cases. In the first four examples, the use of the comma is optional (e.g., 'Coefficients therefore represent', W4).
Table 11 Frequency of wordy phrases.
Table 12 Frequency of phrases with engagement issues (phrases in the articles are on the left; the first frequency relates to the writers' phrases, while the second relates to Grammarly's phrases). | 2024-07-24T15:05:08.022Z | 2024-07-01T00:00:00.000 | {
"year": 2024,
"sha1": "fb75a0b2f9763b23ef94db5b73c327c7c481c9fa",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1016/j.heliyon.2024.e34893",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb39ac2aa3c4eff2d1fbabc9264e7411c38b7f96",
"s2fieldsofstudy": [
"Linguistics",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255449495 | pes2o/s2orc | v3-fos-license | A Stereotactic Probabilistic Atlas for the Major Cerebral Arteries
Improved whole brain angiographic and velocity-sensitive MRI is pushing the boundaries of noninvasively obtained cerebral vascular flow information. The complexity of the information contained in such datasets calls for automated algorithms and pipelines, thus reducing the need of manual analyses by trained radiologists. The objective of this work was to lay the foundation for such automated pipelining by constructing and evaluating a probabilistic atlas describing the shape and location of the major cerebral arteries. Specifically, we investigated how the implementation of a non-linear normalization into Montreal Neurological Institute (MNI) space improved the alignment of individual arterial branches. In a population-based cohort of 167 subjects, age 64–68 years, we performed 4D flow MRI with whole brain volumetric coverage, yielding both angiographic and anatomical data. For each subject, sixteen cerebral arteries were manually labeled to construct the atlas. Angiographic data were normalized to MNI space using both rigid-body and non-linear transformations obtained from anatomical images. The alignment of arterial branches was significantly improved by the non-linear normalization (p < 0.001). Validation of the atlas was based on its applicability in automatic arterial labeling. A leave-one-out validation scheme revealed a labeling accuracy of 96 %. Arterial labeling was also performed in a separate clinical sample (n = 10) with an accuracy of 92.5 %. In conclusion, using non-linear spatial normalization we constructed an artery-specific probabilistic atlas, useful for cerebral arterial labeling.
Introduction
Cerebrovascular imaging is critical in diagnosing several neurological disorders (Kronzon and Tunick 2006;Mueller et al. 2005) as well as in research investigating cerebrovascular physiology and pathophysiology (Amin-Hanjani et al. 2015;Muller and Van Der Graaf 2012;Rivera-Rivera et al. 2015;Zarrinkoob et al. 2015). Several recent technological advancements, embodied in 4D flow MRI, allow for acquisition of data such as velocity, flow rate, turbulence patterns and pulsatility in brain arteries (Frydrychowicz et al. 2011). Therefore, post-processing methods that can transform these datasets into meaningful standardised quantitative descriptions of blood flow in the individual will soon be needed.
Probabilistic tissue maps are commonly used for tissue segmentation in the whole brain (Ashburner and Friston 2005) and in specific areas such as the cerebellum (Diedrichsen et al. 2009;van Baarsen et al. 2016) in both humans and animals (Love et al. 2016). However, such methods have not been explored for labeling and segmentation of cerebral arterial branches.
An atlas describing the spatial distribution of individual arteries could enable a high degree of automation in applications that require labeling of vascular segments. Pioneering work has provided detailed descriptions on individual segments (Nowinski et al. 2009a, b, 2013), as well as probabilistic information on the morphology of the cerebrovascular tree (Forkert et al. 2013;Mut et al. 2014). Recently, we combined these two properties into a probabilistic artery-specific atlas and an automatic, atlas-based artery-identification method (AAIM) (Dunås et al. 2016). The promising results from that proof-of-concept study motivate the construction of an atlas based on a large population-based sample, which would give a more representative and improved spatial coverage of the anatomic variation between individuals.
To validate such an arterial atlas in a meaningful way, the purpose of the atlas must be considered. Therefore, atlases developed for segmentation and labeling are generally evaluated for that specific task (Dunås et al. 2016;Forkert et al. 2013;Passat et al. 2005). Another important evaluation is based on describing the underlying spatial alignment of the structures that are included in the atlas.
The aim of this study was to create a stereotactic and probabilistic cerebral arterial atlas by manually labeling 167 high-resolution 4D flow MRI angiographic scans. This atlas was validated based on its applicability for arterial labeling, and the impact of the normalization process was investigated by comparing non-linear normalization with rigid-body alignment.
Materials and Methods
In this study, 2360 cerebral arteries were manually labeled in 4D flow MRI from 167 healthy elderly subjects and used to create the atlas. The workflow had five main steps: 1. Data acquisition; 2. Spatial image normalization and preprocessing; 3. Atlas construction; 4. Comparison to rigid-body alignment; and 5. Atlas validation.
Data Acquisition
Subjects were drawn from the population-based COBRA study (Cognition, Brain and Aging; Nevalainen et al. 2015), a large, prospective MR imaging study. In summary, subjects between 64 and 68 years of age were randomly selected from the population registry of Umeå, Sweden. Subjects with medical conditions or medical or surgical interventions that could alter brain function or cognitive performance, such as history of brain trauma or stroke, dementia, diabetes, functional impairment or movement disorders (e.g., Parkinson's disease), epilepsy, intellectual disability, psychological disorders, and ongoing malignancy treatment, were excluded, as well as subjects with contraindications for MRI.
As a part of the COBRA study, we collected 4D flow MRI data from 181 subjects (age 66.2 ± 1.2, M = 100, F = 81). Out of these, thirteen subjects were excluded due to constraints regarding data quality (e.g. motion artifacts) and one due to a vascular malformation. This resulted in a sample of 167 subjects (age 65.8 ± 1.2, M = 97, F = 70) on which the atlas was based.
Clinical Sample
For a pilot test on a clinical sample, ten patients with transient ischemic attacks (n = 6) or lacunar infarcts (n = 4) were included. This diagnosis was based on case history, neurological and brain MRI examination. All patients were also investigated with CT angiography (0.6 mm slices). CT did not reveal any stenosis or occlusion of internal carotid, vertebral or basilar arteries, or in the middle, anterior or posterior cerebral arteries.
Normalization
Tissue probability maps for white matter, grey matter and cerebrospinal fluid were generated from the structural T1-weighted (T1w) images using SPM8's New Segment. From these tissue probability maps, a study-specific brain template was generated using SPM8's DARTEL (Ashburner 2007). In that process, a subject-specific transformation field that described the nonlinear transformation from each subject to the study template was also generated. With these transformation fields, the angiographic image (AI) of each subject was transformed to match the group template and then normalized to stereotactic Montreal Neurological Institute (MNI) space by aligning it with the MNI152-template (Evans et al. 2012) using an affine transformation.
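The normalization itself was carried out with SPM8/DARTEL in MATLAB, so no Python code from the study exists; purely as a hedged, tool-agnostic illustration of the final resampling step, the sketch below applies a precomputed pull-back deformation field (one source coordinate per template voxel) to an angiographic volume. The function name, the field layout and the file names in the usage comment are assumptions and do not reflect SPM's actual interface.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_pullback_deformation(moving_img, coords, order=1):
    """Resample `moving_img` onto a template grid.

    moving_img : (X, Y, Z) ndarray, subject-space angiographic image (AI).
    coords     : (3, Xt, Yt, Zt) ndarray; for every voxel of the template
                 grid, the corresponding voxel coordinate in `moving_img`
                 (a "pull-back" field, as most registration tools store warps).
    order      : interpolation order (1 = trilinear).
    """
    return map_coordinates(moving_img, coords, order=order, mode="constant", cval=0.0)

# Hypothetical usage: warp one subject's AI into template/MNI space.
# ai = np.load("subject_ai.npy"); field = np.load("subject_to_template_field.npy")
# ai_mni = apply_pullback_deformation(ai, field)
```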
Vascular Segmentation and Skeleton Construction
The AI was smoothed with a low-pass box filter with a kernel size of three voxels. To give complete vessel coverage without including neighboring static tissue (Wåhlin et al. 2012), the image was binarized by thresholding at 18 % of the maximum intensity value.
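A minimal sketch of this preprocessing step (box smoothing followed by binarization at 18 % of the maximum intensity) is given below; the variable names are illustrative, and whether the maximum refers to the raw or the smoothed image is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def binarize_angiogram(ai, kernel_voxels=3, rel_threshold=0.18):
    """Smooth the angiographic image with a box filter and binarize it at a
    fraction of the maximum intensity (0.18 as reported in the paper).
    The maximum of the smoothed image is used here, which is an assumption."""
    smoothed = uniform_filter(ai.astype(np.float32), size=kernel_voxels)
    return smoothed > rel_threshold * smoothed.max()
```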
To separate the branches of the vascular tree a vascular skeleton was extracted from the binary image using an automatic method containing three steps: 1. The binary vessel tree was gradually thinned until a one-voxel thick skeleton was obtained (Palàgyi and Kuba 1998); 2. The vascular skeleton was pruned to remove loops and branches shorter than eight voxels; 3. The vascular skeleton was divided into branches separated by junction-points, and each branch was assigned an identification number (Chen and Molloi 2003). Figure 1 illustrates the segmentation and skeleton construction.
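The study used the thinning algorithm of Palàgyi and Kuba (1998) and the branch separation of Chen and Molloi (2003); the sketch below only illustrates the general idea (skeletonize, cut at junction voxels, prune short spurs, label the remaining branches) with off-the-shelf scikit-image and SciPy routines, which are not the same algorithms and which omit the loop pruning.

```python
import numpy as np
from scipy.ndimage import convolve, label
from skimage.morphology import skeletonize, remove_small_objects

def split_skeleton_into_branches(vessel_mask, min_branch_voxels=8):
    """Skeletonize a binary vessel mask and split the skeleton into branches.

    Returns a label volume in which every branch carries its own
    identification number, plus the number of branches found.
    """
    # Older scikit-image versions need skeletonize_3d for volumetric input.
    skel = skeletonize(vessel_mask).astype(bool)
    # Count 26-connected skeleton neighbours of every skeleton voxel.
    neighbours = convolve(skel.astype(np.uint8),
                          np.ones((3, 3, 3), dtype=np.uint8),
                          mode="constant") - skel
    junctions = skel & (neighbours >= 3)          # voxels where branches meet
    branches = skel & ~junctions                  # cut the skeleton at junctions
    # Prune spurs shorter than the minimum branch length (8 voxels in the paper).
    branches = remove_small_objects(branches, min_size=min_branch_voxels,
                                    connectivity=3)
    branch_ids, n_branches = label(branches, structure=np.ones((3, 3, 3)))
    return branch_ids, n_branches
```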
Included Arteries
The arteries included in the atlas were selected based on their fundamental role in the cerebral arterial circulation.
The included arteries were: left and right internal carotid artery (ICA); basilar artery (BA); left and right vertebral artery (VA); left and right posterior cerebral artery (PCA); left and right, proximal and distal middle cerebral artery (MCA); left, right and distal anterior cerebral artery (ACA); and left and right posterior communicating artery (PCoA).
Manual Arterial Labeling
The arteries were manually labeled using an in-house tool developed in Matlab (Mathworks, MA, USA). The vessels of the brain were visualized as a rotatable 3D volume. The arterial segments forming each artery were selected, and the corresponding vascular skeleton branches were labeled and saved. The border between two arteries was primarily defined according to junction points in the vascular skeleton. When no such junction points were present (e.g. due to an absent branch) the border was manually determined based on morphology such as changes in diameter or direction (Osborn 1999).
To re-inflate the labeled arteries, the vascular skeleton branches with the corresponding labels were dilated with a kernel of size seven voxels, and the resulting volume was multiplied with the binary vessel tree. The labeled arteries were then visually inspected and approved by a neurologist (LZ). When there was uncertainty regarding which branches that should be included (primarily in MCA), a consensus decision was made by LZ and TD (5 and 3 years of experience in neurovascular anatomy). An example of manually labeled arteries from one subject is presented in Fig. 2.
A probability map was constructed for each artery by adding together the binary volumes of the re-inflated arteries for all subjects and dividing the value of each voxel by the number of included arteries. This resulted in a 3D volume with values between zero and one, corresponding to the proportion of included arteries that overlapped in each voxel. These 16 probability maps together form the atlas, which was denoted Umeå Brain Arteries (UBA167).
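In array terms, the re-inflation of the labeled skeleton branches described above and the accumulation of a probability map reduce to a dilation, a masking step and an average. The sketch below is a simplified illustration that assumes one binary artery volume per subject, already in MNI space; the 7-voxel kernel follows the description above, but the function names are illustrative.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def reinflate_artery(labeled_skeleton_mask, binary_vessel_tree, kernel_voxels=7):
    """Dilate a labeled skeleton branch and clip it to the binary vessel tree."""
    structure = np.ones((kernel_voxels,) * 3, dtype=bool)
    dilated = binary_dilation(labeled_skeleton_mask, structure=structure)
    return dilated & binary_vessel_tree

def build_probability_map(artery_masks):
    """Average the binary artery volumes of all subjects that have the artery.

    `artery_masks` is an iterable of boolean (X, Y, Z) arrays in MNI space; the
    result holds, per voxel, the proportion of included arteries overlapping there.
    """
    masks = [np.asarray(m, dtype=np.float32) for m in artery_masks]
    return np.sum(masks, axis=0) / len(masks)
```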
Rigid-Body Atlas
To determine the contribution from the DARTEL normalization, the UBA167 was compared with an atlas where the vascular trees were aligned using a rigid-body transform. The obtained volumes were transformed to native space and aligned to the MNI152-template using a rigid-body transform in SPM8. This procedure was required since the manual labeling of each subject was performed after DARTEL normalization. Probability maps for each artery were calculated in the same way as for UBA167.
Since the arteries were extracted after DARTEL normalization, the arterial volumes had to be transformed back to native space before aligning them to the MNI template in order to create the rigid-body atlas. Due to partial volume effects, the images had to be re-binarized after the rigid-body alignment. This was done using a threshold adapted for volume conservation of each probability map.
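One plausible reading of this volume-conserving re-binarization is to threshold each interpolated artery image at the intensity of its k-th brightest voxel, with k equal to the original voxel count; the sketch below implements that reading and is not the authors' code.

```python
import numpy as np

def rebinarize_conserving_volume(interpolated_mask, target_voxel_count):
    """Binarize an interpolated (fuzzy) mask so that the number of voxels above
    threshold equals the volume of the original binary mask (ties can make the
    final count differ by a few voxels)."""
    if target_voxel_count <= 0:
        return np.zeros_like(interpolated_mask, dtype=bool)
    flat = interpolated_mask.ravel()
    # Intensity of the k-th largest voxel becomes the cut-off.
    kth_value = np.partition(flat, -target_voxel_count)[-target_voxel_count]
    return interpolated_mask >= kth_value
```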
Atlas Characterization
The normalization procedure was evaluated by comparing the UBA167 to the rigid-body atlas. Four measurements were used to describe the spatial distribution of arteries in terms of alignment between subjects and separation between arteries in the probability maps. For each artery probability map, the measurements were: 1. Concatenated volume calculated from the number of non-zero voxels, i.e. voxels occupied by an artery in any of the 167 subjects; 2. The arterial volume ratio (AVR) obtained by dividing the concatenated volume of the probability map by the average arterial volume across the included subjects; 3. The dominating volume of each probability map calculated as the percentage of voxels where the artery in question had higher probability than any other overlapping artery; 4. The maximum value of each probability map.
A low AVR equals a high spatial agreement between subjects, and hence a high specificity for separating arteries from background. The dominating volume describes how well the probability maps are separated, which could be translated to the specificity for separating different arteries.
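Assuming each artery's probability map is available as a NumPy array together with the average single-subject artery volume, the four measures could be computed roughly as follows; the variable names and the exact handling of ties are assumptions.

```python
import numpy as np

def characterize_probability_map(prob_map, other_prob_maps, mean_subject_volume_voxels):
    """Compute the four atlas measures for one artery probability map.

    prob_map : (X, Y, Z) array with values in [0, 1]
    other_prob_maps : list of probability maps of all *other* arteries
    mean_subject_volume_voxels : average single-subject artery volume (voxels)
    """
    occupied = prob_map > 0
    concatenated_volume = int(occupied.sum())                       # measure 1
    avr = concatenated_volume / mean_subject_volume_voxels          # measure 2 (lower = better alignment)
    max_other = np.max(np.stack(other_prob_maps), axis=0) if other_prob_maps else 0.0
    dominating = occupied & (prob_map > max_other)
    dominating_volume_pct = 100.0 * dominating.sum() / max(concatenated_volume, 1)  # measure 3
    max_probability = float(prob_map.max())                         # measure 4
    return concatenated_volume, avr, dominating_volume_pct, max_probability
```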
Leave-One-Out Validation
To evaluate the sensitivity of the UBA167, it had to be applied to a new sample of subjects. This was done using a leave-one-out approach where a target subject was removed from the UBA167 and the modified atlas based on the remaining 166 subjects was used to label the arteries of the target subject, using the previously described automatic identification method (AAIM) (Dunås et al. 2016). This process was repeated for all subjects in the cohort. The labeled images were approved or disapproved according to the following criteria: For ACA distal, all main branches distal to the anterior communicating artery were selected (1-3 branches of the pericallosal artery depending on morphology, A2-A3 level). PCA was cropped at P3 level, distal to pons, to get a uniform length (Osborn 1999). MCA was divided into a proximal (MCA) and a distal (MCA distal) part. The proximal part consists of the M1 segment, pre- and post-bifurcation. The MCA distal includes the full visible length of MCA, or until it reaches the cortex (M2 and M3 segments). The border between the proximal and distal part was set at the genu where the MCA takes a turn in the posterior direction (Osborn 1999). Only the branches that extend posteriorly (M2) and laterally (M3) were included. For M1, branches forming/preceding the main M2 branches, or having the same direction as those doing so, were included. The direction and continuity of the arteries were decided by visual inspection. Since MCA distal consists of several branches, the individual variation at M3 level was too large for it to be useful to construct a separate probability map. (Note that in the vascular segmentation process, gaps sometimes arise in low-flow arteries, here seen in the MCA distal on the left side of the figure.)
Agreement between automatic and manual labeling was described and divided into six categories:
- Correctly identified existing: The artery was present in the manual reference and was localized.
- Correctly identified nonexisting: The artery was not present in the manual reference and was marked as not found by the AAIM.
- Mislabeled existing: The artery was present in the manual reference but at least a part of the labeled volume was inaccurate. A subjective decision in each case was used to determine whether the degree of mislabeling was clinically relevant.
- Mislabeled nonexisting: The artery was not present in the manual reference but was marked as existing by the AAIM.
- Not identified: The artery was not labeled, even though it did exist in the manual reference.
- Too short: The identified segment belonged to the correct artery but did not include the required segment.
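In outline, the leave-one-out scheme is a single loop over subjects; the sketch below uses hypothetical callables build_atlas, label_with_atlas and compare_to_manual as stand-ins for the atlas construction described above, the AAIM of Dunås et al. (2016) and the six-category comparison, none of which are specified in code here.

```python
def leave_one_out_validation(subject_artery_masks, build_atlas, label_with_atlas, compare_to_manual):
    """Leave-one-out evaluation of atlas-based labeling.

    subject_artery_masks : list of per-subject manual labelings (the reference).
    The three callables are placeholders for the pipeline steps described in the text.
    """
    results = []
    for i, target in enumerate(subject_artery_masks):
        training = subject_artery_masks[:i] + subject_artery_masks[i + 1:]
        atlas = build_atlas(training)                 # probability maps from the remaining 166 subjects
        automatic = label_with_atlas(target, atlas)   # AAIM-style automatic labeling
        results.append(compare_to_manual(automatic, target))  # six-category outcome per artery
    return results
```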
Clinical Validation
For the UBA167 to be clinically relevant, it has to be applicable to vascular diseases. As a proof-of-concept of this generalizability, the UBA167 was applied to a clinical sample of ten patients with transient ischemic attacks or lacunar infarcts. The labeling was performed with the AAIM. Again, the labeled images were evaluated using the previously mentioned criteria. In addition, results were compared to a reference obtained from computed tomography (CT) angiograms.
Results
The arterial atlas, UBA167, is presented in Fig. 3, and the specific contribution of applying non-linear image normalization over rigid-body co-registration is seen in Fig. 4. As revealed by the more confined probability maps, the non-linear image normalization generated a higher degree of arterial alignment than the rigid-body transformation.
Prevalence of Arteries
The number of arteries included in UBA167 and their average volumes are presented in Table 1. In total 2360 arteries were manually labeled. ICA, MCA and ACA distal were present in all subjects. BA and PCA were present in over 98 % of the subjects. VA, ACA and MCA distal were present in at least 90 % of subjects, while only 38 % (63/167) had a PCoA on any side, and 28.7 % (48/167) had a fetal PCoA (missing P1).
Arterial Volume Ratio
The AVR of each probability map is presented in Table 1 and reflects how well the normalization method works and to which extent each artery permits such normalization. A perfect normalization would result in an AVR of 1.0, and higher values indicate a less effective normalization. The AVR for the probability maps in UBA167 were significantly lower than for the rigid-body atlas (p < 0.001, Wilcoxon signed-rank test). When looking at the whole atlas, the AVR for the UBA167 was 13.7, and the corresponding value for the rigid-body atlas was 29.3.
In general, the proximal arteries of the anterior circulation (ICA, MCA, ACA and PCoA) had a low AVR (6.4 to 14.8) compared with posterior (PCA, VA and BA, AVR 19.0 to 26.1) and distal arteries (MCA distal and ACA distal , AVR 18.2 to 42.8). The AVR was negatively correlated with the accuracy of the leave-one-out validation (p = 0.035, Spearman correlation, rho = − 0.46, one-tailed probability).
Dominating Volume and Maximum Probability
The dominating volume of each probability map and the maximum probability value are also presented in Table 1. The average dominating volume for UBA167 was 85.8 %, compared with 74.9 % in the rigid-body atlas, and this difference was statistically significant (p < 0.005, Wilcoxon signed-rank test), indicating that the DARTEL normalization did indeed improve the spatial alignment between subjects.
The maximum probability value is also a measurement of spatial distribution. For UBA167, many arteries had voxels where over 80 % of subjects overlapped, ICA even had 100 % overlap in some voxels. For the rigid-body atlas, no more than 53 % of the subjects overlapped in any single voxel.
Correctly Identifying Cerebral Arteries
The result from the leave-one-out validation can be found in Table 2. The average labeling accuracy for the leave-one-out validation was 96 %. Lower values were observed for VA, PCoA and left MCA distal . The specificity could not be calculated for all arteries because the number of true negatives was zero for most arteries.
The average labeling accuracy in the clinical sample was 92.5 % (Table 3). In two patients, the identified right PCA was too short to fulfill the evaluation criteria. This was also the case for the right VA in two patients. One ACA distal and one MCA distal on each side were not identified. The existing PCoAs were not identified in the clinical sample. In the CT angiography that serves as the reference in this evaluation, PCoA was found bilaterally in one subject and unilaterally in three subjects.
Discussion
The probabilistic atlas Umeå Brain Arteries (UBA167), which contains 16 major brain arteries, was constructed based on manually labeled 4D flow MRI angiography from 167 subjects. This is the first project describing the artery-specific 3D spatial distribution within the human brain. UBA167 can be used to automatically label and segment the major cerebral arteries for future automated blood flow quantification. Similar to successful brain tissue atlases (Diedrichsen et al. 2009;Tzourio-Mazoyer et al. 2002;van Baarsen et al. 2016), UBA167 is provided in MNI space. We believe that multiple additional neuroimaging applications can be developed around this atlas. For example, the atlas can be used to extract geometrical measures such as size and tortuosity of brain arteries. In addition, the atlas can be used to detect and correct for signals in large arteries that can degrade the physiological interpretation of MRI data (e.g. arterial spin labeling, functional MRI, diffusion imaging). For example, in functional MRI, removal of vascular contamination is of known importance (Kiviniemi et al. 2003).
Achieving sufficient spatial inter-individual alignment of brain arteries cannot be taken for granted, as demonstrated by the fact that DARTEL normalization outperformed rigid-body transformation. This finding is analogous to results obtained for brain structural normalization (Klein et al. 2009). In the current paper we hypothesized that normalization prior to atlas labeling would provide a feasible pipeline for automated labeling. Our results supported this hypothesis by showing that all major arteries were labeled with high accuracy (>95 %), indicating that the achieved inter-individual spatial variability permitted efficient atlas construction.
UBA167 was constructed from a population-based sample (Nevalainen et al. 2015), and can thus be expected to represent a wide range of vascular morphology. This may have contributed to our finding of a high accuracy when using UBA167 on the ten stroke patients. An atlas based on a more selected sample (for instance only using subjects without hypertension, hyperlipidemia and obesity) could have resulted in less generalizability as such cardiovascular risk factors may affect the anatomy and tortuosity of cerebral vessels (Bullitt et al. 2009;Hiroki et al. 2002).
The atlas is developed from data on individuals from a limited age span. We still expect a high degree of generalizability of the atlas, assuming that future studies normalize data according to the present study. However, future studies using the atlas have to verify its functionality for that particular sample.
Fig. 3 Visualization of the probabilistic (a-c) and artery-specific (d-f) properties of the UBA167 shown in axial, coronal and sagittal view. The probability values (a-c) are visualized with a heat map, min = 0, mid = 0.1 and max = 1.0. In (d-f), each voxel is labeled as the artery with the highest probability.
Fig. 4 Visual comparison of the two atlases and the volumes of the probability maps. A maximum value projection of a) UBA167 and b) the rigid-body atlas. Each probability map is presented in a separate color.
The main innovation of the UBA167 is the combination of the artery-specific probabilistic properties with the fully automatic labeling of the major cerebral arteries. The UBA167 is thus prepared for automatic blood flow quantification in 4D flow MRI data. Pioneering work on characterization of the cerebral arterial system has been based on small sample sizes (Nowinski et al. 2009a, 2011, 2013) or has not included specific probabilities for individual arterial segments (Forkert et al. 2013;Mut et al. 2014;Wright et al. 2013). Knowledge of vascular morphology has also been used for semiautomated segmentation and labeling (Bogunović et al. 2012;Ghanavati et al. 2014;Passat et al. 2006).
Our finding of a maximum probability value of 1.0 in the bilateral ICA means that there exist one or more voxels in each ICA where all 167 subjects overlapped. These voxels provide the compelling option to explore seeding points, combined with region-growing segmentation schemes, since the probability that those voxels will be within the ICA for a new subject is very high (Passat et al. 2005). UBA167 as a whole can also be used to specify the region of interest or provide spatial information for arterial segmentation to reduce computational time (Passat et al. 2006).
UBA167 enabled a high labeling accuracy (Tables 2 and 3). Although the input data for the labeling validation was from 4D flow MRI, a comparable imaging technique can be expected to produce equally accurate results when combined with the AAIM.
We used a velocity encoding of 110 cm/s for the 4D flow sequence. This value was selected to avoid aliasing in large arteries. This option is not optimal to visualize and quantify the slow-velocity blood flow of the posterior communicating cerebral arteries (Dunås et al. 2016). Indeed, this was clearly evident when the CT angiogram images from the patient group were reviewed. Here CT angiography detected additional PCoA arteries that were not visible on the thresholded AI. This is not a strict limitation of UBA167 and labeling procedure per se, but rather a manifestation of differences in the underlying measurement techniques. However, with respect to the development of an automatic flow assessment of 4D flow MRI data, the high labeling accuracy for arteries that had detectable velocities was very promising. Future developments could improve the labeling accuracy of VA segments by improving the labeling criteria. The evaluation conditions for the labeling were set to ensure that segments needed for future blood flow quantification were correctly identified, therefore some existing arteries could be marked as too short.
The UBA167 included the major cerebral arteries; therefore it does not allow investigation of more distal arteries. Such distal arteries could also be added, but the increasing inter-individual variations in anatomy at that depth in the arterial tree would potentially limit the usefulness of such expansions. This effect can be seen in the MCA distal in Fig. 3, appearing with lower spatial alignment between subjects compared with more proximal arteries. Since the P1 and P2 segments were defined as the same artery, and the anterior communicating artery was not included, UBA167 cannot differentiate between some of the typical morphological variants. The standard variations that can be automatically identified are missing A1 or PCoA (Krabbe-Hartkamp et al. 1998).
Table 1 For each probability map, the number of included arteries and their average volumes are presented, as well as the concatenated volume, the ratio between concatenated volume and average arterial volume, the percentage of the concatenated volume where no other probability map had a higher value, and the maximum value of each probability map.
Conclusion
UBA167 is an artery-specific probabilistic atlas based on 16 manually-labeled major cerebral arteries from 167 subjects. The UBA167 enables a high accuracy in automatic arterial labeling in both population-based subjects and in ischemic patients. Comparison to rigid-body alignment showed a large improvement in spatial alignment for non-linear normalization. Taken together, this study provides compelling first evidence for the usefulness of a probabilistic stereotactic atlas of the major cerebral arteries.
Information Sharing Statement
The atlas (RRID:SCR_016319) will be made publicly available at time of publication at http://www.nitrc.org/projects/brainarteries
Compliance with Ethical Standards
Conflicts of Interest The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
The local ethical review board approved the COBRA study. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards.
Oral and written information about the study was given and written consent was obtained from all subjects.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http:// creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 2023-01-06T15:38:52.818Z | 2016-11-21T00:00:00.000 | {
"year": 2016,
"sha1": "ee50a79d1fafcd0996c9553e78c490b76acc504e",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12021-016-9320-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "ee50a79d1fafcd0996c9553e78c490b76acc504e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
55701330 | pes2o/s2orc | v3-fos-license | INDEBTEDNESS OF FINANCIALLY VULNERABLE HOUSEHOLDS: WHAT DO SOCIAL WORKERS DO?
Household debt in South Africa has grown significantly relative to income over the past twenty years under the guise of financial liberalisation (FinMark Trust, 2007). Currently households spend approximately 60c to 70c of every rand of their income on repayment of debt (Gous, 2008; Van Rooyen, 2008a). During the past decade the disposable income, financial assets and net prosperity of households have therefore not accrued to the same extent as their debt obligations. For this reason households’ savings are urgently needed to contribute to a lessening of the country’s current account deficit in order to sustain economic growth and job creation (Van Tonder, 2008). On the macro level, economic growth and job creation form part of the ideal underpinning the social development philosophy in South Africa. This ideal is to combine social welfare assistance with developmental strategies, thereby promoting both economic and social development in order to strengthen people’s capacity to enhance their social and economic inclusion and alleviate poverty (Patel, 2005:118). Social work, as a profession within the social development paradigm, is primarily focused on the poorest of the poor households (Department of Social Development, 2006). In the context of a relative poverty line, set in relation to changing standards of living (Statistics South Africa, 2007), a poor household is to be understood when the household’s condition of poverty endures over a period of time, when the household has an inability or lack of opportunity to improve its circumstances over time, or to sustain itself through difficult times (Aliber, 2001:2). Research (Collins, 2007; Rand, 2004) has shown that poor households have the highest debt to income ratio. For this reason social work intervention focusing on household debt is also essential on the micro level within the social development paradigm in order to reduce households’ financial vulnerability (Engelbrecht, 2008a).
INTRODUCTION
Research into the debt levels of social work service users and specific social work intervention in this regard, however, is non-existent in South Africa. To shed more light on this issue, this article reports on research grappling with the question of what social workers, working within a social development paradigm, do to assist clients who are caught in a debt trap.
BACKGROUND TO THE RESEARCH
As background to and point of departure for the research question set out above, the context of the indebtedness of poor households in South Africa will be explored.
Impact of the South African economic situation on the debt of the poor
In South Africa high inflation, weakening of the rand and increased interest rates, implying that expenditure exceeds production, arise partly from the global economic meltdown, but also from government initiatives to appropriate ever more of the state revenue for social assistance to the poor rather than improving infrastructure and production skills (Buys, 2008).For example, over the past few years lower-income groups have received strong support through tax incentives and social grants.About 26% of South Africans already receive social welfare grants, among the highest percentages of grants in the world (Van Tonder, 2008).As a result the disposable income of welfare recipients has risen considerably, which also means that they simply become more creditworthy to incur more debt (Gous, 2008).A 1% increase in social welfare grants could, for example, cause a rise of 0,6% in household expenditure (Van Tonder, 2008).Consumers earning less than R3 500 per month, most of whom are welfare recipients, represent 23,5% of active consumers of credit in South Africa (Watson, 2008).More than half of these households are already in arrears with debt repayments.In view of South Africa's current economic instability, it is therefore of the utmost importance that debt be managed effectively, not only on the macro level, but also on the micro level (Cillie, 2008).
The role of retail banks in debt of the poor
It seems, however, that banks are relatively unsuccessful in contributing to effective money management by people in South Africa (Schoombee, 2000).Half of the South African population, for example, do not make use of banking services (Harris, 2008).The use of banking services by very low-income households is estimated at only 22% (Porteous, 2003).Research (FinMark Trust, 2007) also indicates that poor households mostly do not use their savings accounts, and withdraw wages or grants in total, as they are deposited (Van Zyl, 2008a).These statistics on the "un-banked" or the "under-banked" (Sandlant, Harris & Barker, 2005) confirm research findings on the direct correlation between financial illiteracy and poverty in terms of financial exclusion (Bell & Lerman, 2005;Jacob, Hudson & Bush, 2000;Sandlant et al., 2005).
Several reasons explain the under-utilisation of banks in South Africa.During apartheid, for example, banking services were not focused on black South Africans (Foyer, 2000), resulting in a distrust and lack of knowledge of banks on the part of many poor people.A South African Reserve Bank and National Treasury Report, furthermore, points out that South African bank charges are among the highest in the world, which contributes to the unwillingness of the poor to make use of banking services (Du Preez & Clayton, 2004).
The government is currently applying pressure on banks to enter the poor segment of the South African population. Most banks are, however, still unable or hesitant to offer extensive services to the poor on account of the risks involved, transaction size and operational costs. This situation, of course, is not unique to South Africa, because generally speaking, unless there is profit to be made from the poor, they will not be served (Schoombee, 2000). The focus of banking initiatives is also largely on loans and not on savings (Fair Share, 2001). This scenario led Tito Mboweni, the Reserve Bank governor, to express his concern about the spiralling household debt of South Africans (Loxton, 2004). In response the National Credit Act No. 34 of 2005 (RSA, 2005) and National Credit Regulations (RSA, 2006) were promulgated.
The National Credit Act (Act No. 34 of 2005) and National Credit Regulations (2006) as the government's response to household debt
The Department of Trade and Industry in South Africa is responsible for overseeing the credit market to promote a stable, efficient and competitive credit market in which consumers' rights are adequately protected, and in which access to finance (i.e.credit) is improved, particularly for development purposes (RSA, 2004).With the introduction of the National Credit Act (RSA, 2005) and National Credit Regulations (RSA, 2006), the South African government is thus moving aggressively against predatory lending, consumer abuses and outdated and ineffective legislation on consumer credit.Suppliers of credit will be subsidising part of debt counselling.Debt counsellors will be acting as mediators between creditors and over-indebted consumers (Van Zyl, 2008b).Over-indebtedness is a key concept in the National Credit Act (RSA, 2005), as well as an international judicial concept which ultimately determines whether a consumer is creditworthy (Goodwin-Groen & Kelly-Louw, 2006).The extent to which households will be able to make use of debt counsellors will, however, in future be determined by the context of indebtedness of the poor.
Contextualisation of over-indebtedness of poor households
Over-indebtedness is defined by the National Credit Act (RSA, 2005: section 79), in the light of the preponderance of information available at the time a determination is made, as the inability of the consumer to satisfy in a timely manner all the obligations under all the credit agreements to which the consumer is a party. Over-indebtedness thus implies using loans to pay off other loans, and allocating more than fifty percent of monthly net income to loan repayments (Mashigo, 2006). According to this definition, over-indebtedness of South Africans has increased since 1995, especially amongst the poorest households, who are the most vulnerable to becoming permanently indebted (Micro-Finance Regulatory Council, 2003). This context suggests, moreover, that welfare grant recipients have the highest debt service to income ratio of all the livelihood profiles, because their financial management consists of making money last until the next grant payment (Collins, 2007; Rand, 2004) and they are borrowing against their state grants (Nagdee, 2004). Grant recipients form part of the poorest 40% of households in South Africa, earning less than 7% of the country's total household income (Ueckermann, 2008).
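As a minimal numerical illustration of this operationalisation (monthly loan repayments exceeding half of net income), the sketch below flags a hypothetical household; the figures, field names and the simple 50 % cut-off are illustrative assumptions, not a legal test under the Act.

```python
def debt_service_ratio(monthly_loan_repayments, monthly_net_income):
    """Share of monthly net income committed to loan repayments."""
    return monthly_loan_repayments / monthly_net_income

def is_over_indebted(monthly_loan_repayments, monthly_net_income, cutoff=0.5):
    """Apply the >50 % rule-of-thumb described above (Mashigo, 2006)."""
    return debt_service_ratio(monthly_loan_repayments, monthly_net_income) > cutoff

# Illustrative household: R1 900 of repayments on a R3 000 net income gives a
# ratio of about 0.63, i.e. over-indebted by this rule.
print(is_over_indebted(1900, 3000))  # True
```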
Poverty in South Africa must, however, be viewed within a wider context than solely financial income and cannot be examined realistically without a consideration of household assets.For example, the Development Indicators of 2008 (The Presidency Republic of South Africa, 2008) reveal that 82,2% of households in South Africa occupy formal dwellings or traditional structures.In addition, a total of 90% of the population have use of cell phones (Eybers, 2008).These examples imply features of money in circulation, but as a result of its informal nature, it is contentious to quantify and reflect statistically the money in circulation.The debt behaviour of poor households should therefore be examined.
Debt behaviour of poor households
The fact that most South Africans are trapped in a culture of debt simply means that a pattern of debt behaviour is sustained (Cillie, 2008). A simplistic cause and effect analysis of the reasons for households' debt, however, would be unscholarly. A combination or non-hierarchical interplay of a complex set of interwoven hypothetical determinants giving rise to over-indebtedness seems to be more to the point for the majority of South African households. These determinants could be integrated, synthesised and explained on the basis of expositions by Nash, Jenkinson, Vayro and Sandlant (2005) and INSOL International (2001), in terms of a metaphorical "Bermuda debt triangle" absorbing household incomes. One determinant, or angle, of the triangle consists of survival debts which are unavoidable and out of control, arising from the lack of income and basic commodities for survival. The second determinant of the triangle deals with cognitive debt, which can largely be ascribed to a lack of knowledge, or financial illiteracy. The third determinant can be typified as affective or emotional debt, arising from, for example, individual disposition or peer pressure. The following figure is a graphic elucidation of the Bermuda debt triangle.
[Figure: The Bermuda debt triangle, with survival, cognitive and affective debt as its three angles (Nash, Jenkinson, Vayro & Sandlant, 2005; INSOL International, 2001)]
Unemployment and abject poverty in general are undoubtedly the greatest cause of survival debt, as postulated by such researchers as Schoombee (2000) and Mashigo (2006). What is questionable, however, is the ability of research respondents who are in debt themselves to report on their cognitive and affective debt. The Bermuda debt triangle is thus operationalised when researchers report that "Everybody in the focus group bought on credit at their local 'spaza', even though they paid double the price and/or interest on goods purchased" (Nagdee, 2004:76). The fact that everybody buys at the "spaza" (informal shop) could indicate affective debt as a result of peer pressure. Paying double the price and/or interest could indicate cognitive debt resulting from financial illiteracy. Shopping at the "spaza" could well mean that it is the only place where respondents qualify informally for credit buying of basic foodstuffs for survival - thus considered as survival debt. To place these determinants of the Bermuda debt triangle into a specific hierarchy would therefore be plain speculation.
One aspect which cannot be ignored, and which further complicates the Bermuda debt triangle in South Africa, is that, according to Schoombee (2000), the excitement and jubilation of the newly experienced freedom following the first democratic election in South Africa in 1994 created the expectation in many poor people's minds that luxury consumer goods previously thought to be within reach of only the oppressors could also be acquired by poor households.This euphoria was further stimulated by the political transition and economic transformation in South Africa which resulted in an increase in the ability of a larger proportion of households to borrow (Hurwitz & Luiz, 2007).Especially deceptive advertisements like "We can loan you enough money to get you completely out of debt" (Goodwin-Groen & Kelly-Louw, 2006:47) contributed to the vulnerability of the poor.As a consequence credit-active consumers in South Africa currently hold 3,02% accounts per consumer, while 37,7% of consumers with credit accounts already have a bad record on one or more of their accounts (Gous, 2008).
It is clear therefore that more accessible loans, especially as part of initiatives to start up small businesses, are rather used for consumption purposes.Poor households are currently spending, for example, between 50% and 80% of their income on food (Muller, 2008).Lower-income groups experience greater pressure from inflation than do higher-income groups, as food which has risen sharply in price recently, accounts for a much greater part of their budget (Van Rooyen, 2008a).This explains why most of the loans to poor households in South Africa are spent on basic necessities (Black Sash, 2000).Determining what poor people consider to be basic necessities, however, is context bound.A bank for low-income groups claims, for example, that 55% of their loan clients possess a credit card of a well-known retail group in South Africa and 46% of their clients have accounts at furniture stores (Duvenhage, 2008a).
The debt behaviour of the poor and especially of welfare grant recipients is further characterised by informal borrowing from relatives, neighbours, friends, credit at the local store and credit from informal sellers of goods (Collins, 2007).Whenever there is a need for money or basic necessities, most people in these communities turn to those closest for help, who expect them to return the favour some time in the future (Mashigo, 2006).The result is that people sometimes keep one another in the poverty trap, as the focus is on borrowing and not saving (Roy Morgan Research, 2003).Giving to others could thus be defined as loaning, as an expectation is created of "I give today, but tomorrow I must receive".
However, the biggest proportion of low-income borrowers in South Africa makes use of the services of micro-lenders, as their income is too low to meet basic expenses for survival (Schoombee, 2000).Micro-finance is generally considered and promoted as part of poverty alleviation and as a job-creation strategy (Gelinas, 1998).For this reason the South African government has since the late 1990s cooperated actively to extend the micro-finance sector.This had the unintended consequence of the dependence of low-income households on formal micro-lenders and informal local money lenders for the provision of credit.The desired outcome of micro-finance for emerging business and job creation in South Africa was thus not achieved (Mashigo, 2006).The debt collection methods of some money-lenders are furthermore unorthodox and dubious, and are now adding to the vulnerability of low-income groups.Especially welfare grant recipients could be easy prey, as well as low-income workers with a regular source of income.The Micro-Finance Regulatory Council (1999) attempted to regulate and protect consumers, but it seems that this Council has so far been unable to educate consumers as intended.
Savings by poor households
It is evident from the preceding exposition that the culture of saving in South Africa is weak and is not being encouraged as rigorously as credit is sought.Change could only be effected through legislation, putting pressure on people to save (Mittner, 2008).South Africans are currently saving at a rate of only 14% and, according to Trevor Manuel, a former Minister of Finance, people will have to save more for the economic growth rate of the past few years to be continued.In this regard the South African Treasury is working on an extended campaign to motivate South Africans to save more (Van Rooyen, 2008b).Savings initiatives are considered an empowerment tool for the poor, but should be appropriate to the context of a specific income group (Collins, 2007) as illustrated by the Individual Development Accounts (IDA) in the United States (Sherraden & Sherraden, 2004).IDAs are special savings accounts designed to assist low-income people on their path toward asset ownership through matched savings and financial education.
However, willingness to save has been positively correlated with education levels and more specifically financial literacy, which should be based on general theories of household savings behaviour (FinMark Trust, 2007).Examples of these theories of household savings behaviour are the following: neoclassical theories which assume that individuals are rational and forwardlooking and that savings reflect inter-temporal utility-maximising behaviour; psychological and sociological theories which emphasise inter alia the impact of motivation and aspiration; behavioural theories, which focus on behavioural incentives and constraints; and institutional theories, which highlight the role of institutional processes and mechanisms in a household's savings behaviour (FinMark Trust, 2007).These theories, however, all suggest that the poor can save.
In terms of the question as to whether the poor can save, Trevor Manuel (Manuel, 2004:6) declares: "(W)e have seen a proliferation of micro-loans to this segment.Repaying debt implies potential ability to save", a viewpoint echoed by several influential commentators (FinMark Trust, 2007;Human Sciences Research Council, 2002;Rand, 2004;Rutherford, 1999).Statistics South Africa also points out, for example, that the lower income groups can save up to 20% of their income (Van Rooyen, 2008b).It is therefore a myth that the lower-income groups cannot and will not save.Financial institutions thus need to design relevant and appropriate products for lower-income groups, which would enable them to save.A concrete example of savings by the poor is the use of a rotating credit scheme or informal savings organisation in the African community, which is known as a "stokvel" (Mashigo, 2006).
Correlation between financial and human well-being
Unfortunately, for the majority of people financial hardship is an overwhelming experience of feeling out of control, which could be extremely stressful. Feelings of helplessness, isolation, despair and depression are thus common and tend to lead to dysfunctional intra- and interpersonal relationships (Foyer, 2000; Nash et al., 2005). In this sense financial well-being, as described by Braunstein and Welch (2002) and Mavrinac and Ping (2004), is equivalent to human well-being, as both concepts constitute "a state of successful performance throughout the life course … that results in productive activities" (Pollard & Davidson, 2001:10). Financial literacy education should thus be facilitated as a poverty alleviation tool with the primary mission of promoting social change in order to enhance the well-being of financially vulnerable people (Engelbrecht, 2008b).
Financial literacy education as a poverty-alleviation tool
Goodwin-Groen and Kelly-Louw (2006) point out that debt counselling, as prescribed by the National Credit Act (RSA, 2005), is important, but preventing over-indebtedness is equally important.Therefore Goodwin-Groen and Kelly-Louw (2006) recommend public education campaigns to promote saving and financial literacy education as a way to prevent overindebtedness.The extent of people's indebtedness could be an indication of their financial literacy.However, financial literacy education to low-income communities is currently inadequate in spite of the obviously low level of financial literacy of poor households in South Africa (Piprek, Dlamini & Coetzee, 2004).In this connection Engelbrecht (2008a;2008b) comprehensively elucidates financial literacy education as a poverty alleviation tool within a developmental social work paradigm.
RESEARCH METHODOLOGY
The aim of this research was to gain an understanding of what social workers do regarding the indebtedness of their clients.To this end a research process consisting of the literature study and an empirical study was followed.This process requires an interchange between inductive and deductive logic and reasoning (Siepert, McMurty & McClelland, 2005:232) in order to identify related aspects and ensure a coherent exposition.The financial business rubrics of relevant South African newspapers were particularly consulted to reflect cutting-edge data and expert views on the issues at hand.The goal of the research was further realised by a combined qualitative and quantitative research approach (Fouché & Delport, 2005:79) and by employing a cross-sectional exploratory and descriptive design (Grinnell, Unrau & Williams, 2005:16-18).Non-probability sampling (Schutt, 2005:164, 166) was utilised, because the idea was merely to obtain as much data on the research question as possible to develop an understanding.A purposive sample targeted a universe of 85 front-line social workers who are all attached to an NGO operating in three provinces in South Africa (Western Cape, Eastern Cape and Northern Cape), doing generalist social work according to an integrated service delivery model (Department of Social Development, 2006) within South Africa's social development paradigm.A structured self-administered questionnaire with open and close-ended questions to generate both qualitative and quantitative data was utilised as a data-collection method.Creswell's (1998) and Marshall and Rossman's (1999) processes for the analysis, interpretation and presentation of qualitative data were employed as guidelines to structure and to reflect the responses of social workers into meaningful themes (Delport & Fouché 2005:354).
FINDINGS
The response rate from the social workers was relatively high, as 74 (87%) of 85 questionnaires were completed and returned.
Biographical details of respondents
An analysis of the service area of the respondents indicates that 38 (51%) of them work in urban and urban-rural areas, while 36 (49%) work only in a rural area. The respondents from these service areas therefore present a balance of both urban and rural conditions, although this research study was not aimed at an analysis of discrepancies in service areas. The respondents deliver social welfare services to 10 951 households, which implies an average caseload of 148 households per social work respondent. The respondents' reflections on the total number of households, financially vulnerable households and over-indebted households in their caseloads are shown in the next figure.
The main reason supplied by respondents for the financial vulnerability of households in their caseloads is unemployment, largely as a result of a scarcity of job opportunities, but also because of lack of skills. This concurs with Buys's (2008) view on the deficient skills levels of the people in South Africa. According to the respondents, a lack of skills contributes to jobs being only temporary or seasonal. This results in a considerable number of households being without any income for parts of the year. In addition, poor financial planning and misapplication of funds are considered a leading cause of financial vulnerability, especially in the absence of a culture of saving, as also found by Cillie (2008). Eybers (2008) points out that the misapplication of funds implies that poverty in South Africa should also be viewed in terms of households' assets, and that there is in fact money in circulation, albeit difficult to quantify because of the informal nature of such circulation.
Social problems such as substance abuse and health issues such as HIV and Aids further contribute to financial vulnerability.Other determinants pointed out by respondents include single-parent families, too large families, unplanned pregnancies and elderly people dependent on state pensions.This profile of financial vulnerability corresponds with local and international research (Nash et al., 2005;Piprek, et al., 2004;Rand, 2004;Sandlant et al., 2005) indicating that the determinants of financial vulnerability are universal.
Over-indebtedness of households
According to the respondents, altogether 6 441 (59%) of households in their caseloads are overindebted.This agrees with findings by Watson (2008) that more than half of the poor households in South Africa are in arrears with debt repayments.Respondents indicate that they are especially conscious of their clients' over-indebtedness when they deal with statutory assessments, for example, foster care applications, children's court procedures or rehabilitation reports, when an exposition of households' finances is a statutory requirement.The assessment of households' financial positions is traditionally an essential part of social work intervention, as described in the first contributions of social work pioneers such as Mary Richmond (1917:380).
According to respondents, households mostly require direct material assistance with their overindebtedness, as illustrated by the following remarks: "The clients will inform the social worker about their debt…even show the account slips.""Most families are honest about debt, as they want assistance from social workers with, for example, a food parcel, school uniforms for children, help with housing, etc."It seems therefore that survival debt is the main motivation for households to accept social work intervention.Cognitive and affective debt is not easily made known, as illustrated by the fact that only an insignificant number of respondents indicate that households' indebtedness in this respect is reported by friends, family members or community members.
Debt behaviour of vulnerable households
The debt behaviour of vulnerable households is in correlation with the determinants of the "Bermuda debt triangle" (Nash et al., 2005;INSOL International, 2001).An example of a comment on survival debt is the following:
"My clients even have to borrow the basic necessities."
Respondents also perceive that clients are uninformed and lack financial planning skills, which relates to cognitive debt:
"My clients manage to obtain accounts at various stores which encourage them to spend, but do not explain the fine print of how a credit account works." "In this area, borrowing is part of the social culture … everybody borrows and is in a debt trap … the perception is that saving is done by people with lots of money, who have everything their hearts desire."
The desire to own luxuries, as illustrated in advertisement campaigns and described by Goodwin-Groen and Kelly-Louw (2006: 47), is given by respondents as another reason for their clients' over-indebtedness, and corresponds with affective debt: "Clients also want to spoil themselves sometimes…" The nature of the above-mentioned debt behaviour means, according to the respondents, that households in their caseloads are so caught up in a debt trap that the interwovenness of the various creditors is difficult to untangle.Respondents, echoing findings by Cillie (2008), Collins (2007), Gous (2008) and Mashigo (2006), indicated that most households have at least three creditors, borrow to repay other loans, secure informal loans which do not have to be given account of, and reside in a culture where borrowing and indebtedness are the norm.
To whom are the households indebted?
Most respondents, namely 49 (66%), reported that households in their caseloads owe the most money to furniture stores, mostly for luxury or brand name articles and electrical appliances.This differs from other research findings (Black Sash, 2000) that poor people in South Africa are most indebted just trying to buy basic necessities, but correponds with the postulation by Duvenhage (2008b) that a considerable percentage of low-income groups also hold accounts for furniture and clothes.
The findings of Gelinas (1998) that a large number of low-income borrowers owe money to micro-lenders are also supported by respondents.Micro-loans are used to obtain money to pay other debts.Considerable debt is also incurred at informal shops, as they are easily accessible and no creditworthiness needs to be proved.Friends, acquaintances and employers are further sources of credit.Other institutions owed considerable amounts of money are municipalities, schools, banks, landlords and second-hand shops, correlating with research by Collins (2007), FinMark Trust (2007) and Mashigo (2006).It seems therefore that sources where debt is incurred are determined by the environment, accessibility and availability.
The impact of households' over-indebtedness on their social functioning
According to respondents, the stress of households' over-indebtedness, contributes greatly to family violence, substance abuse and child abuse.A lack of food, malnourishment and undernourishment are obvious consequences.Other social problems such as theft, prostitution and begging could also be directly attributed to over-indebtedness, as described by Nagdee (2004) and Nash et al. (2005), thus confirming the correlation between financial and human wellbeing, as expounded by Braunstein and Welch (2002), Mavrinac and Ping (2004), and Pollard and Davidson (2001:10).
Out of the 6 441 households identified by respondents as over-indebted, 1 674 (26%) were referred by the respondents to other sources.Most respondents refer their over-indebted clients to the Department of Social Development to apply for welfare grants, if they are not already recipients of applicable grants.A few respondents refer clients to legal aid clinics to obtain legal aid, but this service is not freely available.Referrals to financial institutions are done only in exceptional circumstances, as indicated by only two respondents.Not one respondent mentioned referral for debt counselling, as prescribed by the National Credit Act (RSA, 2005).
A collation of the total responses of the social workers shows that 2 641 (41%) over-indebted households (n=6 441) are handled by them.These responses indicate that most social workers try to teach these clients some or other form of budgeting skills, and then mainly on an individual basis.Few or no other sources are utilised or facilitated.Furthermore, merely temporary material assistance in the form of food parcels or soup kitchens is offered.The repertoire of the social workers' intervention activities in the matter of clients' overindebtedness seems to be confined to merely teaching budgeting skills and does not focus on the whole spectrum of financial literacy education which could be appropriate for the households concerned, as suggested by a host of international and local institutions and authors (Braunstein & Welch, 2002;FinMark Trust, 2007;Mavrinac & Ping, 2004;Nash et al., 2005;Piprek, et al. 2004;Rand, 2004;Sandlant et al., 2005).
The respondents reported that no services in the matter of indebtedness are delivered to 2 126 (33%) of the households identified by them as over-indebted (n=6 441).These households which receive no social work intervention pertaining to their indebtedness, and those that are referred, therefore constitute more than half of the total number of over-indebted households involved in this research, namely 3 800 (59%).The main reason given by respondents for this situation is that the identified over-indebted households show resistance to social work intervention, other than direct material assistance: "These clients expect only material assistance and do not accept their own responsibility.""As everybody in the community is in debt, clients do not consider their overindebtedness to be exceptional -it is part of life."This resistance of households also requires scholarly analysis within the context of the various theories of household debt behaviour (FinMark Trust, 2007) and should be integrated with theories of human behaviour peculiar to social work in order to effect social change (Hare 2004:414).Social change is similarly reflected in the rationale of financial literacy education, which is to empower people to exercise informed financial choices (Mavrinac & Ping, 2004).
Respondents furthermore experience a total lack of resources in the matter of intervention regarding their clients' indebtedness and mention other priorities in their caseloads: "I do not have enough time to help people in debt."It seems that the respondents' lack of time could be blamed largely on their huge caseloads, but the services of volunteers who could make a great contribution to intervention are, according to the responses, not optimally utilised.Respondents also do not refer to their role as facilitators (Engelbrecht, 2008b) in intervention with over-indebted households, and they seem to assume that their involvement with the over-indebted households should only be on the level of direct service delivery and specifically in terms of case work intervention.The conclusion by Engelbrecht (2008c: 172) that "it seems that social workers in general are so caught up with trying to deal with clients' presenting problems that there has been little conscious effort to develop their economic literacy, except as a by-product of 'normal' casework intervention" is thus also confirmed by this study.Some respondents justify their lack of involvement in intervention to over-indebted households by arguing that this type of service delivery does not pertain to a social worker: "I was not trained to give financial advice.""I don't feel it should be the responsibility of the social worker to sort out the financial issues of clients."The first quotation above supports research by Masemola and Taute (2006) that social workers themselves could benefit from honing their financial management skills and that such skills should be included in the training of social work students.The second quotation could imply that social workers still do not have clarity on their role and place in the South African social development paradigm and that the underlying social development theory of "marrying" (Gray, 2006: 59) social and economic development goals does not figure fully in the respondents' social work practice.
However, upon invitation to respondents to make recommendations on social work intervention in respect of their clients' debt, 62 (84%) of them suggested that financial management skills should be facilitated and/or taught by them to clients.This certainly indicates that social workers feel a need to offer financial literacy education to financially vulnerable households.
CONCLUSIONS AND RECOMMENDATIONS
The findings of this research study clearly indicate that economic growth in South Africa cannot be handled on the macro-level only, but should also be managed by means of social welfare assistance on a micro-level.It is therefore imperative that the financial vulnerability of households is addressed by social workers, through social service delivery within the social development paradigm.Currently social workers do focus on the poorest households in South Africa, in accordance with the Integrated Service Delivery Model of the Department of Social Development ( 2006), but are uncertain about their role in addressing the indebtedness of these households, who present the highest debt levels in the country.Social workers are still caught up in their clients' individual psycho-social problems to the extent that sustained services directed at reducing households' financial vulnerability are neglected.Financial literacy education as a tool for poverty alleviation is thus largely not employed, even though the connection between financial education and poverty alleviation has been scientifically confirmed (Bell & Lerman, 2005;Jacob et al., 2000;Sandlant et al., 2005).This fact should be viewed against the finding of this research study that more than half the households making up the caseloads of participating social workers are, according to them, financially over-indebted, but fewer than half of the social work respondents are doing anything about it.Social welfare organisations therefore ought to clarify in a concrete manner the role of the social workers whom they employ with respect to their clients' indebtedness.
One of the main reasons for the lack of financial education provided by social workers to vulnerable households seems to relate to their own uncertainty about the nature and scope of this financial education.In addition, social workers are trying, mostly on an individual level and in spite of huge unmanageable caseloads, which often entail dealing with statutory crises, to assist over-indebted households with their budgets.Available resources and services by financial institutions (for example, financial literacy programmes), statutory bodies (for example, debt counsellors of the National Credit Regulator) and volunteers in communities are not utilised optimally and facilitation is minimal.This suggests that social workers' and students' notions of financial literacy education ought to be stimulated, so that financial education as part of intervention to vulnerable households could be integrated by them with informed confidence.Methods of financial education should be extended to employ not only casework as a social work method, but should also integrate group and community work.Social workers should become more aware of which resources in the community could contribute to the financial education of households and they should develop ways in which to facilitate them.
Resistance by households to accept intervention in respect of their indebtedness, other than direct material assistance, is also handled with resignation, probably as a result of social workers' ignorance of theories of debt behaviour and on how to integrate these with social work intervention theories.Indebtedness of clients is furthermore largely assessed as survival debt by social workers, which is indirectly dependent on macro-economic variables.The nonhierarchical interplay of cognitive and affective debt, which can be managed on the micro-level through social work service delivery based on social work theories for changed behaviour is thus not realised and/or considered.The myth that lower-income groups cannot or will not save also plays a part in social workers' lack of involvement in debt reduction intervention, because the culture of debt, debt behaviour, and the interwoven nature thereof, disguise the fact that there might be substantial amounts of money in circulation in many financially vulnerable households.This statement is complicated by the context-bound consideration of what is essential for survival by financially vulnerable households.Theories pertaining to the debt behaviour of financially vulnerable households should therefore be developed further within a social work perspective, to be integrated meaningfully with social work theories leading to social change.The dynamics of survival, cognitive and affective debt, as well as the capacity of financially vulnerable households to save, ought also to be explored and accessed multidimensionally within specific contexts to serve as basis for social work intervention.This research confirms the correlation between financial and human well-being, as well as the role of social workers practising within a social development paradigm on the micro-level towards integrated service delivery, which includes both economic and social development.Therefore, social workers' participation in interdisciplinary education campaigns to promote savings and financial literacy education is of cardinal importance to the profession in order to respond to indebtedness as a phenomenon in communities.This recommendation is based on the finding that social work respondents in this research study expressed the need to become involved in the financial education of vulnerable households in their caseloads.Ultimately this conclusion could act as mandate for social work intervention in respect of financially vulnerable households' over-indebtedness.
The majority of social work clients in South Africa are un-banked and/or under-banked, and are at present insufficiently served by financial institutions.If social workers do not become involved with the indebtedness of financially vulnerable households in their caseloads … who will?
FIGURE 2: OVER-INDEBTED, FINANCIALLY VULNERABLE AND TOTAL NUMBER OF HOUSEHOLDS IN RESPONDENTS' CASELOADS
THE EFFECT OF FEATURE VIDEO ON THE ABILITY OF PRE-SERVICE TEACHERS TO APPRECIATE INDONESIAN TRADITIONAL ARTS
This paper is the research result that aimed to determine the effect of video-feature learning media on the ability of pre-service elementary school teachers to appreciate Indonesian traditional arts. The research method used was a true experiment with a between-subject design (two-group experimental-control design). The population of this study was 600 students of elementary school teacher education at a university in Surakarta, with a sample of 120 students assigned to the experimental and control groups. The sampling technique used was random sampling. The instrument validity used content validity and construct validity to test the validity of the test instruments and media. The data were collected using tests and interviews. They were then analysed using a t-test with SPSS 16 for Windows. The results of the study showed that there is a difference in the ability to appreciate arts between the students in the control and experimental groups. This can be seen from the mean post-test score of the experimental class of 13.727, which increased from the initial test (pre-test) mean of 10.045. On the other hand, the mean post-test score of the control class was 9.909, which increased only slightly from the mean of the initial test (pre-test) of 9.818. The conclusion of this study is that the video-feature media influences the ability of the pre-service teachers of the elementary school teacher education program.
INTRODUCTION
In the contents of the curriculum for elementary school education from time to time, even up to the 2013 curriculum, art education is still considered important. This is evidenced by the existence of the art-education material in every curriculum change. Art-education materials in the elementary school are in one package of subjects named Cultural Arts and Skills, often abbreviated as SBK. It means that there is awareness that the actual content of art education has important values in line with the other subject matters to support the achievement of holistic children's education goals, namely the development of personal and social competencies. Based on the results of a standard document review of the contents of elementary school education, both the 2006 school-based curriculum and the revised 2013 integrated thematic curriculum, art-education materials in elementary school specifically consist of several concentrations of fields such as the arts, music, dance, literary arts, and performing arts (Permendikbud No. 64/2013 & Permendikbud No. 21/2016). Since a long time ago, the ideal design at the curriculum level has still left implementation problems in the field, especially from the perspective of material delivery. In fact, in the field, art-education materials in elementary schools are not taught by teachers who have an educational background in one of the art fields. This is not comparable, for example, with physical education in the elementary school, which is taught by special teachers. In elementary school, art education is generally taught by the class teacher, whose competence from college is constructed to teach various subjects.
The aim of art education in elementary school is not only to develop the aspects of art skills, but also to integrate aspects of art into the daily lives of students. The aim is in line with the view of Campbell, Kassner, and Kassner (2010: 3-13) which formulates the opinion that art education for children should lead to three estuaries, namely the formation of mental attitude aspects, strengthening the aspects of thought, and mastering art skills. Hence, it's not just developing art skills but also forming characters. From this, it can be seen that art education in elementary schools actually has a Karsono, J.Daryanto, Rukayah. The Effect Of Feature Video On The Ability Of Pre-Service Teachers To Appreciate Indonesian Traditional Arts. Jurnal Indria Volume III, No 3, Desember 2018 193 strategic potential and position to contribute to children's character education. One form of character education through art education is the growing attitude of loving the country. This attitude can be fostered through the art-appreciation-education material that introduces the diversity of traditional arts in the archipelago. In fact, there are many kinds of traditional arts in the archipelago that need to be introduced to students in elementary school. If observed, the Indonesian archipelago or often called Nusantara has approximately 500 ethnic groups that inhabit "only" 3,000 islands out of about 17,000 islands owned by Indonesia (Suanda, 2000: 2). If only each ethnic group had just one or two traditional arts, then the number of traditional arts would be very large with diverse expressions. The wealth of Indonesian traditional culture, of course, is a socio-cultural capital that is important for the strategy to build the country's progress in the future. Unfortunately, the variety of traditional arts seems forgotten by the younger generation. Young people who are generally at school age, from elementary to high schools, do not get information as much and as diverse as Indonesian traditional arts. Even Suharni (2015) explained the current reality that school-age children are an "easy target" for the cultural colonization process. In this case, the most dominant is westernisation. The process of westernisation in the field of cultures can be seen from the tendency to strengthen the influence of western culture in all aspects, from lifestyles to arts. The culture of the music industry, for example, with capitalism and liberalism has alienated the young generation of Indonesia from the love of traditional Indonesian music. The trend of K-Pop music, or Korean Popular, has become very intense in today's young community. Likewise in the fields of dance, fine arts, literary arts, dress, film, and so on, the reality is not very different. The reality of the cultural marginalization of the archipelago tradition in the realm of industry is of course "somewhat" understandable because it involves capital, power, lifestyle, and uniformity of tastes. However, what should be watched carefully and even confronted immediately, of Karsono, J.Daryanto, Rukayah. The Effect Of Feature Video On The Ability Of Pre-Service Teachers To Appreciate Indonesian Traditional Arts. Jurnal Indria Volume III, No 3, Desember 2018 194 course, is the westernisation in the field of education. The process of division in the field of education can be seen for example in the content of art education material from elementary to high school. There are still a lot of materials that make western cultures the main material and not as a reference. 
For example, in the education of music arts from kindergartens to universities, the dominance of the use of western music and its diatonic scales is very strong. It is inversely proportional to the introduction of music and traditional arts that are so minimal. It is also the same in the field of performing arts, literature, fine arts, and dance. The problem of "westernisation" in the content of art in the world of education should be immediately challenged by collecting materials from the archipelago as much as possible to be disseminated to students at every level, including of course at the college level. This dissemination becomes strategic when it is linked to the learning situation of arts in elementary schools that still need improvement. One alternative improvement is to prepare elementary school teacher candidates who are studying at the PGSD (elementary school teacher education) study program to get to know and understand a variety of cultural diversity, especially the archipelago traditional arts. So far, the constraints of the dissemination of traditional arts to the younger generation, including elementary school teachers and preservice teachers are in terms of disconnection of information and limited information on archipelago traditional arts. It must be admitted that products related to the archipelago's traditional arts do not promise large financial benefits, including information media products. This is why capital owners are rarely willing to invest funds to produce archipelago traditional art information media packaging. From this process, finally the richness of the archipelago's traditional arts and their various aesthetic, ethical, and moral values gradually disappeared and became extinct, and are no longer recognized by their future generations. Even in an appreciation test of knowing various archipelago musical instruments for PGSD students of the faculty of teacher Karsono, J.Daryanto, Rukayah. The Effect Of Feature Video On The Ability Of Pre-Service Teachers To Appreciate Indonesian Traditional Arts. Jurnal Indria Volume III, No 3, Desember 2018 195 training and education in UNS Surakarta, it was found the fact that out of 120 students only 25% of them were able to properly mention 15 various Indonesian musical instruments and their origins. The rest were only able to mention fewer than 15 various Indonesian musical instruments, with some errors in mentioning their origins. This condition may make the initial representation that students as younger generation and successors of elementary education are less familiar and less appreciative of the arts and cultural products of the archipelago. In fact, if we look closely in the future when they have to teach arts in elementary schools, there are a lot of art learning contents related to the culture of the archipelago. If appreciative skills and the expansion of the archipelago art insight are not improved,
Results
The implementation of this research activity began with the try out, pre-test, treatment, and post-test.
In the try out, 80 items about the knowledge of art appreciation that would be used for the pre-test and post-test of the experimental group and the control group were obtained.
After the try out was done, the 80 items were sorted into 2 sets of test items to be used for the pre-test and post-test. After the pre-test, the experimental group was given the treatment using the content of the video; after the discussion and reflection, the post-test was conducted. After carrying out the pre-test, the implementation of the treatment, and the post-test, data processing using SPSS 16 for Windows was carried out. Prerequisite tests were carried out using normality and homogeneity tests. The following is the analysis of the data of the treatment results (Data Normality Test, Table 1).
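As a rough illustration of this analysis workflow (the authors ran it in SPSS 16), the sketch below shows how the prerequisite normality and homogeneity checks and the independent-samples t-test could be reproduced in Python with SciPy. The score vectors are hypothetical placeholders, not the study's raw data.

```python
# Illustrative sketch only, not the authors' SPSS procedure.
from scipy import stats

experimental_post = [14, 13, 15, 12, 14, 13]   # hypothetical post-test scores
control_post      = [10, 9, 11, 10, 9, 10]     # hypothetical post-test scores

# Normality check per group (Shapiro-Wilk); p > 0.05 means no evidence
# against normality, matching the criterion reported for Table 1.
for name, scores in [("experimental", experimental_post), ("control", control_post)]:
    w, p = stats.shapiro(scores)
    print(f"{name}: Shapiro-Wilk W={w:.3f}, p={p:.3f}")

# Homogeneity of variances (Levene's test).
lev_stat, lev_p = stats.levene(experimental_post, control_post)
print(f"Levene: stat={lev_stat:.3f}, p={lev_p:.3f}")

# Independent-samples t-test; switch to Welch's version if variances differ.
t_stat, t_p = stats.ttest_ind(experimental_post, control_post,
                              equal_var=lev_p >= 0.05)
print(f"t-test: t={t_stat:.3f}, p={t_p:.3f}")
```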
Feature-video packaging
by using presenters who are the same age as the students makes the video more "alive" and interesting.The closeness between feature videos and viewers (students) is built because of the proximity of the narrator's age.In addition, it was also constructed with the typical greeting "Hello Sahabat Nusantara ...!" The word "Sahabat (best friend)" show the figure of instructor/teacher/presenter as the featured video in this study.The research results of the Archipelago Art Video Feature are also in line with the results of a survey by Christ, Arya, and Chiu regarding the practice of video use in the education of preservice teachers in several countries.The results of the research show that the use of video in the education of pre-service teachers has an ones.However, in the future it is necessary to conduct further research, which focuses on the affective aspects of the students when accessing or using video as a learning resource.Further research is important to map whether students feel pleasure or enjoyment when studying with video compared to other media.The follow-up research also serves to retest the results of Wang & Antonenko's (2017) study of the powerful effects of videos in attracting students' attention and presenting learning satisfaction.In addition, further research on the ideal video content used in learning and the duration of access to the video during learning also becomes an interesting issue to be studied in the future.The results of the research on the feature video of Indonesian traditional art can certainly be a reference for the implementation of art appreciation learning in Indonesia, especially in universities.Feature video media can be a means to provide information about traditional arts along with their wealth of meaning to the younger generation.However, the challenge that needs to be answered in the future is the difficulty of documenting a wide variety of unique Indonesian traditional arts from various regions.This difficulty happened in the feature video production in this research.Due to the various limitations, this research finally produced only 2 featured-video prototypes with the theme of Javanese Gamelan and Banyumas Bamboo Music.The vastness of Indonesia's territory and the diversity of ethnics, arts, and cultures are the challenge in creating a learning media of art appreciation in the form of video.Based on the findings and challenges going forward, the research on Indonesian traditional art can be further enhanced, especially in the effort to introduce the richness of forms, expressions, and meanings of arts to the younger generation, general students, pre-service teachers, and society in general.
Table 1. Data normality test: based on the analysis of the pre-test and post-test data of both groups, the data were normally distributed (p > 0.05).
Pre-Test and Post-Test Results of the Groups
Table 4. The Results of the Independent-Samples t-Test
Compliance of functional exercises in school-age children with limb fractures: implication for nursing countermeasures
Background Functional exercise is essential to the recovery of patients with fractures. We aimed to evaluate compliance with functional exercises in school-age children with limb fractures, to provide evidence for the clinical management and nursing care of children with limb fractures. Methods School-age children with limb fractures treated in our hospital from January 1, 2020 to June 30, 2021 were selected. The characteristics and postoperative functional exercise compliance of the included children were analyzed. Pearson correlation and logistic regression analyses were conducted to identify the influencing factors of compliance with functional exercises. Results A total of 328 children with limb fractures were included; the incidence of compliance with functional exercise was only 35.98%. Pearson correlation analysis showed that age (r = 0.707), being the only child of the family (r = 0.537), guardians (r = 0.642) and type of temperament (r = 0.635) were correlated with compliance with functional exercises in school-age children with limb fractures (all p < 0.05). Logistic regression analysis indicated that age ≤ 10 y (OR 2.913, 95% CI 2.091 ~ 3.611), being the only child of the family (OR 2.006, 95% CI 1.683 ~ 2.558), being guarded by grandparents (OR 1.512, 95% CI 1.201 ~ 2.118) and non-easy-going temperament (OR 4.127, 95% CI 3.811 ~ 4.902) were the influencing factors of non-compliance with functional exercises in children with limb fractures (all p < 0.05). Conclusions School-age children have poor compliance with functional exercises after limb fractures, and there are many influencing factors. For children with these risk factors, health care providers should actively intervene in nursing to improve children's exercise compliance and the rehabilitation effect.
Background
Previous studies [1][2][3] have reported that fractures in school-age children are very common in clinical practice, with an incidence ranging from 2.04% to 11.46%. School-age children are in the stage of gradual bone ossification [4]. Their bone composition contains more colloid and less calcium and is highly elastic, but the bones are still not strong and are prone to fractures [5]. Fractures of the limbs are more common in school-age children, mainly due to the higher physical and psychological maturity of children during this period, their greater curiosity about things, and their limited self-protection ability and life experience [6,7]. Besides, these children spend a relatively long time in school during this period, and parents' awareness of protection is relatively relaxed, so children are prone to fractures during daily activities [8,9]. Therefore, the prevention and care of school-age children's fractures have an important influence on the prognosis of children.
School-age children are the most vigorous period of the development of various physiological functions in their lives [10]. There is a big difference in the anatomy, injury mechanism and healing characteristics of adults [11]. Fracture injuries in children are often accompanied by bone damage. Improper treatment will lead to shorter bone development and deformities such as crooked, oblique, etc. [12,13]. Children have a high disability rate and are accompanied by varying degrees of dysfunction, but their healing speed is faster than that of adults [14]. Therefore, how to promote the functional recovery of children with fractures is particularly important. With the development and improvement of medical technology, people have higher requirements for the treatment of diseases. They are not only satisfied with curing the disease and injury itself, but also pay more attention to the long-term prognosis of the disease. Therefore, in the process of fracture treatment, the functional exercise of the affected limb is particularly important, which directly has a serious impact on the later functional recovery [15]. The latest rehabilitation concept proposes that in the acute phase, symptoms including congestion, swelling, increase in the number of fibers and collagen cells, and shedding of necrotic cells at the acute stage will cause local adhesions, which will affect the later functional exercises [16,17]. Effective functional exercises in the early stage of the fracture will promote local blood circulation, reduce swelling, prevent compartment syndrome, and promote functional recovery [18]. Previous studies [19,20] have shown that children with early fractures suffer from severe pain, coupled with the unfamiliarity of the environment during hospitalization, which makes them feel nervous, scared, and crying, resulting in poor exercise compliance, missing the best exercise opportunity, and directly affecting the effect of surgery and quality of life of children in later stages. Therefore, it is necessary to explore the factors affecting the compliance of functional exercises in school-age children with limb fracture, to provide evidence support for clinical treatment and care of children with limb fracture.
Ethics
In this study, all methods were performed in accordance with the relevant guidelines and regulations. This present study was a prospective cohort study design, the study had been checked and approved by the ethical committee of our hospital with approval number: (202001011-2). Furthermore, the written informed consents had all been obtained from the guardians of included children, and children's assent to participate in the study was obtained accordingly.
Population
This study selected school-age children with limb fractures who admitted to our department from January 1, 2020 to June 30, 2021 as the research population. The inclusion criteria for children were as following: ①The children were school-age children aged 6 to 15 years old; ②The children could understand language expressions; ③The children could express their ideas and thought on their own; ④The children have accompanied guardians during their hospitalization; ⑤The guardian of the child was aware of the purpose of this study and voluntarily signed the informed consent form. Exclusion criteria for children were as following: ①Children with old fractures; ②Children with congenital diseases, such as congenital heart disease, congenital bone malformations; ③Children with visceral damage, such as liver rupture, spleen rupture, etc.; ④Children or their guardians were unwilling to participate in this study.
Definition and evaluation of functional exercise compliance
Exercise compliance was defined as the degree of consistent behavior of children performing rehabilitation exercises according to the requirements and recommendations of medical staff. We used the previously reported postoperative functional exercise compliance questionnaire [21] to analyze the children's exercise compliance. The questionnaire involved postoperative functional exercise knowledge, doctor's execution, daily functional exercise time, daily functional exercise times, conscious exercise and daily life behavior ability. Each item used a 5-level scoring method according to the content of the answer, corresponding to 0 ~ 4 points, the higher the score indicated the better the compliance, and scores of 3 points or above of all items indicated full compliance, otherwise it was rated as non-compliance.
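A minimal sketch may make this scoring rule concrete: six items scored 0-4 each, with full compliance requiring a score of 3 or above on every item. The item labels below are shorthand for the questionnaire dimensions named in the text, not its exact wording.

```python
# Illustrative implementation of the compliance rule described above.
ITEMS = [
    "exercise_knowledge",          # postoperative functional exercise knowledge
    "follows_doctor_instructions", # doctor's execution
    "daily_exercise_time",
    "daily_exercise_frequency",
    "conscious_exercise",
    "daily_living_ability",
]

def is_compliant(item_scores: dict) -> bool:
    """Return True when all six items score >= 3 (full compliance)."""
    if set(item_scores) != set(ITEMS):
        raise ValueError("scores must cover exactly the six questionnaire items")
    if any(not 0 <= s <= 4 for s in item_scores.values()):
        raise ValueError("each item is scored on a 0-4 scale")
    return all(item_scores[item] >= 3 for item in ITEMS)

# Example: one item below 3 -> rated as non-compliance.
child = {"exercise_knowledge": 4, "follows_doctor_instructions": 3,
         "daily_exercise_time": 2, "daily_exercise_frequency": 3,
         "conscious_exercise": 3, "daily_living_ability": 4}
print(is_compliant(child))  # False
```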
Data collection
Two investigators collected the following data through communication with the guardians or children and from the related medical records: age, gender, body mass index (BMI), whether the child was the only child of the family, place of residence, guardians, type of fracture, site of fracture, cause of fracture and type of temperament. Any disagreement was resolved by further discussion.
This study used the Chinese School-age Children Temperament scale (CSTS) [22] to assess the temperament characteristics of children. The questionnaire has a total of 99 items belonging to 9 temperament dimensions, and each item is scored 1 to 6 points in 6 levels: almost never, very rare, rare, common, very common, and almost always. According to the score, the children were rated as easy-going, troublesome, initiate slow and intermediate temperament accordingly. Professionally trained nursing staff used CSTS communicating with family members or guardians who know the children's life habits best. Family members who understand the children's life habits have the longest contact time with the children, and observe the children's behavior for a long and careful time, and fully understand the personality characteristics of children. The nursing staff explained the purpose of the investigation to the family members, and after obtaining their understanding and consent, we correctly guided the guardians to fill in.
Statistical analysis
In this study, SPSS 23.0 statistical software was used to process the data. The count data were expressed as percentage (%), the comparison between groups was compared by chi-square test, the continuous variables were expressed as mean ± standard deviation, and the comparison between groups was conducted by t test. We selected the statistically significant outcome variables of univariate analysis as independent variables, and took the factors that affect children's functional exercises as dependent variables, and analyzed the influencing factors of children's functional exercises by Logistic regression analysis. Besides, Pearson correlation analysis was conducted to evaluate the compliance to functional exercises and related characteristics. In this study, P < 0.05 was considered as the difference between the groups was statistically significant.
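The multivariate step was performed in SPSS 23.0; purely as a hedged illustration, the sketch below shows how an equivalent logistic regression with odds ratios and 95% confidence intervals could be fitted in Python with statsmodels, following the variable coding later given in Table 3. The DataFrame and its values are hypothetical placeholders, not the study data.

```python
# Illustrative sketch only (the authors used SPSS 23.0), with placeholder data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

np.random.seed(0)
n = 328  # sample size reported in this study

# 0/1 indicators coded as in Table 3 (values here are random placeholders).
df = pd.DataFrame({
    "non_compliance": np.random.binomial(1, 0.64, n),
    "age_le_10": np.random.binomial(1, 0.5, n),
    "only_child": np.random.binomial(1, 0.5, n),
    "guarded_by_grandparents": np.random.binomial(1, 0.4, n),
    "non_easy_going_temperament": np.random.binomial(1, 0.5, n),
})

X = sm.add_constant(df[["age_le_10", "only_child",
                        "guarded_by_grandparents", "non_easy_going_temperament"]])
model = sm.Logit(df["non_compliance"], X).fit(disp=0)

# Odds ratios and 95% CIs are the exponentiated coefficients and bounds.
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_lower": np.exp(model.conf_int()[0]),
    "CI_upper": np.exp(model.conf_int()[1]),
    "p": model.pvalues,
})
print(odds_ratios.round(3))
```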
The characteristics of included children
A total of 328 children with limb fracture were included, of whom 118 children had good compliance to functional exercise, the incidence of compliance to functional exercise in school-age children with limb fractures was 35.98%. As presented in Table 1, there were significant differences in the age, only child of family, guardians and type of temperament between compliance group and non-compliance group (all p < 0.05). No significant differences in the gender, BMI, place of residence, type of fracture, site of fracture and causes of fracture between compliance group and non-compliance group were found (all p > 0.05).
Pearson correlation analysis
As indicated in Table 2, Pearson correlation analysis showed that age(r = 0.707), only child of family(r = 0.537), guardians(r = 0.642) and type of temperament(r = 0.635) were correlated with compliance to functional exercises in school-age children with limb fractures (all p < 0.05).
Logistic regression analysis
The variable assignments of the multivariate logistic regression are shown in Table 3. As indicated in Table 4, age ≤ 10 y (OR 2.913, 95% CI 2.091 ~ 3.611), being the only child of the family (OR 2.006, 95% CI 1.683 ~ 2.558), being guarded by grandparents (OR 1.512, 95% CI 1.201 ~ 2.118) and non-easy-going temperament (OR 4.127, 95% CI 3.811 ~ 4.902) were the influencing factors of non-compliance with functional exercises in children with limb fractures (all p < 0.05).
Discussions
Early functional exercises for fractures are mostly within 2 weeks after surgery, mainly to exercise muscle contraction and relaxation, and mid-term functional exercises are mostly within 3 to 6 weeks after surgery [23]. The exercise intensity can be gradually strengthened and large joint activities can be attempted, but activities that are not conducive to fracture connection and stability still need to be restricted [24,25]. Later functional exercises are mostly carried out after 6 weeks after surgery, and the limb function is gradually restored through comprehensive joint and muscle exercises [26]. It is very important to actively carry out nutritional supplements and scientific functional exercises after limb fractures. Postoperative functional exercises can not only avoid complications such as postoperative joint stiffness and muscle atrophy, but also promote the healing of the patient's fractures [27]. However, functional exercise after surgery is more challenging in school-age children since they are lively and active, and their safety awareness and self-control ability are not strong enough [28,29]. Currently, the status and influencing factors of compliance to functional exercise in school-age children with limb fractures remain unclear. The results of this study have found that the incidence of compliance to functional exercise in school-age children with limb fractures is 35.98%, and for children with age ≤ 10y, only child of family, guarded by grandparents, non-easy-going temperament, they may have less compliance to functional exercises after surgery, early targeted nursing care are needed for those children to improve the compliance to functional exercises of children.
Functional exercise after limb fracture operation needs to last a relatively long period of time, so functional exercise compliance is a key factor. The results of this study show that school-age children's compliance with functional exercises after limb fractures is low, which suggests that compliance with functional exercises after school-age children's limb fractures is affected by multiple factors. Individualized interventions can effectively improve the compliance of school-age children with limb fractures [30]. Functional exercise compliance helps children recover as soon as possible. Compliance refers to the extent to which patients accept and obey the prescribed medical and nursing measures and the corresponding behaviors [31]. This study investigated and analyzed the compliance of school-age children with limb fractures and found that children under 10 years of age have poor early exercise compliance, mainly because of their relatively young age and their relatively poor understanding, communication and self-care abilities, so the importance of functional exercise cannot be fully understood by these children. Therefore, for children aged ≤ 10 y, health care providers should use simple and understandable language to communicate with them when they perform functional exercises, and integrate the exercises with games to increase the children's interest in exercise and thereby improve their compliance with functional exercise. Children with limb fractures will still have obvious pain, swelling and movement dysfunction in the early postoperative period, and the children will be disturbed by physical discomfort or emotional states of tension and fear [32,33]. Older school-age children can adapt better to the postoperative state of fractures and cooperate better with functional exercises. Children who are only children and whose main caregivers are grandparents receive more attention in their daily lives and have relatively poor tolerance of pain [34]. When they cannot maintain functional exercises, their caregivers are more likely to compromise and give up, thereby reducing compliance [35]. Temperament characteristics are the psychological characteristics of a person's personality, and they are reported to interact with physical diseases and their treatment [36,37]. School-age children with an easy-going temperament have a weaker stress response to fractures and surgical treatment, strong receptivity, positive emotions, and good compliance with functional exercises [38]. Therefore, in the functional exercise work of school-age children after limb fracture surgery, it is necessary to integrate the above influencing factors and pay attention to the relevant characteristics of individuals. The fun of functional exercise can be improved through methods such as children's game-style functional exercises and goal-completion reward programs to help children improve their compliance.
Table 3. The variable assignments of the multivariate logistic regression
Factors | Variables | Assignment
Non-compliance | Y | Yes = 1, no = 2
Age (y) | X1 | ≤ 10 = 1, > 10 = 2
Only child of family | X2 | Yes = 1, no = 2
Guardians | X3 | Grandparents = 1, parents = 2, others = 3
Type of temperament | X4 | Non-easy-going temperament = 1, easy-going temperament = 2
Studies [39] have shown that the complexity of the treatment plan may influence the patients' compliance. Although school-age children have developed cognitive abilities, they have poor understanding of things and self-control, things and treatment programs cannot be understood easily [40]. Therefore, when formulating early functional exercise programs for children with affected limbs, attention should be paid to explanations as simple as possible, focus on demonstrations, and proceed in a planned and segmented manner, so that the children can easily accept and understand. The general exercise method is from simple to complex, time from short to long, frequency from less to more, the intensity of exercise gradually increases, and the transition from passive exercise to active exercise gradually [41,42]. It is appropriate for the child to accept and not cause severe pain, and if necessary, give appropriate analgesics in accordance with the doctor's instructions before the exercise [43,44], so that the children can gradually adapt and not have fear of the pain caused by the functional exercise of the affected limb, and increase their compliance with the early functional exercise.
Several limitations of the present study must be considered. Firstly, participants aged ≥ 12 years could be considered adolescents and analyzed as a separate group, since there are important differences between children and adolescents that affect compliance; however, in China an age of ≥ 16 years is generally considered adolescent, and most of our included children were in the age range of 9-12 years, so the study may be underpowered to detect differences between these two groups. Secondly, our study is a single-centered observational study, the sample size is small, and there may be other variables that affect postoperative functional exercises. Therefore, multi-centered studies with larger sample sizes and rigorous designs in different areas are needed to further evaluate children's compliance with postoperative functional exercises in the future.
Conclusions
In summary, school-age children's compliance with postoperative functional exercises for limb fractures is low, which should arouse great attentions from health care providers and related guardians. Compliance with postoperative functional exercises is affected by many factors including the age of the child, the only child, the main caregiver, and the characteristics of temperament. Those influencing factors should be considered to take targeted intervention measures to help children and their parents correctly and actively carry out postoperative functional exercises, to improve the compliance with postoperative functional exercises, thereby improving the recovery of children.
Abbreviations: BMI: Body mass index; CSTS: Chinese School-age Children Temperament Scale.
Effect of Pesticides on Biological Control Potential of Neoscona theisi (Araneae: Araneidae)
Abstract The present study was designed to record the effect of λ-cyhalothrin, Bifenthrin, and Glyphosate on the mortality, avoidance behavior, foraging activity, and activity of Acetylcholine esterase (AChE) and Carboxylesterase (CarE) in Neoscona theisi (Walckenaer, 1841). Highest mortality (70%) in N. theisi was recorded against λ-cyhalothrin. However, Glyphosate was found to be least toxic. Spider spent less time on insecticides/herbicide-treated surfaces. Insecticides/herbicide-treated N. theisi consumed less prey than untreated control spiders. Similarly, when N. theisi were offered insecticide/herbicide-treated prey, they consumed significantly less. Increased AChE and CarE activities were recorded in insecticides/herbicide-treated spiders as compared to control group. Total protein contents were less in insecticides/herbicide-treated spiders than control group. The results revealed that λ-cyhalothrin is more harmful to spiders as compared to Bifenthrin and Glyphosate. It is suggested that the effect of all pesticides used in agro-ecosystem on beneficial insects should be evaluated before using them in the fields.
Spiders (Arachnida: Araneae), with about 47,771 described species (WSC 2018), are among the most important groups of natural predators in the agro-ecosystem. They are highly diversified and significantly suppress insect pests in different agricultural fields (Anis Joseph and Premila 2016). They feed on large numbers of small-sized and soft-bodied prey but, being non-herbivorous, are harmless to field crops (Pearce and Zalucki 2006, Chatterjee et al. 2009, Rezac et al. 2010, Hakeem et al. 2018).
Pesticides are highly successful at killing pests, but they also unconsciously reduce the nontarget organisms and natural predators of insect pests including spiders (Amalin et al. 2000, Deng et al. 2006, Cole et al. 2010). Spiders are highly at risk to pesticides that are being used in agricultural fields injudiciously (Pekar 2013). These chemicals affect their longevity, reproduction, defense, development, physiology, mobility, and activities of enzymes (Stark and Banks 2003, Moura et al. 2006, Tahir et al. 2012, Miao et al. 2014, Ndakidemi et al. 2016.
Spiders act as a buffer in agro-ecosystems and keep the pest densities below economic injury level but pesticides reduce their efficiency as biological control agent. Pesticides not only cause direct mortality in spiders but also diminish their efficiency as natural predators (Ataniyazova et al. 2001, Nazarova, 2006, Desneux et al. 2007, Marko et al. 2009, Hanna 2012. Sublethal doses of pesticides weaken the sensory system of spiders and alter their prey choice in the agro-ecosystem (Wrinn et al. 2012, Leccia et al. 2016, Petcharad et al. 2018. After pesticides exposure, these natural predators are unable to differentiate among different types of insect pests. Due to the effect of pesticides, they are unable to capture the most beneficial prey or they feed on toxic prey. Toxic prey consumption diminishes the predator fitness and prey capture potential (Toft 1999, Schmidt et al. 2012.
Acetylcholine esterase (AChE) is a key enzyme that catalyzes the hydrolysis of the neurotransmitter acetylcholine (Oehmichen and Besserer 1982, Wang et al. 2004). Carboxylesterases (CarE) present in insects also detoxify various chemicals and act as metabolic activators of various drugs, carcinogens, and ecological toxins. Changes in the activity of AChE and CarE in insects produce metabolic resistance against insecticides (Van Leeuwen and Tirry 2007, Ross et al. 2010, Miao et al. 2016, Jouni et al. 2018, Zeng et al. 2018). The present study was undertaken to evaluate the effects of λ-cyhalothrin, Bifenthrin (two pyrethroid insecticides), and Glyphosate (a herbicide) on the mortality, avoidance behavior, and foraging behavior of Neoscona theisi (Walckenaer, 1841), an orb-web spider. This spider species belongs to the family Araneidae and is commonly found in agro-ecosystems of Punjab, Pakistan (Tahir and Butt 2009). We also studied the effect of insecticide and herbicide exposure on AChE and CarE activities and protein contents in spiders.
Orb-web Spiders Collection and Maintenance
Live adult orb-web spiders were collected from agriculture fields of University of the Punjab Lahore. Spiders were captured by hand picking method. Sampling was conducted from April to August 2018. Adult spiders (only females) from fields were brought to the laboratory in the Department of Zoology, Government College University Lahore. In the laboratory, they were placed individually in separate plastic jars (3 cm wide and 12 cm long) to avoid cannibalism. Mouths of jars were covered with mesh net cloth. Each jar also contained wet soil to maintain humidity at the bottom. Spiders were fed on house flies (Musca domestica) in the laboratory. They were acclimatized in the laboratory for 7 d before using them for experiment (Hof et al. 1995). Spiders were identified by Dr. Hafiz Muhammad Tahir, Department of Zoology, GC University Lahore. He is working on spiders from last 15 yr.
Chemicals
In the present study, we evaluated the toxicity of three commercial pesticides, λ-cyhalothrin, Bifenthrin, and Glyphosate. Recommended field concentrations of λ-cyhalothrin (200 ml/100 liters per acre), Bifenthrin (40 ml/100 liters per acre), and Glyphosate (100 ml/100 liters per acre), respectively, were used in the experiment. The field-recommended doses were used as given in a hand book for agriculture extension agents on the pesticides registered with recommendations for safe handling and use in Pakistan (www.parc.gov.pk).
Susceptibility Tests
For conducting susceptibility tests against Bifenthrin, λ-cyhalothrin, and Glyphosate, 40 spiders were divided into four groups, i.e., 1) λ-cyhalothrin group, 2) Bifenthrin group, 3) Glyphosate group, and 4) control group. The number of spiders in each group was 10. Whatman filter papers were dipped in the recommended field dose of λ-cyhalothrin (0.5 ml/250 ml of water), Bifenthrin (0.25 ml/625 ml of water), or Glyphosate (1 ml/100 ml). Filter papers of the control group were dipped in distilled water. Separate filter papers were used for each chemical. Filter papers were allowed to air dry for 1 h at room temperature and then placed in petri plates. A single spider was released in each petri plate and exposed to the insecticide-, herbicide-, or water-impregnated filter paper for 1 h. After 1 h of exposure, each spider was transferred to a clean jar. No food was provided to the spiders during the experiment. Mortality was recorded every 4 h up to 24 h.
Avoidance Behavior
To investigate the avoidance behavior of spiders, round Whatman filter papers were cut into two equal halves. One half of each filter paper was dipped in the recommended field concentration of λ-cyhalothrin (0.5 ml/250 ml of water), Bifenthrin (0.25 ml/625 ml of water), or Glyphosate (1 ml/100 ml), while the other half was dipped in distilled water. Filter papers were air dried for an hour, and then both halves were rejoined with scotch tape and placed in a petri plate. A single spider was released in each petri plate, and the time (in seconds) spent by each spider on the pesticide-treated or distilled-water-treated part of the filter paper was recorded. Spiders were allowed to acclimatize for 15 min before recording the data. For each spider, data were recorded for 30 min (1,800 s). Forty-five spiders were used in this experiment, 15 for each pesticide. The experiment was replicated thrice.
Foraging Behavior
For this study, the following two experiments were conducted.
Offering of prey to insecticide-exposed spiders
Adult spiders were divided into experimental and control groups. The number of spiders in each group was 10. Each spider was placed in a container (6 cm wide and 12 cm long). Leaves and twigs were placed in the container as anchor points for web building. To standardize the hunger level of the spiders, they were first fed house flies to satiation at three different times of the day and then starved for 3 d. Each spider of the experimental group was exposed for 30 min to filter papers treated with a sublethal dose of λ-cyhalothrin (one-eighth of the field dose), while each spider of the control group was exposed to a distilled-water-treated filter paper. Spiders were exposed to λ-cyhalothrin by releasing them on λ-cyhalothrin-impregnated filter paper in a closed container. Spiders of both groups were offered the same number of prey, M. domestica (n = 10). A similar experimental setup was used for Bifenthrin and Glyphosate. The sublethal concentrations used for Bifenthrin and Glyphosate were one-sixth and one-third of the recommended field rate, respectively. It was ensured that the size of the flies was the same for each group. Forty spiders were used in this experiment. The number of flies consumed by spiders of the experimental and control groups was compared using an independent t-test.
Offering of insecticide-exposed prey to spiders
In this experiment, spiders were offered insecticide-exposed prey. Musca domestica were divided into four groups. Groups I, II, and III were exposed to λ-cyhalothrin, Bifenthrin, and Glyphosate, respectively. Group IV was untreated. Similarly, spiders (n = 40) were divided into four groups. Each group contained 10 spiders. Each spider of Group I was offered prey (n = 10) treated with a sublethal dose of λ-cyhalothrin (one-eighth of the field dose), and each spider of Groups II and III was fed prey treated with Bifenthrin (one-sixth of the field dose) and Glyphosate (one-third of the field dose), respectively. Group IV was taken as the control and fed untreated prey. The number of prey consumed by spiders was recorded up to 24 h. The number of flies consumed by spiders of the experimental and control groups was compared using an independent t-test.
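The sublethal exposures used in these two foraging experiments are fixed fractions of the field-rate dilutions given in the susceptibility tests. Purely as an illustration of that arithmetic (the dilution volumes are those quoted in this section; the dictionary layout and variable names are ours, not part of the study), a short Python sketch:

```python
# Field-rate dilutions from the susceptibility tests (ml of product per ml of water)
field_rate = {
    "lambda-cyhalothrin": 0.5 / 250,   # 0.5 ml in 250 ml water
    "bifenthrin":         0.25 / 625,  # 0.25 ml in 625 ml water
    "glyphosate":         1.0 / 100,   # 1 ml in 100 ml water
}

# Sublethal doses used in the foraging experiments, as fractions of the field rate
sublethal_fraction = {
    "lambda-cyhalothrin": 1 / 8,
    "bifenthrin":         1 / 6,
    "glyphosate":         1 / 3,
}

for chem, rate in field_rate.items():
    sub = rate * sublethal_fraction[chem]
    print(f"{chem}: field rate {rate:.6f} ml/ml, sublethal {sub:.6f} ml/ml")
```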
Biochemical Tests
To measure the activities of AChE and CarE in insecticide-exposed and unexposed spiders, biochemical tests were performed in the laboratory. The activity of AChE was determined by the method of Ellman et al. (1961). CarE activity was measured by the method of Van Asperen (1962). Total protein contents were estimated by the Bradford method (Bradford 1976).
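For readers unfamiliar with the Ellman assay, the sketch below shows how an absorbance change per minute is commonly converted into an AChE specific activity in the units reported later (pmol/min/mg of protein). The extinction coefficient, path length, and all numerical inputs here are illustrative assumptions, not values measured in this study.

```python
# Illustrative conversion of an Ellman-assay reading to AChE specific activity.
# All numbers below (absorbance change, volume, protein amount, extinction
# coefficient) are placeholders, not data from this study.
EXT_COEFF_TNB = 13_600      # M^-1 cm^-1, classical Ellman (1961) value (assumed)
PATH_LENGTH_CM = 1.0        # cuvette path length (assumed)

def ache_specific_activity(dA_per_min, assay_volume_l, sample_protein_mg):
    """Return AChE activity in pmol of substrate hydrolyzed per min per mg protein."""
    # Beer-Lambert: dA/min / (eps * l) gives mol per liter per min; times volume gives mol/min
    mol_per_min = dA_per_min * assay_volume_l / (EXT_COEFF_TNB * PATH_LENGTH_CM)
    return mol_per_min * 1e12 / sample_protein_mg   # mol -> pmol, normalized per mg protein

print(ache_specific_activity(dA_per_min=0.05, assay_volume_l=0.001,
                             sample_protein_mg=0.2))
```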
Statistical Analysis
Normality of the data was assessed using the Shapiro-Wilk test. A one-way analysis of variance (ANOVA) was applied to compare mortality among the different treatments. The number of flies consumed by spiders of the experimental and control groups was compared using an independent t-test. A paired t-test was used to compare the time spent by spiders on the pesticide-treated and control parts of the filter paper. The predation rates of spiders of the control and insecticide-treated groups were compared using ANOVA followed by Tukey's test for multiple comparisons. Enzyme activity against the insecticides/herbicide was compared using ANOVA. All tests were performed using SPSS (version 22).
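The analyses above were run in SPSS; purely as an illustration, an equivalent workflow can be sketched in Python with SciPy and statsmodels. The arrays below are placeholder numbers, not the study's data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# prey consumed per spider, one array per treatment group (placeholder numbers)
groups = {
    "control":    np.array([8, 9, 7, 8, 10, 8, 9, 7, 8, 9]),
    "lambda-cyh": np.array([3, 2, 3, 2, 3, 3, 2, 3, 2, 3]),
    "bifenthrin": np.array([4, 4, 5, 3, 4, 4, 5, 4, 3, 4]),
    "glyphosate": np.array([5, 6, 5, 5, 6, 5, 5, 6, 5, 5]),
}

# 1) normality check (Shapiro-Wilk) per group
for name, x in groups.items():
    print(name, stats.shapiro(x).pvalue)

# 2) one-way ANOVA across the four groups
print(stats.f_oneway(*groups.values()))

# 3) Tukey HSD for pairwise multiple comparisons
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(x) for x in groups.values()])
print(pairwise_tukeyhsd(values, labels))

# 4) independent t-test (treated vs. control) and paired t-test
#    (time spent on treated vs. untreated half of the filter paper)
print(stats.ttest_ind(groups["lambda-cyh"], groups["control"]))
t_treated   = np.array([442.0, 450.0, 430.0])     # placeholder times (s)
t_untreated = np.array([1356.0, 1350.0, 1370.0])
print(stats.ttest_rel(t_treated, t_untreated))
```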
Avoidance Behavior
The time (442 ± 44.5 s) spent by N. theisi on the λ-cyhalothrin-treated part was less than the time (1,356 ± 44.5 s) spent on the untreated part of the filter paper. The difference was statistically significant (t = −14.5; P = 0.001). The time (642 ± 47 s) spent on the Bifenthrin-treated part was less than the time (1,158 ± 48 s) spent on the untreated part of the filter paper; again, a statistically significant difference was observed (t = −7.56; P = 0.002). Similarly, N. theisi spent less time (754 ± 31.2 s) on the Glyphosate-treated part of the filter paper than on the untreated part (1,044 ± 31.2 s) (t = −8.48; P = 0.001, Fig. 2).
Foraging Behavior
Offering of prey to insecticide-exposed spiders
Results showed that all spiders exposed to pesticides consumed significantly fewer prey than the control group of spiders (F 3,46 = 14.34; P < 0.05). Prey consumption was lowest in λ-cyhalothrin-exposed spiders (2.66 ± 0.33), which was significantly less than the prey consumption of the untreated control group (8.33 ± 0.88). The prey consumption of Bifenthrin- and Glyphosate-exposed spiders was 4.00 ± 0.57 and 5.33 ± 0.33, respectively (Fig. 3). Tukey's test showed that, although the prey consumption of the Bifenthrin- and Glyphosate-treated spiders was significantly lower than that of the control group, the Bifenthrin- and Glyphosate-treated groups did not differ significantly from each other (Fig. 3).
Estimation of Total Protein (mg/ml)
Total protein contents were decreased in pesticide-treated groups as compared with the control group. A significant difference was recorded in the total protein content of the treated groups as compared with the control (F 3,19 = 102.8; P < 0.001). Results of Tukey's test showed that all treated groups differed nonsignificantly from each other but differed from the control (Fig. 5).

[Fig. 2. Comparison of total time (seconds) spent by N. theisi on λ-cyhalothrin- and water-treated filter paper, Bifenthrin- and water-treated filter paper, and Glyphosate- and water-treated filter paper. Error bars show the standard error.]

[Fig. 3. Feeding of M. domestica (n = 10) by N. theisi exposed to λ-cyhalothrin (one-eighth of field dose), Bifenthrin (one-sixth of field dose), and Glyphosate (one-third of field dose) in comparison with the control group (24 h). Error bars show the standard error.]
Activity of AChE (pmol/min/mg)
There was a significant difference in the level of AChE in the treated groups compared with the control group (F 3,19 = 3.091; P = 0.047). The highest level of AChE was recorded in N. theisi treated with λ-cyhalothrin, followed by Bifenthrin and Glyphosate. The lowest level of AChE was recorded in the control group. It is evident from Fig. 6 that the AChE activity of the control and Glyphosate-treated groups did not differ significantly. Similarly, a nonsignificant difference was observed between the AChE activity of the groups treated with λ-cyhalothrin and Bifenthrin.
Estimation of activity of CarE (mM/min/mg of protein)
There was a significant difference in the level of CarE between the insecticide-treated and control groups (F 3,19 = 14.41; P < 0.001). The highest level of CarE was recorded in N. theisi treated with λ-cyhalothrin, followed by Bifenthrin and Glyphosate. The lowest level of CarE was recorded in the control group. A nonsignificant difference was observed between the CarE activities of the treated groups. Results of Tukey's test showed a significant difference between the control and treated groups (Fig. 7).
Discussion
Pyrethroids are neurotoxic insecticides that are commonly used for the control of different insect pests of field crops to enhance crop yields. They are extremely toxic to insects, as they act on the insect nervous system and affect feeding habits (Yu 2014). Pyrethroids used in agro-ecosystems also affect the diversity and abundance of natural enemies (Sherawat et al. 2015). Usually, natural enemies such as spiders are more vulnerable to the effects of insecticides. Application of insecticides results in high mortality of spiders in all kinds of agro-ecosystems (Pekar 2012, Bhatti et al. 2013, Rodrigues et al. 2014).
In the current study, the effects of two insecticides, λ-cyhalothrin and Bifenthrin, and one herbicide, Glyphosate, on the mortality, avoidance behavior, foraging behavior, and enzyme activities of N. theisi were studied in the laboratory. The results showed that the selected spider species is susceptible to both insecticides and the herbicide. We recorded 70% mortality in N. theisi against λ-cyhalothrin. Previous studies have reported that λ-cyhalothrin is highly toxic to spiders. Khan et al. (2017) noted 51% mortality against λ-cyhalothrin in Plexippus paykulli, a common jumping spider. Similarly, Dinter (2009) also recorded high mortality in Erigone atra after exposing them to λ-cyhalothrin. Tillman and Mulrooney (2000) found that λ-cyhalothrin is toxic to every natural enemy present in cotton.
Bifenthrin was also found to be toxic to N. theisi, as it caused 40% mortality at its field-rate concentration. Sherawat et al. (2015) reported 65% mortality against Bifenthrin in wolf spiders of wheat fields, i.e., Lycosa terrestris. Tahir et al. (2016) observed 80% mortality in Pardosa sumatrana at the field dose of Bifenthrin. Similarly, Francis and North (2010) also noted high mortality (88.9%) against Bifenthrin among black house spiders. Alzoubi and Cobanoglu (2010) also observed higher mortality in Phytoseiulus persimilis after exposing them to Bifenthrin. Wang et al. (2014) reported that Bifenthrin hinders the insect nervous system and causes paralysis, which is usually followed by death.
The observed mortality in N. theisi against the field rate of Glyphosate was 30%. Pereira et al. (2018) recorded 50% mortality in Cicurina arcuata against Glyphosate. They further reported that the toxicity of Glyphosate is moderate, but it causes high irritability. Contrary to Pereira et al. (2018), Benamu et al. (2007) reported that Glyphosate causes no lethal effects on the spider Alpaida veniliae. Evans et al. (2010) revealed that Glyphosate exposure affects the behavior and survival of arthropods.
We recorded an 80% reduction in prey consumption in N. theisi after exposure to a sublethal dose of λ-cyhalothrin (one-eighth of the field rate), as compared with untreated spiders. Similarly, we noted 60% and 50% reductions in the prey consumption of N. theisi exposed to sublethal doses of Bifenthrin (one-sixth of the field rate) and Glyphosate (one-third of the field rate), respectively. Tahir et al. (2015) observed that Pardosa birmanica consumed less prey after exposure to λ-cyhalothrin. The changes in prey consumption may be due to several factors, i.e., a weakened sensory system, altered taste, and altered potency (Pekar 2017, Petrakova et al. 2016). Petcharad et al. (2018) reported that insecticides might blur insects' senses and reduce olfactory capacity; therefore, they do not recognize the prey, which results in less consumption. Pardosa milvina females change their predatory behavior when placed on a surface treated with a Glyphosate-based herbicide (Wrinn et al. 2012, Sitvarin and Rypstra 2014, Behrend and Rypstra 2018). The interference of pesticides with the feeding behavior of exposed insects may involve different mechanisms (Desneux et al. 2007).
AChE, CarE, and protein contents were measured in spiders that survived the 24-h exposure to the insecticides/herbicide. It was observed that λ-cyhalothrin-, Bifenthrin-, and Glyphosate-treated spiders had increased enzyme levels compared with the untreated control group. Higher levels of these enzymes in the survivors suggest their possible involvement in metabolic resistance against the insecticides/herbicide. Rodrigues et al. (2014) recorded a strong resistance response in Eriopis connexa, which detoxifies λ-cyhalothrin by enzymatic activity. Similarly, Miao et al. (2016) also observed that the AChE activity of Megacopta cribraria was significantly increased by LC40 Imidacloprid. Increased esterase levels have been recorded in different pyrethroid-resistant insect orders, e.g., Hemiptera, Hymenoptera, Diptera, and Lepidoptera (Li et al. 2007, Bass and Field 2011). Recep et al. (2005) also reported that the esterase activity of Tetranychus urticae was increased against Bifenthrin. Van Leeuwen et al. (2007) observed that λ-cyhalothrin exposure increased the level of CarE activity in Tetranychus urticae.
Lower protein contents were observed in the treated groups of spiders than in the control group, indicating that protein contents decreased in treated spiders due to insecticidal stress. Hussain et al. (2009) reported that decreased protein content implies the mobilization of amino acids under insecticide stress to meet energy demands. Similarly, Kliot et al. (2014) reported that metabolic resistance often involves an energy cost in insects and found decreased protein in insects due to the energy demand involved in the activation of detoxification and other defense mechanisms.
It is concluded from the study that λ-cyhalothrin is more harmful than Bifenthrin and Glyphosate. It is suggested that the effects of all pesticides used in agro-ecosystems on beneficial insects should be evaluated before using them in fields. Furthermore, only those compounds should be used in the fields that are pest specific and have the least effect on populations of natural predators. | 2019-03-28T13:02:31.661Z | 2019-03-01T00:00:00.000 | {
"year": 2019,
"sha1": "a90150ecfac3fe27286b0937711ae99b7531572d",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/jinsectscience/article-pdf/19/2/17/33016838/iez024.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "74c9e1a03fc2a281053d9c33c6bfc2d283f02152",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
7682412 | pes2o/s2orc | v3-fos-license | Self-energy correction to the bound-electron g factor in H-like ions
The one-loop self-energy correction to the 1s electron g factor is evaluated to all orders in Zα with an accuracy that is significantly better than that of previous calculations of this correction. As a result, the uncertainty of the theoretical prediction for the bound-electron g factor in H-like carbon is reduced by a factor of 3. This improves the total accuracy of the recent electron-mass determination [Beier et al., Phys. Rev. Lett. 88, 011603 (2002)]. The new value of the electron mass is found to be m_e = 0.000 548 579 909 3(3) u.
within 1.5 standard deviations but is three times more precise. The uncertainty of the electron-mass value of [3] originates equally from the theoretical result for the bound-electron g factor and from the experimental value for the ratio of the electronic Larmor precession frequency and the cyclotron frequency of the ion in the trap. Therefore, any advance in theoretical or experimental investigations will improve the accuracy of the electron-mass value. However, for significant progress one needs to reduce both the theoretical and experimental uncertainties. From the experimental side, an increase of the accuracy by an order of magnitude is anticipated in the near future, as well as an extension of the measurements to higher-Z systems [5]. Investigations of the bound-electron g factor in high-Z systems are of particular importance since they can provide a new determination of the fine structure constant [6,5], nuclear magnetic moments [5], and nuclear charge radii. They would also create a good possibility for testing the magnetic sector of QED in a strong Coulomb field.
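For orientation, the way the electron mass follows from such a measurement can be sketched as below: the measured ratio of the electronic Larmor frequency to the ion cyclotron frequency, together with the theoretical bound-electron g factor and the ion mass, fixes m_e. All numerical inputs in the sketch are rough placeholders, not the published values.

```python
# Sketch of the electron-mass extraction from the bound-electron g factor in a
# hydrogen-like ion (here 12C5+).  All numerical inputs are placeholders; the
# published measurement and theory values should be substituted.
g_theory   = 2.001041590     # bound-electron g factor of the 1s state (placeholder)
ratio_L_c  = 4376.2105       # measured omega_Larmor / omega_cyclotron (placeholder)
m_ion_u    = 11.997257       # mass of the 12C5+ ion in atomic mass units (placeholder)
charge_ion = 5               # ion charge in units of |e|

# omega_L / omega_c = (g/2) * (|e|/q_ion) * (m_ion/m_e)  =>  solve for m_e
m_e_u = (g_theory / 2) * (1 / charge_ion) * m_ion_u / ratio_L_c
print(f"m_e = {m_e_u:.12f} u")
```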
From the theoretical point of view, the leading error of the bound-electron g-factor value for H-like carbon comes from the one-loop self-energy correction. Reducing this uncertainty is the aim of the present investigation. The second major error is due to the two-loop binding QED correction, which is known at present only to the lowest order in Zα [6-8]. Reducing that uncertainty is a serious problem. However, recent progress in calculations of two-loop QED corrections to the Lamb shift, both within the Zα expansion [9] and to all orders in Zα [10], allows us to hope that its solution might be possible in the near future. An important feature of studying the bound-electron g factor is the relative weakness of nuclear effects. Unlike the hyperfine splitting, where the large effect of the distribution of the magnetic moment over the nucleus complicates the identification of one-loop QED effects, for the bound-electron g factor the uncertainty due to nuclear effects is of the order of two-loop binding QED corrections even in the high-Z region. In addition, as shown in [11], the finite-nuclear-size effect can be largely cancelled in a specific difference of the bound-electron g factors for H- and Li-like ions with the same nucleus. Therefore, this difference can, in principle, be calculated to very high accuracy. This fact makes the bound-electron g factor very promising for testing two-loop QED effects by comparing theory and experiment.
The one-loop self-energy correction to the 1s g factor was first evaluated by Blundell et al. [12] and by Persson et al. [13]. The latter work was extended by Beier and co-workers [14], whose result was used in the electron-mass determination [3].
Formal expressions for the one-loop self-energy correction to the bound-state g factor are well known (see, e.g., [15]). The whole correction is conveniently divided into three parts, which are referred to as the irreducible (∆g_ir), the reducible (∆g_red), and the vertex (∆g_ver) contributions. The irreducible part is given by the non-diagonal matrix element of the renormalized self-energy operator Σ_R between the initial state a and the perturbed wave function |δa⟩ defined below.
We use relativistic units (ℏ = c = 1) and the Heaviside charge unit [α = e²/(4π), e < 0] throughout this Letter. The perturbed wave function |δa⟩ is given by the standard first-order perturbation-theory sum over intermediate states, |δa⟩ = Σ_{n≠a} |n⟩⟨n|δV|a⟩/(ε_a − ε_n), where δV(x) = −e α·A_cl(x), A_cl denotes the classical homogeneous magnetic field, A_cl(x) = [B × x]/2, and m_a is the angular-momentum projection of the initial state. The reducible contribution involves the energy derivative of the self-energy operator evaluated on the reference state, and the vertex part is given by the self-energy diagram with the magnetic interaction attached to the internal electron line, where I(ω) = e² α^µ α^ν D_µν(ω) is the operator of the electron-electron interaction, D_µν(ω) stands for the photon propagator, and α^µ = (1, α) are the Dirac matrices. In order to avoid large numerical cancellations, it is convenient to calculate the vertex and the reducible part together. We indicate the sum of these two contributions with the subscript "vr", ∆g_vr = ∆g_ver + ∆g_red. Now we turn to the numerical evaluation of these contributions. We perform our calculations in the Feynman gauge, both for the point and the extended nucleus. In the latter case, the hollow-shell nuclear model was utilized. Since calculations for the point nucleus are easier from the technical point of view, and because of the smallness of the finite-nuclear-size effect, we later discuss mainly the point-nucleus evaluation. Convergence of the extended-nucleus value to the point-nucleus result for small values of Z served as one of the checks of our numerical procedure. The calculation of the irreducible part is quite straightforward. For the point nucleus, the perturbed wave function |δa⟩ can be found analytically by employing the generalized virial relations for the Dirac equation [16]. The corresponding explicit expressions can be found in [17]. The numerical evaluation of the non-diagonal matrix element of the self-energy operator was carried out similarly to that for the self-energy correction to the hyperfine structure [18], within the Green-function technique. The partial-wave expansion converges well in that case, and taking into account 30-50 partial waves is sufficient for the required accuracy (with the rest of the series estimated by polynomial fitting). As an additional cross-check of the evaluation of the irreducible part, we also utilized a modified renormalization procedure, where the energy of the one-potential term is shifted from its physical value (for more details we refer the reader to [18]).
The numerical evaluation of the vertex and reducible parts is more problematic. The standard way to treat corrections of this kind is to separate terms in which bound-electron propagators are replaced with free propagators. We refer to this part as the 0-potential contribution ∆g_vr^(0). This term contains ultraviolet divergences that can be covariantly separated and cancelled in momentum space. The remainder ∆g_vr^(1+) is ultraviolet finite and can be calculated directly in coordinate space, as in [12]. However, it turns out that, due to a strong cancellation between the reducible and the vertex part, the contribution of high partial waves is relatively large for low Z, and the corresponding expansion is slowly converging.
For gaining better control over the partial-wave summation, it was proposed in [13,14] to separate from ∆g_vr^(1+) a part containing (besides an interaction with the magnetic field) one Coulomb interaction with the nucleus in the electron propagators, the so-called 1-potential contribution ∆g_vr^(1). The authors demonstrated that the partial-wave expansion of the remainder (the many-potential contribution ∆g_vr^(2+)) converges much better than that for ∆g_vr^(1+). For the evaluation of the 1-potential term, a separate numerical scheme was developed in [13,14], based on an analytical treatment of radial integrals. This allowed the authors to extend the partial-wave summation up to l_max = 120. However, the unevaluated tail of the expansion still yields a significant contribution in that case. In order to reach the accuracy ascribed to the 1-potential term in [14] for carbon, one should estimate the tail of the series with an uncertainty of about 1%. This is a potentially dangerous point of that numerical evaluation.
The central point of the present calculation is a different treatment of the 1-potential term. We evaluate it directly in momentum space, without utilizing the partial-wave expansion, and in this way eliminate the uncertainty due to the estimation of the tail of the series. The next difference from the calculations [13,14] consists in the treatment of the magnetic interaction in momentum space. The Fourier transform of the classical magnetic potential involves the gradient of a δ function in momentum space. In [13,14], the δ function was replaced by a continuous Gaussian function with a small but finite regulator. In our evaluation of the 0- and 1-potential terms, we employ this representation (5) directly and evaluate the corresponding corrections after integration by parts. (For the 0-potential term, the same approach was utilized earlier in [12].) In the case of the 0-potential term, this treatment requires additional analytical work, but finally, instead of a five-dimensional numerical integration (as in [13,14]), we end up with a single integral that can be evaluated to arbitrary precision. The analytical part of the evaluation of the 1-potential term is quite tedious, but the overall δ function simplifies the calculation greatly. Finally, the 1-potential term is represented by a four-dimensional integral, whose numerical evaluation is relatively easy.
The calculation of the many-potential term was carried out in a manner similar to that in [18]. The many-potential part was represented by a point-by-point difference of the unrenormalized, the 0-potential, and the 1-potential term. In addition, we also subtract the infrared-divergent contribution of the reference state from the vertex and reducible parts.
This contribution was then evaluated separately, carrying out the ω integration analytically and explicitly cancelling the divergences in the sum of the reducible and vertex parts. Care should be taken in the evaluation of the many-potential correction, since a large numerical cancellation occurs in the point-by-point difference. In order to avoid the appearance of pole terms that lead to additional numerical cancellations, we employ a suitably chosen contour for the ω integration. The results of our numerical evaluation are presented in Table I. In order to isolate the one-loop binding self-energy correction, we subtract the free-electron value α/π [19] from the total self-energy correction. The resulting binding correction is compared with the data from [14]. For all cases except Z = 20, the results agree with each other within the given error bars. A more detailed comparison is presented in Table II for the two most important cases, carbon and oxygen. A certain deviation can be observed for the 1-potential and many-potential contributions, which is largely cancelled in the sum. We do not have any explanation of this fact at present. As an additional check of our calculation, we fitted the higher-order remainder of our data; the results for the higher-order contribution are presented in Fig. 1, together with those from [14]. A least-squares fit of our data yields a_41 = −7.0(8) and a_40 = −10(2).
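The explicit form to which the higher-order remainder is fitted is not reproduced in the extracted text; the sketch below assumes the standard Zα-expansion ansatz suggested by the coefficient labels a_41 and a_40, i.e. terms (Zα)⁴ ln[(Zα)⁻²] and (Zα)⁴, and uses synthetic data purely to illustrate the least-squares procedure.

```python
# Sketch of the least-squares fit described above.  The ansatz below is the
# standard Z*alpha expansion suggested by the labels a41, a40 (an assumption),
#     G_ho(Z) = (Z*alpha)**4 * ( a41 * ln[(Z*alpha)**-2] + a40 ),
# and the "data" points are synthetic placeholders, not the values of Table I.
import numpy as np
from scipy.optimize import curve_fit

ALPHA = 1 / 137.035999

def ansatz(Z, a41, a40):
    x = Z * ALPHA
    return x**4 * (a41 * np.log(x**-2) + a40)

Z_values = np.array([6.0, 8.0, 12.0, 16.0, 20.0])
G_ho = ansatz(Z_values, -7.0, -10.0) * (1 + 0.01 * np.random.randn(Z_values.size))

popt, pcov = curve_fit(ansatz, Z_values, G_ho, p0=(-5.0, -5.0))
print("a41, a40 =", popt, "+/-", np.sqrt(np.diag(pcov)))
```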
In Table III we present individual contributions to the 1s electron g factor for the two most important cases, H-like carbon and oxygen. The Dirac point-nucleus value and the free-electron part of the one-loop QED correction are evaluated utilizing the recommended value of the fine-structure constant from [4]. The finite-nuclear-size correction is calculated numerically and is in good agreement with previous evaluations [14,20]. The so-called "electric-loop" part of the one-loop vacuum-polarization correction is also re-evaluated in this work. The corresponding results agree with the earlier numerical [13,14] and analytical [21] calculations. The remaining part of the vacuum-polarization ("magnetic-loop") correction is shown to be negligible for the case under consideration [21]. The α² QED correction includes the existing Zα-expansion terms for the QED correction of second order in α [6,7] and the known free-QED terms of higher orders in α (see, e.g., [15]). Its relative uncertainty was estimated as the ratio of the part of the one-loop QED correction that is beyond the (Zα)² approximation to the part that is within the (Zα)² approximation, multiplied by a factor of 1.5. The recoil correction incorporates the total recoil contribution of first order in m/M, calculated to all orders in Zα in [22], and the known corrections of orders (m/M)² and α(m/M) [23].
In summary, our evaluation of the one-loop self-energy correction to the 1s electron g factor in H-like ions improves the accuracy of the theoretical prediction by a factor of 3 for carbon and by a factor of 2 for oxygen. This reduces the total uncertainty of the electron-mass determination of [3]. The new value for the electron mass is found to be m_e = 0.000 548 579 909 29 (29)(8) u, where the first uncertainty originates from the experimental value for the ratio of the electronic Larmor precession frequency and the cyclotron frequency of the ion in the trap, and the second error comes from the theoretical value for the bound-electron g factor.
We would like to thank Th. Beier and
[Table I caption: results are presented in units of 10⁻⁶ (ppm). The point and the extended nuclear models are indicated with the labels "pnt." and "ext.", respectively. The binding correction is obtained by subtracting the free-electron value α/π from the total self-energy correction ∆g_SE. Only the total numerical error of the present evaluation is indicated.] | 2014-10-01T00:00:00.000Z | 2002-05-22T00:00:00.000 | {
"year": 2002,
"sha1": "3b116c3dcc077f85cd37271a62d773735ece841a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/0205245",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "fb355800276d42d0d67ad53a6617a13c83838e88",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
78382917 | pes2o/s2orc | v3-fos-license | Experimental intraoperative use of perfluorotributylamine, perfluorodecaline, and perfluoropolyether
ABSTRACT
The perfluorocarbon liquids (PFCLs) are a group of compounds with a specific gravity greater than water that are used as an adjunct in the surgical management of complicated vitreoretinal diseases. Thirty-two pigmented rabbits underwent vitrectomy with injection of one of three PFCLs or saline into the vitreous cavity: eight eyes received perfluorotributylamine (PFTA); eight eyes received perfluorodecaline (PFDC); eight eyes received perfluoropolyether (PFPE); the remaining eight eyes, which served as controls, received saline. One hour after placement in the eyes, the PFCLs were removed from the vitreous cavity and the eyes were filled with saline. The rabbits were then euthanized and the eyes enucleated and processed for histologic examination. Light microscopy demonstrated no significant difference between eyes with the PFCLs and those of control animals. Electron microscopy revealed only minor changes, consisting of a few irregularly shaped defects in the outer segment discs of photoreceptors, in the PFDC group only. These findings suggest that short-term intraoperative use of PFTA, PFPE, and PFDC, at least in this experimental animal model, is not associated with any significant retinal pathology.
INTRODUCTION
Vitreoretinal surgeons have long sought a vitreous substitute that is transparent, viscous, and heavier than water, that does not interfere with refraction and, most of all, is not toxic to the retina or other structures of the eye. Currently, silicone oil is most widely used as a vitreous substitute, although there are some problems with its use and controversy over its possible toxicity continues(1-8). In addition, silicone oil has a low specific gravity, and thus is not of benefit in the repair of inferior breaks or for intraoperative management of complicated retinal detachments(9,10).
Liquid perfluorochemicals meet many of the criteria that are desirable in a vitreous substitute, and several have been tested as potential vitreous substitutes(11-18). Unfortunately, a number of these agents cause ocular toxicity or other problems(12,18).
Recently CHANG and coworkers(19-22) described the use of perfluorocarbon liquids (PFCLs) for unfolding retinal flaps and in the management of proliferative vitreoretinopathy. PFCLs, when used experimentally, tend to cause retinal toxicity at about 30 days following injection(12,14,16), and thus are used only in the intraoperative period, being removed once the surgical goal of retinal reattachment has been achieved.
The purpose of this study was to evaluate any structural changes in the retina caused by three different PFCLs after one hour in rabbit eyes.
MATERIALS AND METHODS
All experiments adhered to the Association for Research in Vision and Ophthalmology Resolution on the Use of Animals in Research. Thirty-two Dutch cross rabbits (2 to 3 kg) were used to evaluate the one-hour ocular tolerance to three different PFCLs injected intraoperatively. Perfluorotributylamine (PFTA), perfluorodecaline (PFDC), and perfluoropolyether (PFPE) were the PFCLs studied. Each of these PFCLs was injected into eight eyes; as controls, eight eyes were similarly injected with saline.
Maximum mydriasis was achieved by topical application of phenylephrine hydrochloride 2.5% and tropicamide 1%. Animals were anesthetized with an intramuscular injection of 0.75 mL/kg ketamine hydrochloride (100 mg/mL) and xylazine hydrochloride (20 mg/mL) in a 1:1 solution. Proparacaine hydrochloride 0.5% was used as a topical anesthetic. Proptosis was obtained by a retrobulbar injection of 1.0 cc of lidocaine hydrochloride and sterile water (1:1 solution). Eyes were prepared for surgery in a sterile fashion. Pars plana vitrectomies were performed using an Ocutome system under an operating microscope (Zeiss Co., Model OPMI-6, Oberkochen, Germany) using a planoconcave vitrectomy lens. Following a superior fornix-based peritomy, two 20-gauge sclerotomies were made at a distance of 2-3 mm posterior to the corneoscleral limbus at the 10 and 2 o'clock positions. A 4-mm infusion cannula, attached to a bottle of saline, was fixed in the temporal sclerotomy with a preplaced 6/0 vicryl suture. The posterior vitreous was removed as completely as possible prior to the injections. The PFCLs were transferred from the sterile containers to a 5-cc syringe through a 20-gauge needle and permeable membrane filters (Millipore, pore size = 0.22 μm). Fluid-fluid exchange was performed by injecting each studied PFCL intravitreally through the infusion cannula. Since PFCLs are heavier than saline, the former moved to the posterior pole, expelling the lighter saline through the nasal sclerotomy. Injection of the PFCL continued until it was seen to be exiting from the nasal sclerotomy site. The infusion cannula was then removed and the sclerotomy closed with a 6/0 vicryl suture. If the eye was hypotonous, a small amount of PFCL was injected through the pars plana via a 25-gauge needle. The entire exchange procedure required approximately 2 cc to 3 cc of the heavy liquid as replacement; with this method, we were able to replace 50% to 60% of the vitreous with one of the three above-mentioned liquids. The PFCLs were removed from the eye exactly one hour after injection. To achieve this, a 25-gauge needle, connected to an elevated (2 ft) bottle of infusion fluid, was introduced into the eye via the temporal pars plana. The nasal sclerotomy was reopened and a suction probe was used to aspirate the PFCL. While the infusion was flowing, the suction probe was directed to the posterior pole and the PFCL aspirated with 100 mmHg suction pressure. Following this procedure, the needle and the probe were withdrawn; the sclerotomy was closed with a 6/0 vicryl suture and the conjunctiva was stroked back to the limbus. Animals were then sacrificed with an intracardiac overdose of pentobarbital (100 mg/kg) under deep anesthesia, and the eyes were enucleated for histopathologic studies.
For light and electron microscopy, eyes were fixed in half-strength Karnovsky's fixative. Specimens for light microscopy were embedded in Historesin (LKB), and 3 μm sections were cut and stained with toluidine blue. For electron microscopy, tissues were processed in Polybed and 70 Å sections were cut, post-stained with uranyl acetate and lead citrate, and viewed with a Zeiss EM 10 electron microscope.
RESULTS
No complications were encountered during injection or after removal of the PFCLs. All of the eyes, both experimental and control, demonstrated a similar appearance on light microscopy (Fig. 1). The size, shape and number of nuclei in both the outer and inner nuclear layers were within normal limits. The thickness of the retina corresponded to that of normal retina, and the ganglion cell layer showed no abnormalities.
However, two of the eight eyes in which PFDC had been injected showed irregularly shaped defects in the outer segment discs of the photoreceptors.
DISCUSSION
The search for a new and better vitreous substitute has intensified recently because of problems related to the use of intraocular silicone oil that have recently been reported(23-25). Perfluorocarbon derivatives are generally inert, and many of them are used as blood substitutes because they are good carriers of oxygen and carbon dioxide(26-29).
PFPE, PFTA, and PFDC are fluorinated, synthetic, transparent compounds that provide excellent thermal and chemical stability. Their high specific gravity (1.88 to 1.94 g/mL) provides excellent mechanical tamponade, especially for inferior retinal breaks. Their low surface tension (16-20 dynes/cm) results in a high interface tension with saline, which limits their passage through retinal breaks. Their lower viscosity makes them very easy to handle, and they can be removed easily from the eye at any time during the surgical procedure. The higher viscosity (68 centistokes) of PFPE prevents formation of the fish-egg phenomenon, which is so common with PFTA(14) and PFDC(15).
The differences in physical characteristics between these three PFCLs were not clinically significant. Intraoperatively, the PFCL/saline interface was always visible, allowing easy fluid-fluid exchange. The refractive index is slightly dissimilar to saline, but this did not alter the refractive error of the eye, and conventional contact lenses could be used throughout surgery. PFDC had a slightly greater tendency to disperse during injection, and the interface was not as easily seen because its refractive index was more similar to that of saline. Removal of all of these PFCLs had to be done slowly and progressively, especially in the PFPE-injected eyes. We did not observe the formation of small bubbles (fish-egg phenomenon) when using these liquids in conjunction with the infusion fluid utilized in our experiments. The complications resulting from the intraoperative use of PFCLs appear to be minimal. In some cases, as the liquid was injected, some dispersion did occur, with small droplets of PFCL being visible around the edge of the large bubble. Upon contact, however, these small droplets tended to coalesce with the large bubble. Furthermore, the low viscosity of the PFCLs used in this study permitted use of conventional 20-gauge microsurgical instruments. In a previous study using PFTA, the outer segments of rabbit retina demonstrated irregularly shaped defects, referred to as "moth-eaten" by the authors, that reverted to normal after removal of the PFCL(14). In another study, TERAUCHI and co-workers(16) observed toxic effects in eyes in which PFTA was used as a vitreous substitute for one month, but no structural changes were found if PFTA was removed from the eye one hour after injection. The irregularly shaped defects in the outer segment discs noted in our study were similar to those described by CHANG et al(14). Since these changes are reported to be reversible, however, this effect may not preclude the use of PFDC intraoperatively.
We believe our results support the intraoperative use of these PFCLs. Our findings suggest that one-hour exposure of the retina to these PFCL bubbles is not associated with any significant toxic effect in the eyes of experimental rabbits. Additional studies are underway to further confirm the nontoxic nature of these PFCLs when used as short-term substitutes for vitreous. | 2019-03-16T13:05:42.904Z | 1992-01-01T00:00:00.000 | {
"year": 1992,
"sha1": "c0e79e9636e0961f5c79f589a61c12b5e16b45e5",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5935/0004-2749.19920027",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "3d046ea27eea9db06a16f49702e03699a770bec3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248097818 | pes2o/s2orc | v3-fos-license | A multidisciplinary approach to peritoneal metastasis from hepatocellular carcinoma: clinical features, management and outcomes
Aim of the study Hepatocellular carcinoma (HCC) is a lethal malignancy with heterogeneous behavior determined by liver function, clinical presentation and treatment response. Peritoneal metastasis (PM) from HCC is rare and management is challenging. We aim to report a cohort of patients with advanced HCC and describe demographic characteristics, treatment and outcomes of patients with PM. Material and methods We analyzed data from a retrospective cohort of patients with HCC. Patients with PM were analyzed individually. Baseline characteristics, treatment strategy and median overall survival (OS) with 95% confidence interval (CI) were reported. Results 238 patients with advanced HCC were evaluated. Eleven patients had PM: 7 patients were treated with systemic treatment and 4 were treated with upfront peritonectomy followed by systemic treatment at recurrence. These 4 patients had well-preserved liver function and low disease burden and were younger compared to the total cohort. The median time to recurrence after peritonectomy was 30.25 months (interquartile range [IQR]: 13.53-46.92): 3 of them presented peritoneal recurrence (2 with diffuse peritoneal spread and 1 with concomitant hepatic recurrence) and 1 presented pulmonary recurrence. Overall, patients with PM showed similar OS compared to patients with other metastatic sites (11.8 months; 95% CI: 1.5-19.8 vs. 8 months; 95% CI: 6.7-10, p = 0.901). Patients with PM treated with upfront surgery had a median OS of 60 months (95% CI: 16.7-not reached). Conclusions Resection of PM from HCC may provide long-term survival in selected patients. A multidisciplinary approach is the optimal strategy for managing PM from HCC.
Introduction
Hepatocellular carcinoma (HCC) is a highly lethal malignancy, with a 5-year survival rate of 5-15% [1,2]. The poor prognosis of HCC is related to the underlying liver disease and to the fact that around 50% of patients are diagnosed at an advanced stage, when no curative alternatives are feasible [3]. In these cases, systemic therapies aim to improve survival by delaying disease progression and clinical deterioration.
The upfront standard treatment for patients with extrahepatic spread is based on drugs tested in phase III randomized trials. The combination of atezolizumab (an anti-PD-L1 antibody) and bevacizumab (an anti-VEGF antibody) has been the standard of care since 2020 [4], while the multikinase inhibitors sorafenib and lenvatinib are reasonable options in patients ineligible for immunotherapy or in regions where the combination is not available [5].
Hepatocellular carcinoma often metastasizes through hematogenous spread, lymphatic dissemination and direct invasion. The usual sites of metastasis are lungs, bones, lymph nodes and adrenal glands. Nevertheless, HCC can rarely present with peritoneal metastasis (PM) in 2-15% of cases [6,7]. Particular risk factors have been suggested to be associated with PM, such as tumor rupture or needle tract seeding following biopsies or percutaneous ablation procedures. The prognostic impact of peritoneal dissemination is not well established [8,9]. Data suggest that liver function and intrahepatic disease control are determinants of outcomes in patients with PM [10]. Therefore, the optimal approach for this subset of patients should be based on a multidisciplinary approach and focused on maintaining a preserved hepatic function, rational use of systemic treatments and surgical approaches in selected cases. The aim of this study is to report clinical-pathological characteristics and outcomes of patients with HCC and peritoneal spread treated through multidisciplinary decisions.
Study design and participants
We evaluated a retrospective cohort of patients with advanced HCC treated between October 2009 and January 2018 at our institution. All patients included in the analysis met diagnostic criteria for HCC based on radiological and/or histological findings [11]. Clinical characteristics related to underlying liver disease and information about treatments and clinical outcomes were collected from medical records.
Patients were excluded in case of: 1) diagnosis of fibrolamellar HCC or mixed hepatocellular tumors, 2) insufficient data in medical records, or 3) loss of follow-up that impair data analysis.
The site of metastatic spread was determined according to the baseline imaging routinely performed: abdominal and thoracic computed tomography, abdominal magnetic resonance imaging and bone scintigraphy.
Patients were further divided into two groups: 1) those with PM and 2) those without PM. Among patients with PM, we identified those patients who have been submitted to peritonectomy. Comparisons of features and outcomes were performed between patients with PM vs. no PM and between patients with PM who were treated with systemic treatment or upfront peritonectomy. The study was approved by the institutional ethics committee (protocol number 3.807.496).
Treatment
According to the institutional protocol, the standard management of advanced stage disease is systemic therapy. Nevertheless, weekly multidisciplinary boards (composed of surgeons, clinical oncologists, hepatologists, radiologists and pathologists) discuss individual cases for which personalized management can be offered. Patients with low burden disease, preserved liver function, no significant comorbidities and features that suggest an indolent biological behavior (late recurrence, absence of symptoms and low α-fetoprotein serum levels) are considered for local modalities and/ or surgery for metastatic sites. Before 2008, the use of systemic treatment was limited due to the lack of approved drugs. In 2008, sorafenib was approved based on the results of the SHARP trial [12].
During the study period (2009-2018), the recommended systemic treatment was sorafenib at a dose of 400 mg twice daily until disease progression, unacceptable adverse events or death. The use of reduced doses was permitted from the beginning and during the course of treatment (depending on tolerability and side effects) at the discretion of treating physicians. The follow-up consisted of regular clinical visits, laboratory tests every three to four weeks and assessment of radiological response (computed tomography or magnetic resonance) every eight to twelve weeks.
In patients with PM for whom peritonectomy was indicated, the procedure started with diagnostic laparoscopy to assess feasibility and disease burden. If a decision was made to proceed with surgery, a midline laparotomy was performed and the peritoneal carcinomatosis index (PCI) was registered. PCI is a widely used system that quantitatively describes the distribution and size of the peritoneal spread throughout 13 abdominal regions: the size of the largest implant is scored for each region and the sum of each region's score results in the PCI score, with a range of 1-39 [13]. Due to the retrospective design, there was no predefined PCI cutoff, although a PCI higher than 16 was considered unsuitable for peritonectomy by most of the board members. Tumor debulking was performed as dictated by the disease burden and distribution, which included resection of intra-abdominal organs, peritoneal nodules, affected peritoneal surfaces and suspected lymph nodes. Post-surgery surveillance consisted of physical examination, imaging tests and α-fetoprotein (AFP) measurement every 3-4 months.
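As a small illustration of the PCI computation described above (13 abdominal regions, a lesion-size score per region, summed to a maximum of 39), a sketch follows; the region list follows the standard Sugarbaker layout and the example scores are invented, not patient data.

```python
# Peritoneal carcinomatosis index (PCI): lesion-size score (0-3) summed over the
# 13 abdominal regions described above.  The example scores below are illustrative.
REGIONS = [
    "central", "right upper", "epigastrium", "left upper", "left flank",
    "left lower", "pelvis", "right lower", "right flank",
    "upper jejunum", "lower jejunum", "upper ileum", "lower ileum",
]

def pci(lesion_size_scores):
    """lesion_size_scores: dict region -> score 0-3 (0 = no implant, 3 = >5 cm or confluent)."""
    assert set(lesion_size_scores) <= set(REGIONS)
    assert all(0 <= s <= 3 for s in lesion_size_scores.values())
    return sum(lesion_size_scores.values())

example = {r: 1 for r in REGIONS[:5]}   # five regions with small implants
print(pci(example))                     # -> 5, out of a maximum of 39
```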
Statistical analysis
Continuous variables were expressed as mean, median, ranges or interquartile intervals. Categorical variables were expressed as frequency. Categorical variables were compared using the χ 2 test or Fisher's exact test where appropriate. Overall survival (OS) was estimated using the Kaplan-Meier method and curves were compared by log-rank test. Univariate and multivariate analyses using the Cox proportional hazards model were performed to evaluate prognostic factors. Variables were included in the multivariate analyses if they presented a p value < 0.05 in the univariate analysis and were not associated with each other. Data were evaluated using STATA software version 15.0.
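A minimal sketch of the survival workflow described here (Kaplan-Meier estimate, log-rank comparison, Cox model), written with the Python lifelines package rather than the STATA software actually used for the analysis; the data frame is entirely hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical toy data: overall survival in months, event indicator, PM status
df = pd.DataFrame({
    "os_months": [11.8, 8.0, 17.9, 60.0, 5.2, 9.4, 14.1, 6.7],
    "death":     [1, 1, 1, 0, 1, 1, 1, 1],   # 1 = death observed, 0 = censored
    "pm":        [1, 0, 1, 1, 0, 0, 1, 0],   # peritoneal metastasis yes/no
})

# Kaplan-Meier estimate of overall survival
kmf = KaplanMeierFitter()
kmf.fit(df["os_months"], event_observed=df["death"])
print(kmf.median_survival_time_)

# Log-rank comparison of PM vs. no-PM groups
pm, no_pm = df[df.pm == 1], df[df.pm == 0]
print(logrank_test(pm.os_months, no_pm.os_months, pm.death, no_pm.death).p_value)

# Cox proportional hazards model with PM status as covariate
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
print(cph.summary[["coef", "p"]])
```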
Patients with peritoneal metastasis
Eleven patients had PM, representing 9.8% of the patients with extrahepatic disease and 4.6% of the total cohort. Two patients had de novo HCC with peritoneal spread, while 9 other patients had recurrent PM after having received locoregional treatment: 2 patients had been submitted to liver transplantation, 1 patient had received transarterial chemoembolization, 5 patients had been treated with hepatic resection and 1 patient received percutaneous ablation combined with transarterial chemoembolization. Ten out of 11 patients had well-preserved liver function, while 1 (9.1%) patient had Child-Pugh B.
There were no significant differences between patients with and without PM regarding clinical and pathological characteristics, except that those patients with PM were significantly younger than those with no PM (p = 0.035). Table 1 shows the baseline demographic characteristics of the cohort and Table 2 details the 11 patients with PM.
Patients submitted to peritonectomy
Four patients with PM were submitted to peritonectomy as the upfront treatment for metastatic disease (Fig. 1). These cases were individually discussed in multidisciplinary tumor boards and clinical characteristics, tumor burden and feasibility of surgery were considered for decision making. Table 2 shows the main baseline features of this subgroup. All 4 patients were Child-Pugh A, had no macrovascular invasion, no ascites and a median alpha-fetoprotein serum level of 53 ng/ml (range: 11.9-188). Three patients had peritoneal disease as the only site of disease and one patient had both PM and lymph node involvement. All patients who were submitted to surgery had a R0 resection. There was no surgery-related mortality.
All 4 patients had recurrences, with a median time from surgery to recurrence of 30.25 months (interquartile range [IQR]: 13.53-46.92): 3 of them with PM (2 diffuse peritoneal spread, 1 with concomitant hepatic recurrence) and 1 with lung recurrence.
Regarding patients with PM (n = 11), the median OS was 17.9 months (95% CI: 2.77-not reached [NR]). Although numerically superior, this was not statistically significantly different from the OS of patients with other metastatic sites (Fig. 2). In the last follow-up, 2 patients in the PM group were still alive and under systemic treatment; both had been submitted to peritonectomy (Table 3).
Discussion
This article describes a large cohort of patients with advanced HCC and detailed clinical characteristics and outcomes of patients with PM. A subset of patients with PM was treated with surgery and showed long-term recurrence-free survival. These patients presented an encouraging survival time of 60 months, which highlights the need to individualize treatment strategies for HCC patients with PM.
According to international guidelines, the management of patients with extrahepatic spread is based on systemic treatment [14,15]. In the past years, a variety of novel drugs were incorporated and the prognosis of advanced HCC improved significantly [4,16]. Prospective studies in advanced HCC reveal that around 10-30% of patients may present deep and durable responses, which opens the opportunity for discussing the combination of systemic treatment and local modalities, such as resection of metastatic disease [17]. Moreover, it is clear that biological behavior is a crucial determinant of the benefit from a more aggressive strategy in advanced HCC, such as surgical treatment of metastasis. Peritoneal spread is reported to occur in up to 2-18% of cases in autopsy series [18,19]. The mechanisms that drive PM have not been well established, although it is suggested that malignant cells may disperse into the abdominal cavity during spontaneous HCC rupture, percutaneous biopsy and percutaneous ablation treatments [20,21]. These risk factors were observed in a few patients from our cohort, but a clear association was not possible due to the small sample size. Other unrecognized mechanisms may also play a role in the genesis of PM, such as the differentiation grade and the presence of macrovascular invasion, which deserve further investigation [18].
Some groups have published small series and case reports suggesting that, in addition to systemic treatment, surgical treatment of PM may provide favorable outcomes [22]. Some of these studies also reported the feasibility of adding hyperthermic intraperitoneal chemotherapy (HIPEC) in diverse malignancies [23], but the use of HIPEC requires further evaluation in prospective studies to determine safety, since HCC patients have limited tolerance of cytotoxic chemotherapy due to coexisting liver disease. In our series, 4 patients were treated with upfront surgery, which provided a long recurrence-free survival time of around 30 months. At recurrence, patients received systemic treatment, with a similar response compared to those who received systemic treatment as the first treatment for PM or other sites of disease.
A close follow-up after surgery ensured that all recurrences were detected before clinical deterioration and liver impairment, so that patients had a favorable performance status to receive systemic treatment.
A key aspect in our study is that these 4 patients had favorable baseline characteristics that encouraged a more aggressive approach. All 4 patients had no severe comorbidities, preserved liver function, no ascites, no concomitant hepatic disease, no major vascular invasion, low serum α-fetoprotein levels and a PCI < 10. Additionally, this subgroup was younger compared to the total cohort.
Limitations of the present study are the small sample size and its retrospective nature. Nevertheless, multidisciplinary tumor boards involving all the specialties dedicated to liver cancer should dictate the best approach for HCC patients with PM who present other factors of better prognosis, such as Child-Pugh A, low tumor burden and stable hepatic disease.
Conclusions
Peritoneal metastases from HCC are rare and include a heterogeneous range of clinical presentation, biological behavior and outcomes. The prognosis varies according to clinical characteristics and response to treatment. A multidisciplinary approach is mandatory for selecting patients who derive benefit from surgery for PM and a close follow-up allows the early detection of recurrence and proper indication for systemic treatment.
Disclosure
The authors declare no conflict of interest. | 2022-04-13T05:28:24.620Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "d08ebfd63b8f484d9fc95dde0ccb637a141715f5",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "d08ebfd63b8f484d9fc95dde0ccb637a141715f5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256041757 | pes2o/s2orc | v3-fos-license | The three- and four-Higgs couplings in the general two-Higgs-doublet model
We apply the unitarity bounds and the bounded-from-below (BFB) bounds to the most general scalar potential of the two-Higgs-doublet model (2HDM). We do this in the Higgs basis, i.e. in the basis for the scalar doublets where only one doublet has vacuum expectation value. In this way we obtain bounds on the scalar masses and couplings that are valid for all 2HDMs. We compare those bounds to the analogous bounds that we have obtained for other simple extensions of the Standard Model (SM), namely the 2HDM extended by one scalar singlet and the extension of the SM through two scalar singlets.
Introduction
In order to unveil the detailed mechanism of electroweak symmetry breaking it is crucial to measure the self-couplings of the boson with mass 125 GeV discovered in 2012 at the LHC [1,2]. In this paper we call that boson h_1. The Standard Model (SM) predicts h_1 to be a scalar and predicts its cubic and quartic couplings g_3 and g_4, which we define through
$$\mathcal{L} = \cdots - g_3\,(h_1)^3 - g_4\,(h_1)^4 . \qquad (1.1)$$
g_3 and g_4 in the SM
The Standard Model has only one scalar doublet φ_1. We write it as in equation (1.2), where v is the VEV, which is real and positive, and G^+ and G^0 are (unphysical) Goldstone bosons. In the SM, H coincides with the observed scalar h_1. The scalar potential is given in equation (1.3). The minimization condition of V is μ_1 = −λ_1 v². Therefore, in the unitary gauge, where G^± and G^0 do not exist, the potential takes the form (1.4). The second term in the right-hand side of equation (1.4) indicates that the squared mass M_1 of the observed scalar is given by M_1 = 2λ_1 v². Therefore, g_3 and g_4 are fixed in terms of M_1 and v, cf. equation (1.5). It should be noted that the sign of g_3 implicitly depends on the sign of h_1. We fix that sign by noting that the covariant derivative of φ_1 gives rise to a term coupling W^+_μ W^{μ−} to h_1 with strength g² v/√2, which is positive.
2 The Standard Model plus two singlets
We consider the Standard Model with the addition of two real SU(2)×U(1)-invariant scalar fields S_1 and S_2.¹ We assume two symmetries S_1 → −S_1 and S_2 → −S_2. We call this model the SM2S.² The scalar potential is given in equations (2.1a)-(2.1c).
Unitarity conditions
We derive the unitarity conditions on the parameters of V_4.³ We follow closely the method of ref. [33]. We write the components of the scalar doublet as complex fields a and b. There are seven two-particle scattering channels (Q is the electric charge, T_3 is the third component of weak isospin).
¹ In appendix A we treat the simpler case of the HSM, viz. the Standard Model with the addition of only one real gauge singlet.
² The SM2S has already been mentioned in the literature as a model for Dark Matter, see refs. [27-30].
³ Strictly speaking, the unitarity conditions derived and utilized in this paper are the ones valid in the limit of infinite Mandelstam parameter s. For finite s one must take into account the trilinear vertices that are induced from the quartic vertices when one substitutes one of the fields by its VEV. The unitarity conditions then become s-dependent and may be either more or less restrictive than the conditions in the limit of infinite s. See refs. [31,32].
In order to derive the unitarity conditions one must write the scattering matrices for pairs of one incoming state and one outgoing state with the same Q and T_3. Let the incoming state be xy and let the outgoing state be zw, where x, y, z, and w may be either a, a*, b, b*, S_1, or S_2. The corresponding entry in the scattering matrix is the coefficient of xyz*w* in V_4, with the following additions: For each n identical operators in xyz*w* there is an additional factor n! in the entry.
If x = y there is an additional factor 2^{−1/2} in the entry.
If z = w there is an additional factor 2^{−1/2} in the entry.
One finds in this way the scattering matrices for the channels 1, 2, 3, and 4, for the channels 5 and 6, and for channel 7; the matrix (2.6) is similar to a simpler matrix. The unitarity conditions are the following: the eigenvalues of all the scattering matrices should be smaller, in modulus, than 4π. Thus, in our case, the eigenvalues of the matrices just mentioned should have moduli smaller than 4π.
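To make the numerical application of this criterion concrete, the following sketch (in Python, with illustrative placeholder matrices rather than the actual scattering matrices of this section) checks that every eigenvalue of a list of matrices has modulus below 4π.

```python
import numpy as np

FOUR_PI = 4.0 * np.pi

def unitarity_ok(matrices, bound=FOUR_PI):
    """True if every eigenvalue of every scattering matrix has modulus < bound."""
    for m in matrices:
        eigvals = np.linalg.eigvals(np.asarray(m, dtype=complex))
        if np.any(np.abs(eigvals) >= bound):
            return False
    return True

# Illustrative couplings and matrices (placeholders, not the matrices of this paper).
lam, psi1, psi2, psi3, xi1, xi2 = 0.3, 1.0, 1.2, 0.4, 0.5, 0.6
example_matrices = [
    np.array([[6.0 * lam, xi1, xi2],
              [xi1, 3.0 * psi1, psi3],
              [xi2, psi3, 3.0 * psi2]]),
    np.array([[2.0 * lam, xi1],
              [xi1, psi3]]),
]

print(unitarity_ok(example_matrices))
```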
Bounded-from-below conditions
One may write V_4 as a quadratic form in X = φ_1†φ_1, Y = S_1², and Z = S_2², which are positive definite quantities independent of each other. In order for V_4 to be positive the square matrix in equation (2.10) must be copositive [34]. A real symmetric matrix M is copositive if x^T M x > 0 for any vector x with non-negative components. A necessary condition for a real n × n matrix to be copositive is that all its (n − 1) × (n − 1) principal submatrices are copositive too.⁴ Thus, the 2 × 2 principal submatrices of the matrix in equation (2.10) must be copositive; in order for the full 3 × 3 matrix in equation (2.10) to be copositive an additional BFB condition is required [35].
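A hedged numerical illustration of these requirements is sketched below: the 2 × 2 principal submatrices are tested with the standard analytic copositivity criterion (non-negative diagonal entries and M_12 + √(M_11 M_22) ≥ 0), while the full 3 × 3 matrix is probed approximately by sampling random vectors with non-negative components. The matrix entries are placeholders, not the couplings of equation (2.10).

```python
import numpy as np

def copositive_2x2(m):
    """Analytic copositivity criterion for a symmetric 2x2 matrix."""
    a, b, c = m[0, 0], m[1, 1], m[0, 1]
    return a >= 0 and b >= 0 and c + np.sqrt(a * b) >= 0

def copositive_sampled(m, n_samples=200_000, seed=1):
    """Approximate check: x^T M x >= 0 for random vectors with non-negative entries."""
    rng = np.random.default_rng(seed)
    x = rng.random((n_samples, m.shape[0]))
    return bool(np.all(np.einsum('ij,jk,ik->i', x, m, x) >= 0.0))

# Placeholder symmetric 3x3 matrix standing in for the matrix of equation (2.10).
M = np.array([[0.5, -0.1, 0.2],
              [-0.1, 0.8, -0.3],
              [0.2, -0.3, 0.6]])

pairs = [(0, 1), (0, 2), (1, 2)]
print('2x2 principal submatrices copositive:',
      all(copositive_2x2(M[np.ix_(p, p)]) for p in pairs))
print('3x3 matrix copositive (sampled):', copositive_sampled(M))
```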
Procedure
Let the VEV of S_1 be w_1 and let the VEV of S_2 be w_2.⁵ Then, the vacuum stability conditions are given in equations (2.14a)-(2.14c).
⁴ The principal submatrices are obtained by deleting rows and columns of the original matrix in a symmetric way, i.e. when one deletes the i_1, i_2, . . . , i_k rows one also deletes the i_1, i_2, . . . , i_k columns.
⁵ In appendix B we demonstrate that stability points of the potential with either w_1 = 0 or w_2 = 0 have a higher value of the potential and cannot therefore be the vacuum.
Using equation (1.2) with G^+ = 0 and G^0 = 0, i.e. in the unitary gauge, together with S_1 = w_1 + σ_1 and S_2 = w_2 + σ_2, one obtains the mass terms of the scalars. One diagonalizes the real symmetric mass matrix M by means of a 3 × 3 orthogonal matrix R, which may be parameterized through three angles ϑ_1, ϑ_2, ϑ_3; here, c_j = cos ϑ_j and s_j = sin ϑ_j for j = 1, 2, 3. The fields H, σ_1, and σ_2 are thereby rotated into the h_j, the physical scalars, i.e. the eigenstates of mass; the scalar h_j has squared mass M_j. We assume that h_1 is the already-observed scalar. The interactions of the scalars with W^+ W^− are given by equation (1.8b).
We define the sign of the field h 1 to be such that the coupling of h 1 to W + W − has the same sign as in the Standard Model. Thus, we choose −π/2 < ϑ 1 < π/2.
According to equation (2.15), one obtains the expressions for g_3 and g_4, cf. equations (2.21); the oblique parameter T is given by equation (2.24) [36]. In our numerical work we use as input the nine quantities v, w_1, w_2, M_1, M_2, M_3, ϑ_1, ϑ_2, and ϑ_3, which are equivalent to the nine parameters of the scalar potential µ_1, m²_1, m²_2, λ_1, ψ_1, ψ_2, ψ_3, ξ_1, and ξ_2. We input equations (1.6) and choose arbitrary values for M_2 > 0 and M_3 > 0 such that M_2 ≤ M_3 (this represents no lack of generality, it is just the naming convention for h_2 and h_3). We enforce no lower bound on M_2 and M_3; in particular we allow them to be lower than M_1 = (125 GeV)². The VEVs w_1 and w_2 are chosen positive; this corresponds to the freedom of choice of the signs of S_1 and S_2. The angle ϑ_1 is in either the first or the fourth quadrant, with cos ϑ_1 > 0.9 (2.25), so that the h_1 W^+ W^− coupling is within 10% of its Standard Model value. The angle ϑ_2 is in the first quadrant; this corresponds to a choice of the signs of the fields h_2 and h_3. The angle ϑ_3 may be in any quadrant. We firstly compute T according to equation (2.24). The solid line in figure 1 was obtained through a random scan of the parameter space; it is not an analytical bound.
In figure 2 we display the predictions for g 3 and g 4 . In order to produce that figure we have randomly generated √ M 2 , √ M 3 , and the VEVs w 1 and w 2 in the range 0 to 10 TeV. One sees that g 3 is always positive but below its SM value when M 2 > M 1 ; when M 2 < M 1 the allowed range for g 3 becomes much wider. When the masses of the new scalars get higher, g 3 takes values closer to the SM value. An important point is that g 3 remains of the same order of magnitude as in the SM, but g 4 may reach 15 times its SM value.
In the left panel of figure 3 one sees that when cos ϑ_1 → 1 the coupling g_3 necessarily approaches its SM value. This behaviour follows from equation (2.21e) and c_1 > 0.9, which implies |s_1| ≪ c_1. On the other hand, g_4 is not correlated with cos ϑ_1, as one sees in the right panel of figure 3.
3 The two-Higgs-doublet model
We next consider the model with two scalar gauge-SU(2) doublets φ_1 and φ_2 having the same weak hypercharge. This is usually known as the 2HDM. The scalar potential is given by equation (2.1a), with V_2 and V_4 as in equations (3.1), where µ_{1,2} and λ_{1,2,3,4} are real. The ten (real) coefficients in V_4 may be grouped as [37] η_00 = λ_1 + λ_2 + 2λ_3, together with further combinations defined in equations (3.2). Under a (unitary) change of basis of the scalar doublets, η_00 is invariant, while the other combinations transform with an SO(3) matrix O, cf. equation (3.3). Only quantities and procedures that are invariant under the transformation (3.3) are meaningful.
Unitarity conditions
We write the components of the two doublets as complex fields a, b (for φ_1) and c, d (for φ_2).
Channel 5 produces the scattering matrix (3.6). A similarity transformation transforms the matrix (3.6) into the direct sum of two 4 × 4 matrices, equations (3.7a) and (3.7b).
Here, the quantity I defined in equation (3.8) is invariant under a change of basis of the doublets. It is obvious that the eigenvalues of the matrices (3.7) are invariant under such a change too.
Channel 4 produces the scattering matrix (3.9), which may readily be shown to be similar to M_1. Channel 3 produces the scattering matrix (3.10); that matrix may be brought by a similarity transformation to a simpler form. Thus, the unitarity conditions for the scalar potential of the 2HDM are the following: the eigenvalues of the two 4 × 4 matrices (3.7) and of the 3 × 3 matrix (3.12), and I in equation (3.8), should have moduli smaller than 4π. These conditions were first derived in refs. [38,39]. We emphasize that they are, as they should be, invariant under a change of basis of the two doublets.
Bounded-from-below conditions
Necessary and sufficient conditions for the scalar potential of the 2HDM to be BFB were first derived in ref. [37]. Ivanov [40] and Silva [41] later produced other, equivalent conditions to the same effect. We have implemented numerically both the conditions of ref. [37] and those of ref. [41]. We have found that the Ivanov-Silva algorithm runs several times faster than the one of ref. [37]. We have also checked that all the points produced by either algorithm were validated by the other one.
The points in our scatter plots were produced by using the algorithm of ref. [41]. That algorithm runs as follows. One constructs the 4 × 4 matrix Λ_E of equation (3.16) and one computes its four eigenvalues. Then the potential is BFB if all the following conditions apply: • All four eigenvalues are real.
• All four eigenvalues are different from each other.
We may now return to inequality (3.23), which implies, in principle, many more necessary conditions for boundedness-from-below. Setting for instance sin θ = cos θ one concludes that a further inequality must hold for any h and α. Therefore [44], one obtains the conditions (3.27). We have numerically analyzed the BFB conditions by giving random values to λ_{1,2,3,4}, |λ_{5,6,7}|, arg(λ*_5 λ_6 λ_7), and arg(λ*_6 λ_7) and then checking whether the BFB conditions are met. We have confirmed that the conditions (3.22), (3.25), and (3.27) always hold.⁶
Procedure
We consider the most general 2HDM and purport to find out its ranges for g_3 and g_4. We use the Higgs basis for the scalar doublets; in that basis only φ⁰_1 has a VEV and therefore φ_1 has the expression (1.2), while φ_2 is written as in equation (3.28). In equation (3.28), σ_1 and σ_2 are real fields and C^+ is the physical charged scalar of the 2HDM. We emphasize that using the Higgs basis represents no lack of generality, because both the unitarity and the BFB conditions are the same in any basis. Since only φ_1 has VEV, the vacuum stability conditions are µ_1 = −λ_1 v² and µ_3 = −λ_6 v² [46]. The coupling µ_2 in equation (3.1a) is unrelated to the parameters of V_4; one may trade it for the charged-Higgs squared mass M_C = µ_2 + λ_3 v². The mass terms of H, σ_1, and σ_2 are given by line (2.15b), with the corresponding mass matrix taken from ref. [46]. The matrix M is diagonalized through equations (2.17)-(2.19).
⁶ The BFB conditions worked out in this subsection are, clearly, the ones valid at tree level. At loop level the BFB conditions change, see ref. [45].
The three invariants of M are given in equations (3.30). We input parameters λ_{1,2,···,7} that satisfy both the unitarity conditions and the BFB conditions of subsections 3.1 and 3.2, respectively.⁷ We also use the values of M_1 and v in equations (1.6). Two of the resulting equations are quadratic in M_C. By using the fact that both quadratic equations (3.31) must hold for the same value of M_C, one is able to compute both M_C and cos²ϑ_1. We thus get to know the full matrix M, hence its eigenvalues M_2 and M_3 and its diagonalizing matrix R.
We require cos ϑ_1 > 0.9. We also compute the oblique parameter T through equation (3.32), where T_singlets is given by equation (2.23). We require −0.04 < T < 0.20. We have applied the method devised in ref. [41] to guarantee that our assumed vacuum state is indeed the state with the lowest possible value of the potential. The method may be described as follows. Let the matrix Λ_E in equation (3.16) have four eigenvalues Λ_{0,1,2,3}. We already know, from the BFB conditions, that those eigenvalues must be real and different from each other; let us order them as Λ_0 > Λ_1 > Λ_2 > Λ_3. Let the charged-Higgs squared mass be M_C; define ζ ≡ 2M_C/v². Then, the assumed vacuum state is the global minimum of the potential if either ζ > Λ_0, or Λ_0 > ζ > Λ_1, or Λ_2 > ζ > Λ_3. This test led us to discard about 10% of our previous set of points.
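The global-minimum test just described reduces to a few comparisons once the eigenvalues of Λ_E and M_C are known; a minimal sketch follows, with hypothetical input numbers.

```python
V = 174.0  # GeV; VEV convention used in the paper

def vacuum_is_global_minimum(lambda_e_eigenvalues, m_c_squared, v=V):
    """Test of ref. [41]: order the eigenvalues of Lambda_E as L0 > L1 > L2 > L3
    and check where zeta = 2*M_C/v**2 falls."""
    l0, l1, l2, l3 = sorted(lambda_e_eigenvalues, reverse=True)
    zeta = 2.0 * m_c_squared / v**2
    return zeta > l0 or l0 > zeta > l1 or l2 > zeta > l3

# Hypothetical eigenvalues of Lambda_E and charged-Higgs squared mass (GeV^2).
print(vacuum_is_global_minimum([1.9, 0.7, -0.2, -1.1], 450.0**2))
```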
The four-Higgs vertex is given by equation (3.33) and the three-Higgs vertex by equation (3.34). We also want to consider the h_1 C^+ C^− vertex, which may be relevant in the discovery of the charged scalar. That vertex is given by equation (3.35), where, in the 2HDM, the coupling g_1CC is given by equation (3.36).
Results
As we know from subsections 3.1 and 3.2, in general λ_1 can take any value in between 0 and 4π/3. Once the constraint cos ϑ_1 > 0.9 is imposed, however, λ_1 can be no larger than ∼ 1; this is illustrated in figure 7. The closer cos ϑ_1 is to 1, the closer λ_1 must be to its SM value M_1/(2v²) = 0.258; note that λ_1 is almost always larger than its SM value when cos ϑ_1 > 0.9; the minimum value that we have obtained for λ_1 is 0.2135. If cos ϑ_1 ≲ 0.99, then the masses of the new scalar particles of the 2HDM, namely √M_C, √M_2, and √M_3, can be no larger than ∼ 700 GeV; if cos ϑ_1 ≲ 0.95, they can be no larger than ∼ 550 GeV. When cos ϑ_1 becomes close to 1, the masses of the new scalar particles may reach O(TeV); this is illustrated in figure 8.
One sees in figure 9 that √M_C and √M_2 differ by at most ∼100 GeV unless 200 GeV < √M_C < 500 GeV. (Remember that by convention M_2 is always smaller than M_3, but they may be smaller than M_1.) We now come to the predictions for g_3 and g_4 in the 2HDM, which are depicted in figure 10. One sees that g_3 in the 2HDM has a range only slightly larger than in the SM2S, while g_4 in the 2HDM is much more restricted than in the SM2S; g_4/g_4^SM ≲ 4 in the 2HDM but g_4/g_4^SM ≲ 15 in the SM2S. An interesting feature is that g_3 may be zero or even negative, i.e. it may have sign opposite to the one in the SM. (We recall that the sign of g_3 is measured relative to the sign of c_1; we arrange that c_1 is always positive.) On the other hand, g_4 is always positive because of the boundedness from below of the potential.
In figure 11 we depict the coupling g_1CC of the 125 GeV neutral scalar to a pair of charged scalars in the 2HDM. One sees that that coupling is in between −200 GeV and 1,700 GeV. The expression for g_1CC is given in equation (3.36).
The two-Higgs-doublet model plus one singlet
We consider in this section the two-Higgs-doublet model with the addition of one real SU(2) × U(1)-invariant scalar field S. We assume a symmetry S → −S. As a shorthand, we shall dub this model the 2HDM1S (other authors use just 2HDMS [47]). The quartic part of the scalar potential is given in equation (4.1).
Bounded-from-below conditions
Deriving necessary and sufficient BFB conditions for even a rather simple potential like the one in equation (4.1) is a notoriously difficult problem [48]. If V 4 were negative for some possible values of S 2 , φ † 1 φ 1 , φ † 2 φ 2 , and φ † 1 φ 2 , then V 4 would tend to −∞ upon multiplication of those four values by an ever-larger positive constant. Therefore, we want V 4 to be positive for all possible values of S 2 , φ † 1 φ 1 , φ † 2 φ 2 , and φ † 1 φ 2 . In order to guarantee this, we proceed in the following fashion.
Necessary condition 1. When S 2 = 0, equation (4.1) reduces to its first two lines, i.e. to the quartic potential of the 2HDM. Therefore, one must require the fulfilment of the conditions of subsection 3.2, viz. the four conditions in between equations (3.16) and (3.18).
Necessary condition 2. When φ_1†φ_2 = 0, V_4 reduces to a quadratic form in φ_1†φ_1, φ_2†φ_2, and S². Since φ_1†φ_1, φ_2†φ_2, and S² are positive definite quantities, we must require copositivity of that form [34,35]; in particular, ψ > 0 (4.3a). After enforcing the necessary condition 1, we know that V_4 > 0 when only the first two lines of the potential (4.1) exist; after enforcing the inequality (4.3a), we know that V_4 > 0 when only the third line exists. If we guarantee that the fourth line of the potential (4.1) is always positive too, then we will be sure that V_4 is always positive. We therefore have the following⁸ Sufficient condition. If, besides the two necessary conditions, the inequalities (4.4) hold, then V_4 is BFB. Among the sets of parameters of the potential (4.1) that we have randomly generated, there were some that met both the two necessary conditions and the sufficient conditions (4.4); we have used those sets of parameters. There were many other sets that satisfied the two necessary conditions but did not meet the sufficient conditions (4.4); for those sets, we have numerically found the absolute minimum of V_4. We have done this by using S² = 1 together with equations (3.19) and by minimizing V_4 in the domain r² > 0, 0 ≤ θ ≤ π/2, 0 ≤ h ≤ 1, and 0 ≤ α < 2π. If the minimum of V_4 is positive, then the set of input parameters is good; otherwise, the set of input parameters is bad and one must discard it.
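A minimal sketch of such a numerical BFB check is given below. The function v4 is only a stand-in for the actual quartic potential obtained from equations (3.19) and (4.1), which is not reproduced here; several random starting points are used because the potential may possess more than one local minimum in the stated domain.

```python
import numpy as np
from scipy.optimize import minimize

def v4(params, couplings):
    """Stand-in for V4(theta, h, alpha) at S^2 = 1; replace with the expression
    obtained from equations (3.19) and (4.1)."""
    theta, h, alpha = params
    c = couplings
    return (c[0] * np.cos(theta) ** 4 + c[1] * np.sin(theta) ** 4
            + c[2] * h ** 2 * np.sin(2.0 * theta) ** 2 * np.cos(alpha) + c[3])

def v4_minimum(couplings, n_starts=50, seed=0):
    """Numerically minimize V4 over 0<=theta<=pi/2, 0<=h<=1, 0<=alpha<2*pi."""
    rng = np.random.default_rng(seed)
    bounds = [(0.0, np.pi / 2.0), (0.0, 1.0), (0.0, 2.0 * np.pi)]
    best = np.inf
    for _ in range(n_starts):
        x0 = [rng.uniform(lo, hi) for lo, hi in bounds]
        res = minimize(v4, x0, args=(couplings,), bounds=bounds, method='L-BFGS-B')
        best = min(best, float(res.fun))
    return best

vmin = v4_minimum([0.4, 0.3, -0.2, 0.1])
print('minimum of V4:', vmin, '-> keep set' if vmin > 0 else '-> discard set')
```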
Unitarity conditions
There are the same five scattering channels as in the 2HDM, cf. subsection 3.1; but the channel Q = T 3 = 0 has an additional scattering state S 2 . Additionally, there are two extra scattering channels: • The channel Q = 1, T 3 = 1/2 with the two states aS and cS.
• The channel Q = 0, T 3 = −1/2 with the two states bS and dS.
Both these channels produce a scattering matrix, which we call M_4. Thus, the unitarity conditions for the 2HDM1S are the following: both |I| and the moduli of all the eigenvalues of the 2 × 2 matrix M_4, of the 3 × 3 matrix M_3, of the 4 × 4 matrix M_1, and of the 5 × 5 matrix M_2 must be smaller than 4π.
Procedure
Just as in the previous section, we utilize the Higgs basis for the two doublets, i.e. equations (1.2) and (3.28). We also write S = w + σ, where w is the VEV of the scalar S and σ is a field. The mass terms of the scalars are analogous to those of the 2HDM, cf. equation (3.29). One diagonalizes M by means of an orthogonal matrix R, which is now 4 × 4. The squared mass M_1 is given by equation (1.6a). Without loss of generality, M_2 < M_3 < M_4. Just as in the previous sections, we require cos ϑ_1 > 0.9, which is condition (4.10). The expression for the oblique parameter T follows ref. [36], and we demand −0.04 < T < 0.20. We input random values for the 15 real parameters M_C, λ_{1,2,3,4}, |λ_{5,6,7}|, ψ, ξ_{1,2}, |ξ_3|, arg(λ*_5 λ_6 λ_7), arg(λ*_6 λ_7), and arg(λ*_6 ξ_3). We moreover input M_1 and v² given in equations (1.6). Then,
1. We require the input parameters to satisfy the BFB conditions of subsection 4.1; this may imply a numerical minimization of V_4 to check that V_4 > 0.
2. We require the input parameters to satisfy the unitarity conditions written after equation (4.6).
3. We compute the VEV w from the condition that M 1 should be an eigenvalue of the matrix M .
4. We enforce the conditions in appendix C. They guarantee that the vacuum state with v = 174 GeV and w ≠ 0 has a lower value of the potential than all the other possible stability points of the potential.
5. We compute the full matrix M, its eigenvalues M_{2,3,4}, and its diagonalizing matrix R; we choose the overall sign of R such that R_11 ≡ c_1 > 0.
6. We impose both the condition (4.10) and the condition that the oblique parameter T is within its experimental bounds.
Results
In figure 12 we have plotted the differences among the masses of the scalars against the mass of the charged scalar. One sees that √ M C and √ M 3 cannot be more than ∼ 300 GeV from each other, but √ M 2 may be much smaller than both of them. In figure 13 we present a scatter plot of the mass of the lightest non-SM neutral scalar against c 1 . One sees that, contrary to what happens in the 2HDM (cf. figure 8), √ M 2 may reach 1 TeV even when c 1 is as low as 0.9.
We depict in figure 14 the three- and four-Higgs couplings g_3 and g_4 in the 2HDM1S. The main difference relative to the 2HDM (cf. figure 10) is that g_4 may be much higher, just as in the SM2S. In the 2HDM1S there is no clear correlation between g_3 and g_4. In figure 15 we have plotted the h_1 C^+ C^− coupling g_1CC. That coupling in the 2HDM1S may be more than two times larger than in the 2HDM; very large values of g_1CC occur even for c_1 very close to 1. This is because the right-hand side of equation (4.14) may be dominated by its fourth term when w ≫ v. The first term displays the same behaviour as the corresponding term in the 2HDM, viz. it is usually positive and no larger than 1,500 GeV, but it is often overwhelmed by the fourth term.
Conclusions
In this paper we have emphasized that both the bounded-from-below (BFB) conditions and the unitarity conditions for the two-Higgs-doublet model (2HDM) are invariant under a change of the basis used for the two doublets. Therefore, one may implement those conditions directly in the Higgs basis, viz. the basis where only one doublet has vacuum expectation value. This procedure allows one to extract bounds on the masses and couplings of the scalar particles of the most general 2HDM, disregarding any symmetry that a particular 2HDM may possess. We have focussed on the three couplings g_3 (h_1)^3, g_4 (h_1)^4, and g_1CC h_1 C^+ C^−, where h_1 is the observed neutral scalar with mass 125 GeV and C^± are the charged scalars of the 2HDM. We have utilized the same procedure for two other models, namely the Standard Model with the addition of two real singlets (SM2S) and the two-Higgs-doublet model with the addition of one real singlet (2HDM1S), in both cases with reflection symmetries acting on each of the singlets. We have found, for instance, that: • The coupling g_3 may, in both the 2HDM and the 2HDM1S, have sign opposite to the one in the SM. On the other hand, in any of the three models that we have studied, |g_3| can hardly be much larger than in the SM.
• The coupling g_4, which is always positive because of BFB, may for all practical purposes be equal to zero in all the three models. (As a matter of fact, g_3 = g_4 = 0 is possible in all three models.) But it may also be much larger than in the SM. A distinguishing feature is that g_4 may be much larger (up to g_4 ∼ 0.5) in the models containing singlets than in the 2HDM, wherein it can at best reach g_4 ∼ 0.13.
• The coupling g 1CC may be of order TeV, but only when the mass of C ± exceeds 300 GeV; in general, a positive g 1CC may be larger for higher masses of C ± , but g 1CC may also be negative for any C ± mass. Moreover, g 1CC may be more than two times larger (either positive or negative) in the 2HDM1S than in the 2HDM.
A comparison of the predictions of the three models for g 3 and g 4 is depicted in figure 16.
We emphasize that our method may be used to obtain bounds and/or correlations among other parameters and/or observables of these models. Unfortunately, it may be difficult to generalize our work to more complicated models, both because they may contain too many parameters and because it is very difficult to derive full BFB conditions for even rather simple models.
A The Higgs Singlet Model
The Higgs Singlet Model (HSM) is the Standard Model with the addition of one real scalar singlet S. We furthermore assume a symmetry S → −S. The scalar potential has just five parameters µ, m², λ, ψ, and ξ. The bounded-from-below (BFB) conditions and the unitarity conditions constrain those parameters. We assume that φ_1 has VEV v and S has VEV w. We write S = w + σ together with equation (1.2). The mass matrix for H and σ is diagonalized through a rotation by an angle ϑ, where c ≡ cos ϑ and s ≡ sin ϑ. We assume |c| > 0.9. The oblique parameter must satisfy −0.04 < T < 0.20. The three- and four-Higgs couplings are then expressed in terms of the masses, the VEVs, and the mixing angle. In figure 17 we compare the predictions of the HSM and of the SM2S for g_3 and g_4. One sees that there is no substantial difference between the two models.
B Other stability points of the SM2S potential
In this appendix we consider more carefully the various stability points of the potential of the SM2S in equation (2.1). The vacuum value of that potential is given by equation (B.3a). Equations (2.14) follow from the assumption that v, w_1, and w_2 are not zero. After some convenient definitions, the mass matrix M of the scalars, which is real and symmetric, is given in equation (2.16). We assume that M has three positive eigenvalues M_1, M_2, and M_3. It follows that all the principal minors of M are positive.⁹ (This is called 'Sylvester's criterion' [49].) The resulting inequalities display some resemblance to the BFB conditions (2.12), (2.13).
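The link between positive eigenvalues and positive principal minors is easy to verify numerically; the short sketch below, with an arbitrary example matrix, enumerates all principal submatrices and checks the sign of their determinants.

```python
import itertools
import numpy as np

def all_principal_minors_positive(m):
    """Check that every principal minor (determinant of a submatrix obtained by
    deleting rows and columns symmetrically) is positive."""
    n = m.shape[0]
    for size in range(1, n + 1):
        for idx in itertools.combinations(range(n), size):
            if np.linalg.det(m[np.ix_(idx, idx)]) <= 0.0:
                return False
    return True

# Arbitrary example of a real symmetric matrix with positive eigenvalues.
M = np.array([[2.0, 0.3, -0.1],
              [0.3, 1.5, 0.2],
              [-0.1, 0.2, 1.0]])

print('eigenvalues:', np.linalg.eigvalsh(M))
print('all principal minors positive:', all_principal_minors_positive(M))
```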
We now consider other stability points of the potential where either v or w 1 or w 2 vanish.
Finally, there is another stability point, with its own value of the potential.
⁹ The principal minors of a square matrix are the determinants of its principal submatrices.
From inequalities (B.4c) and (B.4f) it follows that V^(4) ≤ V^(2) is equivalent to an inequality which, in turn, is equivalent to one that is obviously true. One thus concludes that V^(4) can never be larger than V^(2). In similar fashion one finds analogous relations among the other stability points. Next consider the inequality V_0 ≤ V^(4). Because of (B.4f) and (B.4g), it is equivalent to an inequality that may be rewritten in a form which is of course true. In similar fashion one obtains the remaining inequalities.
We have thus demonstrated that, because of our assumption that all three eigenvalues of the matrix M are positive, V 0 is smaller than V 1,2,3,4,5,6 , viz. the stability point of V with nonzero v, w 1 , and w 2 is the vacuum.
This result may be easily understood in the following way. The potential (2.1) of the SM2S may be rewritten as in equation (B.17), where V_0 is the vacuum expectation value of the potential given in equation (B.3a) and X is a suitably defined column vector. We assume that the point X = (0, 0, 0)^T is a local minimum of the potential V. Then, since the potential in equation (B.17) is a quadratic form in X, the point X = (0, 0, 0)^T must also be the global minimum of V.¹⁰
C Global minimum conditions for the 2HDM1S
In the 2HDM1S, we define¹¹ q_1 = φ_1†φ_1, q_2 = φ_2†φ_2, z = φ_1†φ_2, and q_3 = S². We define the column vector X = (q_1, q_2, z, z*, q_3)^T. The scalar potential of the 2HDM1S may then be written as in equation (C.2). The coefficients µ_1, µ_2, µ_3, and µ_4 contained in the column vector Y have squared-mass dimension; µ_3 is in general complex while µ_1, µ_2, and µ_4 are real. The coefficients contained in the symmetric matrix Λ are treated by us as an input, cf. section 4.3. Since we study the 2HDM1S in the Higgs basis, where φ_2 has zero VEV, in the vacuum one has q_2 = z = z* = 0, q_1 = v², and q_3 = w²; the vacuum expectation value of the potential is given in equation (C.4).
¹⁰ We thank Igor Ivanov for presenting this argument to us.
¹¹ Since we only analyze the potential at the classical level, we simplify the notation by treating the fields as c-numbers instead of q-numbers.
It follows that the minimization conditions (C.5) hold. Solving the system (C.5) for v² and w² and plugging the solution into equation (C.4), one obtains equation (C.6). Moreover, in the Higgs basis one has equations (C.7). In equation (C.7a), M_C is the squared mass of the charged scalar; we treat it as an input, just as v and w.¹² By using equations (C.5) and (C.7) we find the values of µ_1, µ_2, µ_3, and µ_4 from the input. We want to check that, for each set of input parameters (i.e. λ_{1,...,7}, ξ_{1,2,3}, ψ, v, w, and M_C) in our data set, the state that we assume to be the vacuum, characterized by q_2 = z = z* = 0, is indeed the global minimum of the potential. In order to do this we must consider all the other possible stability points of the potential and check that the value of the potential at each of those points is larger than V_0 in equation (C.6). The stability points may either be inside the domain defined by equations (C.1) or they may lie on a boundary of that domain. There is only one possible stability point inside the domain; differentiating equation (C.2) with respect to X, we find that it is given by equations (C.8). For each set of input parameters, we have computed the column vector X^(1) by using equation (C.8a). If that vector happened to be inside the domain, viz. if X_4 > 0, then we computed V^(1) by using equation (C.8b). We checked whether V^(1) > V_0; if the latter condition did not hold, then we discarded that set of input parameters.
Next we have considered the various possible stability points on boundaries of the domain. Firstly, there is the boundary with q_3 = 0 but q_1 > 0, q_2 > 0, and |z|² < q_1 q_2. In that case the potential takes a restricted form, and there is one possible stability point on this boundary. For each set of input parameters, we have computed the column vector X̃^(2) by using equation (C.11a). Whenever that vector happened to fulfil the appropriate conditions, we computed the corresponding value of the potential and discarded the set of input parameters if that value was smaller than V_0.
Secondly, we have checked a possible stability point with null q_1 (and z) instead of null q_2 (and z). In analogy with equations (C.5) and (C.6), in that case one has
q_2 = (−ψµ_2 + ξ_2 µ_4) / [ψλ_2 − (ξ_2)²],   (C.12a)
together with analogous expressions (C.12b) and (C.12c). For each set of parameters, we have computed q_2 and q_3 through equations (C.12a) and (C.12b), respectively. Whenever q_2 and q_3 were both positive, we have computed V^(3) through equation (C.12c); if V^(3) < V_0, then we discarded the set of parameters. Thirdly, we have considered the following possible stability points on boundaries of the domain:
1. The point q_1 = q_2 = z = q_3 = 0 has V = 0. Therefore, when V_0 > 0 we have discarded the set of parameters.
2. When q_1 = q_2 = z = 0 but q_3 ≠ 0, there is a stability point featuring the values in equations (C.13). Whenever q_3 in equation (C.13a) happened to be positive and simultaneously V^(4) in equation (C.13b) was smaller than V_0, we have discarded the set of parameters.
3. When q_1 = q_3 = z = 0 but q_2 ≠ 0, there is a stability point featuring the values in equations (C.14). Whenever q_2 in equation (C.14a) happened to be positive and simultaneously V^(5) in equation (C.14b) was smaller than V_0, we have discarded the set of parameters.
4. When q_2 = q_3 = z = 0 but q_1 ≠ 0, there is a stability point featuring the values in equations (C.15). Whenever q_1 in equation (C.15a) happened to be positive and simultaneously V^(6) in equation (C.15b) was smaller than V_0, we have discarded the set of parameters.
For each set of parameters of the potential, we have searched for solutions, i.e. for q_1 > 0, q_2 > 0, q_3 > 0, and a phase θ satisfying the system (C.17) of four equations. (This proved to be a highly nontrivial task.) Whenever we found a solution, we computed the value of the potential at that point through equation (C.16) and checked whether it was smaller than V_0; when that happened for at least one solution of (C.17), we have discarded the corresponding set of parameters.
If Ṽ_0 < V_0, where Ṽ_0 is given by equation (C.19c), for any solution of equations (C.18), then we discarded the set of input parameters. By applying all the tests in this appendix, we have eliminated about half of our initial set of sets of input parameters. Thus, the tests in this appendix prove crucial in the correct analysis of the 2HDM1S.
We have also applied the tests in this appendix, with the necessary simplifications, to the case of the 2HDM [50]. In particular, in that case we do not have to solve the very complicated system of four equations (C.17); we only have to solve the much easier system of three equations (C.18). We have checked that the tests in this appendix yield, for the 2HDM, exactly the same result as the much simpler method described in the paragraph between equations (3.32) and (3.33).
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited. | 2023-01-21T14:29:21.699Z | 2018-12-01T00:00:00.000 | {
"year": 2018,
"sha1": "6c75dcb773f76b4c64f37fd17e9e6a1a1c9eb86c",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP12(2018)004.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "6c75dcb773f76b4c64f37fd17e9e6a1a1c9eb86c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
1614261 | pes2o/s2orc | v3-fos-license | Cholesterol-Enhanced Polylactide-Based Stereocomplex Micelle for Effective Delivery of Doxorubicin
Nanoscale micelles as an effective drug delivery system have attracted increasing interest in malignancy therapy. The present study reported the construction of the cholesterol-enhanced doxorubicin (DOX)-loaded poly(D-lactide)-based micelle (CDM/DOX), poly(L-lactide)-based micelle (CLM/DOX), and stereocomplex micelle (CSCM/DOX) from the equimolar enantiomeric 4-armed poly(ethylene glycol)–polylactide copolymers in aqueous condition. Compared with CDM/DOX and CLM/DOX, CSCM/DOX showed the smallest hydrodynamic size of 96 ± 4.8 nm and the slowest DOX release. The DOX-loaded micelles exhibited a weaker DOX fluorescence inside mouse renal carcinoma cells (i.e., RenCa cells) compared to free DOX·HCl, probably because of a slower DOX release. More importantly, all the DOX-loaded micelles, especially CSCM/DOX, exhibited the excellent antiproliferative efficacy that was equal to or even better than free DOX·HCl toward RenCa cells attributed to their successful internalization. Furthermore, all of the DOX-loaded micelles exhibited the satisfactory hemocompatibility compared to free DOX·HCl, indicating the great potential for systemic chemotherapy through intravenous injection.
on its function [35]. Cholesterol can be incorporated into DDSs to facilitate cellular uptake, which should be attributed to its excellent structural compatibility with the cell membrane [36,37]. There have been many studies on the influence of cholesterol on DDSs, which show positive effects on nanosized drug carriers [38,39].
In this work, DOX was loaded into the cholesterol-mediated 4-armed PEG-PDLA-, PEG-PLLA-, and equimolar PEG-PDLA/PEG-PLLA-based micelles, denoted CDM/DOX, CLM/DOX, and CSCM/DOX, respectively, by nanoprecipitation (Scheme 1). The formulations, especially CSCM/DOX, exhibited excellent antiproliferative activities toward RenCa cells (a mouse renal carcinoma cell line), which were even better than free DOX·HCl. In addition, the DOX-loaded micelles showed decreased hemolysis rates compared with free DOX·HCl, which endowed them with great potential for in vivo applications.
Fabrication and Characterization of DOX-Loaded Micelles
Branched and multi-armed copolymers have attracted great attention because of their superior rheological, mechanical, and biomedical properties compared with linear copolymers, and they have been widely explored [40]. In this work, pairs of enantiomeric cholesterol-modified 4-armed copolymers of PEG-PDLA and PEG-PLLA were employed as matrices for controlled antitumor drug delivery. The amphiphilic nature of the 4-armed PEG-PLA-cholesterol copolymers enabled them to self-assemble into micelles in aqueous solution. As shown in Scheme 1, DOX was physically encapsulated by CDM, CLM, or CSCM by nanoprecipitation, yielding CDM/DOX, CLM/DOX, and CSCM/DOX, respectively. The drug-loading contents (DLCs) of CDM/DOX, CLM/DOX, and CSCM/DOX were calculated to be 8.3, 8.8, and 9.5 wt.% and the drug-loading efficiencies (DLEs) were 45.1, 48.2, and 52.2 wt.%, respectively. Compared to CDM/DOX and CLM/DOX, CSCM/DOX exhibited the highest DLC and DLE because of the stable stereocomplex crystallization of enantiomeric PLA in the micellar core [31]. The successful preparation of the various DOX-loaded micelles was confirmed by transmission electron microscopy (TEM) and dynamic laser scattering (DLS). As shown in Figures 1A-C, all the laden micelles were well dispersed with clear spherical morphologies. The apparent mean diameters of CDM/DOX, CLM/DOX, and CSCM/DOX were around 100, 90, and 80 nm, respectively. In comparison, the hydrodynamic diameters (Dhs) of these micelles tested by DLS were 118 ± 5.2, 104 ± 4.3, and 96 ± 4.8 nm, respectively (Figure 1D-F). Furthermore, the polydispersity indices (PDIs) of the Dhs were calculated to be 0.14, 0.14, and 0.13, respectively, which indicated the narrow size distribution of the loaded micelles. The sizes of the DOX-loaded micelles measured by DLS were larger than those detected by TEM mainly because of the hydration state of the micelles in the DLS tests [41]. The particle size is an important factor for DDSs to deliver drugs into tumors effectively. The appropriate sizes of these micelles might endow them with excellent capability for selective accumulation in tumors through the EPR effect [10]. Moreover, as shown in Figure 2, the laden micelles showed excellent stability for at least 72 h when incubated in phosphate-buffered saline (PBS) at pH 7.4.
DOX Release from Various Formulations
The in vitro release behaviors of these micelles were examined in PBS at pH 7.4, imitating the conditions in normal physiological tissues. As shown in Figure 3, CDM/DOX and CLM/DOX showed a similar release behavior, which presented three phases: an initial burst release stage, in which 65% of the loaded DOX was released during 6 h; a continuous slow release phase, in which 85% of the loaded DOX had been released by 36 h; and a plateau period, in which only a little of the loaded DOX was released up to 72 h. The initial burst release might be attributed to the absorption of DOX by the shallow parts of the micelles. The mechanism of slow release might be related to the diffusion of DOX through the micelles and the degradation of the PLA blocks [42]. Interestingly, CSCM/DOX exhibited slower drug release than the single-component micelles because of the enhanced stability of CSCM [43]. As time was extended to 36 h, the DOX release from all these laden micelles tended to level off. In addition, the cumulative DOX release from the two single-component micelles was about 90 wt.% in 72 h, while that of CSCM/DOX was about 75 wt.%. The results showed that these micelles, especially CSCM, could load and controllably release DOX effectively. Furthermore, the cellular uptakes and intracellular release behaviors of these DOX-loaded micelles were explored on RenCa cells through confocal laser scanning microscopy (CLSM) and flow cytometry (FCM). As shown in the CLSM microimages (Figure 4A), the fluorescence intensity of cells co-cultured with free DOX·HCl for 2 h was higher than that of cells treated with the DOX-loaded micelles. This might be because the cellular uptake of free DOX·HCl by diffusion was quicker than that of the DOX-incorporated micelles by endocytosis [44]. Moreover, the fluorescence intensity of DOX in the CSCM/DOX group was higher than those of the CDM/DOX and CLM/DOX groups, which was possibly attributed to the slower extracellular DOX release and more efficient DOX release under intracellular conditions [32]. For further confirmation, the FCM histograms of the DOX-loaded micelles and free DOX·HCl were recorded and are shown in Figure 4B. RenCa cells without any treatment served as the blank control, which showed only the autofluorescence of the cells. The signal intensities of CDM/DOX and CLM/DOX differed little. The fluorescence intensity of CSCM/DOX in the nuclei was higher than those of CDM/DOX and CLM/DOX, and free DOX·HCl exhibited the highest fluorescence intensity. The results of FCM agreed well with those of CLSM, which showed the effective internalization of the DOX-loaded micelles by RenCa cells.
In Vitro Assessment of Cell Viability
In order to evaluate the toxicity profiles of the DOX-loaded micelles and free DOX·HCl, the viability of RenCa cells was evaluated by a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. The cellular proliferation inhibition capabilities of the DOX-loaded micelles and free DOX·HCl were compared. As shown in Figure 5, CDM/DOX and CLM/DOX showed lower proliferation inhibitory efficacy than free DOX·HCl at equivalent DOX concentration after incubation for 48 h. This phenomenon might be due to the presence of cholesterol, which enters the cells possibly through a low-density lipoprotein receptor-mediated endocytosis pathway [45,46], and to the high energy dependency of the endocytosis process [47]. However, CSCM/DOX exhibited more effective proliferation inhibition effects on RenCa cells than CDM/DOX, CLM/DOX, and free DOX·HCl. This might be related to the greater stability of the SCM, so that less cholesterol appeared on the surface of CSCM/DOX and interfered less with endocytosis. Moreover, the improved stability of the micelle meant less extracellular drug release. Therefore, a greater amount of drug was internalized into the cells by endocytosis and released in a sustained manner inside the tumor cells. The half maximal inhibitory concentrations (IC50) of CDM/DOX, CLM/DOX, CSCM/DOX, and free DOX·HCl were calculated to be 0.44, 0.37, 0.22, and 0.33 μg mL−1, respectively. The lowest IC50, that of CSCM/DOX, quantitatively showed its enhanced antiproliferative capability against the tumor cells and its potential advantage as an antitumor drug formulation.
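The IC50 values quoted above are normally extracted by fitting the dose-response data to a sigmoidal curve. The original article does not state which model was used; the sketch below assumes a common four-parameter logistic fit, and the concentration-viability data in it are invented for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response model (viability in %)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical DOX concentrations (ug/mL) and mean viabilities (%).
conc = np.array([0.16, 0.31, 0.63, 1.25, 2.5, 5.0, 10.0])
viability = np.array([92.0, 80.0, 61.0, 45.0, 31.0, 22.0, 15.0])

popt, _ = curve_fit(four_param_logistic, conc, viability,
                    p0=[10.0, 100.0, 1.0, 1.0], maxfev=10000)
print('fitted IC50 = %.2f ug/mL' % popt[2])
```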
Evaluations of Serum Albumin-Tolerance Stability and Hemocompatibility
The evaluations of serum albumin-tolerance stability and hemocompatibility of the laden micelles are necessary because the corresponding formulations are designed to be finally administered via intravenous injection [48]. In this study, the stability of these DOX-loaded micelles incubated in PBS-buffered bovine serum albumin (BSA) solution (30.0 mg mL−1) was tested by DLS at 25 °C. As shown in Figure 6, all the laden micelles exhibited excellent stability during the measurement period of 72 h. Moreover, the Dhs of these laden micelles in PBS with BSA were similar to the above results in PBS without BSA (Figure 2). This indicated that the DOX-loaded micelles kept excellent stability in BSA solution. As mentioned above, excellent hemocompatibility is an important precondition for the DOX-loaded micelles to be finally applied in the clinic through intravenous injection, which is the main administration approach for most drug delivery systems [49]. As shown in Figure 7, the hemolytic activities of these micelles and free DOX·HCl were tested by a spectrophotometric approach. The profiles revealed that these micelles showed almost no obvious hemolytic activity at DOX·HCl concentrations up to 1.0 mg mL−1, while free DOX·HCl caused more serious hemolysis of red blood cells (RBCs). The results demonstrated that the DOX-loaded micelles had satisfactory blood compatibility.
Materials
4-Armed PEG with a number-average molecular weight of 10,000 Da was purchased from Shanghai Seebio Biotech, Inc. (Shanghai, China) and used as received. DLA and LLA were provided by Changchun SinoBiomaterials Co., Ltd. (Changchun, China) and recrystallized from ethyl acetate under argon atmosphere before use. 4-Armed PEG-PDLA and PEG-PLLA were synthesized according to our previously reported protocol [32]. In brief, 10.0 g (1.0 mmol) of 4-armed PEG was azeotropically distilled with 200.0 mL of toluene at 120 °C to remove traces of water. Then, 5.8 g (40 mmol) of DLA or LLA and 100.0 mL of dried toluene were added into the PEG solution. The reaction was performed at 110 °C for 24 h. After the reaction, the copolymer was precipitated in 500.0 mL of diethyl ether. Then the product was dissolved in dichloromethane and precipitated in diethyl ether again. This operation was repeated three times. The obtained product was filtered and dried in vacuum overnight. The degree of polymerization (DP) of PLA in each arm was calculated to be 16 based on proton nuclear magnetic resonance data. The number-average molecular weight (Mn) of the copolymer was estimated to be 14,600 g mol−1. The cholesterol-modified copolymers were prepared through the condensation reaction between 4-armed PEG-PLA and cholesteryl chloroformate [50]. Cholesteryl chloroformate, MTT, 4′,6-diamidino-2-phenylindole (DAPI), Alexa Fluor 488 phalloidin (Alexa 488), and BSA were purchased from Sigma-Aldrich (Shanghai, China). Doxorubicin hydrochloride (DOX·HCl) was obtained from Beijing HuaFeng United Technology Co., Ltd. (Beijing, China). Clear 6-well and 96-well tissue culture polystyrene (TCP) plates were obtained from Corning Costar Co. (Cambridge, MA, USA). The deionized water was prepared with Milli-Q water purification equipment (Millipore Co., Milford, MA, USA).
DOX Encapsulation
DOX was loaded into the micelles through a nanoprecipitation method [51]. In brief, DOX·HCl (21.3 mg) was dissolved in 6.0 mL of Milli-Q water and then slowly added into 10.0 mL of 4-armed PLLA-PEG-cholesterol copolymer solution in N,N-dimethylformamide (DMF) (10.0 mg mL−1). After that, 2.0 mL of PBS was added into the mixed solution. The mixture was stirred continuously at room temperature for 12 h and subsequently dialyzed against deionized water for 12 h (molecular weight cut-off (MWCO) = 3500 Da). Finally, CLM/DOX was obtained by lyophilization. Both CDM/DOX and CSCM/DOX were fabricated by the same protocol.
In order to determine the DLC and DLE, the DOX-loaded micelles were dissolved in DMF and stirred for 12 h at room temperature. After that, the amount of DOX in the micelles was determined by fluorescence spectroscopy with a standard curve method on a Fluorescence Master System with the associated software.
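Assuming the usual definitions — DLC as the mass of loaded DOX divided by the mass of the drug-loaded micelles, and DLE as the mass of loaded DOX divided by the mass of DOX initially added — the calculation from a fluorescence standard curve can be sketched as follows; all numerical values are placeholders rather than data from this study.

```python
import numpy as np

# Hypothetical fluorescence standard curve: intensity = slope * concentration + intercept.
std_conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])        # ug/mL
std_intensity = np.array([105.0, 212.0, 530.0, 1050.0, 2090.0])
slope, intercept = np.polyfit(std_conc, std_intensity, 1)

# Hypothetical measurement of a dissolved drug-loaded micelle sample.
sample_intensity = 780.0      # fluorescence reading of the DMF solution
dmf_volume_ml = 10.0          # volume used to dissolve the sample
micelle_mass_mg = 1.0         # mass of drug-loaded micelles dissolved
dox_fed_mg = 0.20             # hypothetical mass of DOX*HCl fed for this amount of micelle

dox_conc_ug_per_ml = (sample_intensity - intercept) / slope
dox_mass_mg = dox_conc_ug_per_ml * dmf_volume_ml / 1000.0

dlc = 100.0 * dox_mass_mg / micelle_mass_mg
dle = 100.0 * dox_mass_mg / dox_fed_mg
print('DLC = %.1f wt.%%, DLE = %.1f wt.%%' % (dlc, dle))
```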
Measurements
At the predetermined times after the dissolution of DOX-loaded micelles in PBS at pH 7.4, the Dhs of laden micelles were determined using DLS at 25 °C on a WyattQELS apparatus. The intensity results were obtained, and the average histograms were given. The PDI of Dh was defined as the ratio of standard deviation and mean of Dh. TEM experiments were carried out on a JEOL JEW-1011 instrument operating at an accelerating voltage of 100 kV. 10.0 μL of loading micelle solution (0.1 mg mL −1 ) was dipped on a copper grid and then dried at room temperature in the air.
In Vitro DOX Release
The release profiles of the DOX-loaded micelles were assessed in PBS at pH 7.4, mimicking normal physiological conditions. In brief, 1.0 mg of CDM/DOX, CLM/DOX, or CSCM/DOX was dissolved in 10.0 mL of PBS and then transferred into a dialysis bag (MWCO = 3500 Da). After that, the dialysis bag was put into a beaker, and 100.0 mL of PBS was added at 37 °C with continuous shaking at 70 rpm. At the predetermined time points, 2.0 mL of release medium was withdrawn for analysis, and an equal volume of fresh PBS was added into the beaker. The amount of released DOX was determined using fluorescence spectroscopy.
Intracellular DOX Release Analyses
The abilities of micelles to transport DOX into RenCa cells were qualitatively detected by CLSM and quantitatively estimated by FCM.
CLSM
The cells were seeded on glass coverslips in 6-well plates at a density of 2.0 × 10^5 cells per well in 2.0 mL of complete high glucose Dulbecco's modified Eagle's medium (HG-DMEM) and cultured at 37 °C for 24 h. CDM/DOX, CLM/DOX, CSCM/DOX, or free DOX·HCl was added to each well at a final DOX·HCl concentration of 10.0 μg mL−1. After co-incubation for 2 h, the medium was removed and the cells on the glass coverslips were washed with PBS five times. After that, the cells were fixed with 4% (w/v) PBS-buffered paraformaldehyde for 20 min at room temperature. The cells were then washed with PBS five times and permeabilized with 0.1% (v/v) Triton X-100 in PBS for 12 min at room temperature. After being washed with PBS five times, the cells were subsequently stained with DAPI for 3 min. Afterwards, the cells were washed with PBS five times. Finally, the filamentous actin was stained with Alexa 488 for 30 min at 37 °C, followed by washing with PBS five times. The CLSM microimages of the cells were acquired on an LSM 780 CLSM (λex = 488 nm; Carl Zeiss, Jena, Germany).
FCM
RenCa cells were seeded in 6-well plates at a density of 2.0 × 10 5 cells per well and cultured with 2.0 mL of complete HG-DMEM for 24 h. And then, CDM/DOX, CLM/DOX, CSCM/DOX, or free DOX· HCl was added to each well with a final DOX· HCl concentration of 10.0 μg mL −1 . Cells without treatment were used as control. After co-culture for 2 h, the medium was removed and the cells were washed with PBS five times. After that, all the cells were digested by trypsin, suspended in PBS, and centrifuged at 3500 rpm for 5 min. The supernatant was discarded and the bottom cells were resuspended in 0.3 mL of PBS. Data was analyzed by a flow cytometer (λex = 488 nm; Beckman, CA, USA).
Cytotoxicity Assays
The cytotoxicities of the DOX-loaded micelles and free DOX·HCl, at DOX·HCl concentrations from 0.16 to 10.0 μg mL−1, were evaluated toward RenCa cells by an MTT assay. In brief, 180.0 μL of cell suspension containing 8.0 × 10^3 cells in complete HG-DMEM was planted into 96-well plates and incubated at 37 °C for 24 h. Then, the various DOX formulations in 20.0 μL of PBS were added to each well and cultured for another 48 h. Subsequently, 20.0 μL of MTT at a concentration of 5.0 mg mL−1 was added and incubated for a further 4 h. After that, the medium was carefully removed, and 150.0 μL of dimethyl sulfoxide (DMSO) was added to each well to dissolve the MTT formazan generated by the live cells. The plates were vibrated for 5 min before detection. The absorbance of the medium was measured at 490 nm using a Bio-Rad 680 microplate reader. The cell viability was calculated as Equation (3): cell viability (%) = (Asample/Acontrol) × 100%. In Equation (3), Asample and Acontrol represented the absorbances of the sample and control wells, respectively.
Serum Albumin-Tolerance Stability Assays
The stability of DOX-loaded micelles in PBS-buffered BSA solution (30 mg mL −1 ) at pH 7.4, 25 °C, was tested by DLS at different time points.
Hemolysis Activity Tests
The hemolytic activities of CDM/DOX, CLM/DOX, CSCM/DOX, and free DOX·HCl were evaluated by a spectrophotometric technique. Fresh rabbit blood was obtained from the Experimental Animal Center of Jilin University, and the blood was stabilized with dipotassium ethylene diamine tetraacetate in normal saline (NS). The blood was centrifuged at 1500 rpm for 10 min in order to separate the RBCs. The obtained RBCs were then carefully washed and diluted. Next, CDM/DOX, CLM/DOX, CSCM/DOX, and free DOX·HCl at different concentrations were added to the suspended RBCs and incubated at 37 °C for 2 h. NS was used as the negative control and Triton X-100 (i.e., a lysing agent of RBCs) was used as the positive control. After that, the RBCs were separated at 3000 rpm for 10 min, and 180.0 μL of the supernatant of each sample was collected and added into a 96-well plate. The free hemoglobin in the supernatant was then measured using a Bio-Rad 680 microplate reader at 570 nm. The hemolytic ratio of the RBCs was calculated as Equation (4): hemolysis (%) = (Asample − Anegative control)/(Apositive control − Anegative control) × 100%. In Equation (4), Asample, Anegative control, and Apositive control represented the absorbances of the sample, and the negative and positive controls, respectively.
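Equation (4) can be evaluated directly from the plate-reader absorbances; a short sketch with invented absorbance values follows.

```python
import numpy as np

def hemolysis_percent(a_sample, a_negative, a_positive):
    """Hemolytic ratio according to Equation (4)."""
    return 100.0 * (a_sample - a_negative) / (a_positive - a_negative)

# Hypothetical absorbances at 570 nm.
a_neg = 0.060                                        # normal saline (negative control)
a_pos = 1.250                                        # Triton X-100 (positive control)
a_samples = np.array([0.072, 0.081, 0.095, 0.410])   # increasing DOX*HCl concentration

print(np.round(hemolysis_percent(a_samples, a_neg, a_pos), 1))
```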
Conclusions
In summary, CDM/DOX, CLM/DOX, and CSCM/DOX were constructed with diameters of ~100 nm, which is an appropriate size for selective accumulation in tumor tissue through the EPR effect. Compared to CDM/DOX and CLM/DOX, which are composed of a single polymeric component, CSCM/DOX showed a smaller particle size and slower DOX release. Moreover, all these DOX-loaded micelles, especially CSCM/DOX, could be effectively internalized by RenCa cells. More importantly, CSCM/DOX showed a higher antiproliferative activity toward RenCa cells than both CDM/DOX and CLM/DOX, and even free DOX·HCl, after incubation for 48 h. In addition, all the DOX-loaded micelles showed satisfactory biocompatibility compared with free DOX·HCl. Overall, CSCM may serve as an effective drug delivery system in the clinical chemotherapy of malignancy.
"year": 2015,
"sha1": "6d5c31527e0114dd1c8355a177c29de73ec2b28b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/8/1/216/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6d5c31527e0114dd1c8355a177c29de73ec2b28b",
"s2fieldsofstudy": [
"Biology",
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
52837652 | pes2o/s2orc | v3-fos-license | Electrical Properties of Yttria-Stabilized Zirconia, YSZ Single Crystal: Local AC and Long Range DC Conduction
Widely-used complex plane analysis of impedance data is insufficiently sensitive to characterize fully the bulk properties of YSZ single crystal. Instead, more extensive data analysis is needed which uses a combination of parallel, admittance-based formalisms and series, impedance-based formalisms. Bulk electrical properties are measured at higher frequencies and contain contributions from both long range conduction and local dielectric relaxation. At lower frequencies, electrode–sample contact impedances are measured and are included in full equivalent circuit analysis. The impedance of YSZ crystal of composition 8 mol% Y2O3 in the (110) orientation, with Pt electrodes, was measured over the temperature range 150–750°C and frequency range 0.01 Hz-3 MHz. Full data analysis required (i) a parallel constant phase element (CPE)–resistance (R) combination to model the electrode response, (ii) a series R-C element to represent local reorientation of defect dipoles and (iii) a R-C-CPE element to represent long range oxide-ion conduction; (ii) and (iii) together model the bulk response. The dielectric element underpins all discussions about defect structure and properties of YSZ but has not been included previously in analysis of impedance data. The new equivalent circuit that is proposed should allow better separation of bulk and grain boundary impedances of YSZ ceramics.
Yttria-stabilized zirconia (YSZ) is a very well-known oxide ion conductor that is used as the solid electrolyte in solid oxide fuel cells and oxygen gas sensors. [1][2][3] It usually takes the form of a high-density ceramic in which the bulk resistance in series with a grain boundary resistance gives the overall sample resistance. 4 In almost all cases, the grain boundary resistance is present and cannot be eliminated readily by attention to ceramic processing conditions. The nature of the grain boundary impedance is often unclear, although significant compositional differences from the bulk, associated with dopant segregation, may be involved. 5,6 In order to measure sample impedances, sample-electrode contacts are necessary and therefore, appropriate consideration of their associated impedances forms part of the overall impedance analysis. For YSZ, contact impedances include contributions from the blocking of oxide ions at the sample-electrode interface, charge transfer resistances associated with the O 2− /O 2 redox couple and the diffusion of O 2 molecules between the surrounding atmosphere and the sample-electrode interface. Usually, electrode contact impedances are well-separated on a frequency scale from bulk/grain boundary impedances because they have much higher associated capacitances, C: typically (1-10) × 10 −6 F for the electrode contact compared with ∼ 1 × 10 −10 F for a grain boundary capacitance and (2-3) × 10 −12 F for a bulk capacitance. Relaxation frequencies, ω, are given ideally by ωτ = 1, where τ = RC and therefore, the frequency maxima of impedance semicircles, arcs or peaks associated with bulk/grain boundary impedances are usually separated by several decades from those of electrode-sample contact impedances. This allows the visualization and characterization of sample properties without the necessity to eliminate sample-electrode contact impedances but, of course, a full analysis of properties, including the modelling of sample-electrode impedances, can also be carried out, as shown here.
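The frequency separation described above follows directly from ωτ = 1 with τ = RC; the short sketch below, using representative (not measured) resistance and capacitance values of the kind quoted above, shows how far apart the corresponding relaxation frequencies fall.

```python
import numpy as np

def relaxation_frequency_hz(r_ohm, c_farad):
    """f_max from omega*tau = 1 with tau = R*C."""
    return 1.0 / (2.0 * np.pi * r_ohm * c_farad)

# Representative (illustrative) R and C values, not the measured ones of this study.
elements = {
    'bulk':           (5.0e4, 3.0e-12),
    'grain boundary': (2.0e5, 1.0e-10),
    'electrode':      (1.0e5, 5.0e-6),
}

for name, (r, c) in elements.items():
    print('%-15s f_max = %.2e Hz' % (name, relaxation_frequency_hz(r, c)))
```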
Conductivity Arrhenius plots for oxide ion conduction in YSZ have been reported on many occasions and we do not give a comprehensive survey of the early literature here. Many data sets, especially for compositions that exhibit high oxide ion conductivity, show distinct curvature of the Arrhenius plots which is attributed to the trapping at lower temperatures of mobile oxide ion vacancies in vacancy-dopant defect complexes. 3 The activation energy is typically ca 1.1 eV at low temperatures and ca 0.85 eV at high temperatures; the difference of ca 0.25 eV is often regarded as the dissociation enthalpy of the defect complexes. [11][12][13][14][15] It is standard practice in the analysis of impedance data of YSZ ceramics and single crystals to present data in the form of impedance complex plane plots, Z″ vs Z′, and to obtain bulk conductivity data from the low frequency intercept of the high frequency arc or (distorted) semicircle on the real, Z′, axis; sometimes, the data are fitted to a semicircle whose center is depressed below the Z′ axis. 16,17 Impedance complex plane, Z*, plots represent a good method to separate bulk and grain boundary resistances since the appropriate equivalent circuit for data analysis is a series combination of the parallel RC elements that represent the bulk and grain boundary components. However, Z* plots on linear scales give undue weighting to the largest resistances in a sample and, effectively, exclude from view any low resistance components such as those associated with inhomogeneous ceramics that may have conductive grain cores but resistive grain boundaries.
A more comprehensive analysis of impedance data that avoids such weighting is obtained by presenting the same impedance data in at least two of the four formalisms: impedance Z*, admittance Y*, permittivity ε* and electric modulus M*, using the interconversions18,19

M* = jωC0Z*,  ε* = 1/M*,  Y* = 1/Z* = jωC0ε*,

where C0 is the vacuum capacitance of the conductivity cell, without a sample in place. Each of these formalisms has real and imaginary components, such as Z* = Z' − jZ''. Following these interconversions, data may be plotted as either complex plane (or Nyquist) plots, e.g. Z'' vs Z', or as spectroscopic (or Bode) plots, e.g. Z'', M'' vs log f. It was shown recently that accurate fitting of impedance data of YSZ ceramics to the traditional equivalent circuit consisting of a series combination of bulk and grain boundary impedances was not entirely successful. Instead, better agreement was obtained on introduction of an additional series RC element, attributed to a localized dipolar reorientation process, into the equivalent circuit;20 this element was placed in parallel with the element representing long range dc conduction through the sample.
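A minimal numerical sketch of these interconversions (assuming arrays of measured frequencies and complex impedances, and a known empty-cell capacitance C0; the variable names are ours) might look as follows.

```python
import numpy as np

def immittance(freq_hz, z_star, c0):
    """Convert measured complex impedance Z*(f) into Y*, M* and epsilon*."""
    omega = 2.0 * np.pi * np.asarray(freq_hz)
    z = np.asarray(z_star)
    y = 1.0 / z                   # admittance, Y* = 1/Z*
    m = 1j * omega * c0 * z       # electric modulus, M* = j*omega*C0*Z*
    eps = 1.0 / m                 # relative permittivity, eps* = 1/M*
    return y, m, eps

# Example: a single parallel RC element (R = 100 kohm, C = 3 pF), C0 = 0.1 pF.
f = np.logspace(1, 7, 61)
z = 1.0 / (1.0 / 1.0e5 + 1j * 2 * np.pi * f * 3.0e-12)
y, m, eps = immittance(f, z, c0=1.0e-13)
```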
The objectives of the present work were to (1) obtain impedance data on a YSZ single crystal that therefore, did not contain any grain boundary contribution, (2) find the most appropriate equivalent circuit to model the bulk impedance data by considering the possibility of a parallel combination of both long range and local conduction processes, (3) extract values for the various circuit parameters as a function of temperature and (4) model and characterize the impedance response of the sample-electrode interface.
Experimental
Single crystals of yttria-stabilized zirconia of composition 8 mol% Y2O3 (8YSZ) were obtained from Pi-Kem. The crystals were provided as plates parallel to lattice planes of the set {110} with dimensions 5 × 5 × 0.5 mm. The crystals were already polished on a pair of opposite plate faces and were used as-received. Electrodes were fabricated from Pt paste applied to the opposite plate faces, which was dried and hardened by heating at 900 °C for 2 h. The crystals were then attached to the Pt leads of a conductivity jig which was placed inside a horizontal tube furnace.
Impedance measurements in air were obtained over the temperature range 150 to 750 °C and recorded using two instruments, an Agilent 4294A over the frequency range 40 Hz to 3 MHz and a Solartron 1260A over the frequency range 0.01 Hz to 1 MHz. Most of the data were collected using the Agilent, but use of the Solartron enabled an extra three decades at low frequency to be accessed, especially for measurements at high temperatures; the nominal ac voltage used was 100 mV. At each temperature, the system was allowed to equilibrate for 1 h, without voltage applied, prior to impedance measurements. Data were analyzed using Zview (Scribner Associates Inc.) software. Impedance data were corrected for crystal geometry and electrode contact area; this allowed resistance and capacitance to be reported in resistivity and permittivity units of Ω cm and F cm−1, respectively. Open circuit measurements of an empty jig were used to obtain the blank parallel capacitance, C0, of the jig and leads, which was subtracted from the values obtained with a sample present in the jig. In order to obtain the C0 value, the jig was assembled with hardened Pt electrodes of similar dimension, but without a sample in place. Closed circuit measurements were obtained by connecting the two electrodes directly and used to correct for the series jig resistance.
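The corrections described above can be sketched as follows; this is our own illustration of the procedure, with assumed parameter names, not the authors' analysis code.

```python
import numpy as np

def correct_impedance(freq_hz, z_measured, area_cm2, thickness_cm,
                      r_series_ohm=0.0, c_blank_farad=0.0):
    """Apply lead-resistance, blank-capacitance and geometry corrections (sketch)."""
    omega = 2.0 * np.pi * np.asarray(freq_hz)
    z = np.asarray(z_measured) - r_series_ohm        # closed-circuit series resistance of jig/leads
    y = 1.0 / z - 1j * omega * c_blank_farad         # subtract open-circuit (blank) parallel capacitance
    z = 1.0 / y
    return z * area_cm2 / thickness_cm               # report in ohm.cm (resistivity units)
```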
The first objective in data analysis was to find the most appropriate equivalent circuit to represent the data sets; to achieve this, data were presented in various formats so as to gain a visual overview of the various components shown by the data. We found the following presentations to be particularly appropriate and they were the ones used here.
First, Z'' vs Z' plots served to highlight the main resistive components but had the disadvantage that small additional resistances were effectively hidden. As also shown later, Z'' vs Z' plots were not a good discriminator between the possible equivalent circuits.
Second, log Y' vs log f plots gave equal weighting to the various conducting elements and in particular, highlighted the presence of a high frequency dispersion which was modelled using the bulk constant phase element, CPE.
Third, combined Z''/M'' vs log f plots were examined to see whether the main resistance, shown by the largest arc in plots of Z'' vs Z' or the largest peak in plots of Z'' vs log f, represented the sample bulk. If it did, the Z'' peak should coincide approximately with the largest peak in M'' vs log f, which corresponds to the bulk response since it represents the element with the smallest (i.e. bulk) capacitance.
Fourth, log C' vs log f [C' ≡ ε'C0] plots were examined since they gave equal weighting across the frequency spectrum to the various capacitive elements, including the limiting high frequency capacitance, any intermediate frequency capacitances and low frequency, electrode-sample contact capacitances.
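A sketch of how the quantities behind these four presentations could be generated from a single set of Z*(f) data (our own helper, based on the interconversions given earlier):

```python
import numpy as np

def presentation_data(freq_hz, z_star, c0):
    """Return the quantities plotted in the four presentations described above."""
    omega = 2.0 * np.pi * np.asarray(freq_hz)
    z = np.asarray(z_star)
    y = 1.0 / z
    m = 1j * omega * c0 * z
    return {
        "Z_real": z.real,       "Z_imag": -z.imag,       # 1: Z'' vs Z' complex plane plot
        "logY_real": np.log10(y.real),                   # 2: log Y' vs log f
        "Z_imag_spec": -z.imag, "M_imag_spec": m.imag,   # 3: combined Z''/M'' spectroscopic plots
        "logC_real": np.log10(y.imag / omega),           # 4: log C' vs log f, with C' = Y''/omega
    }
```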
The second objective in data analysis was to fit the experimental data to possible equivalent circuits. It is essential to identify the most appropriate equivalent circuit in order to have correct equations with which to evaluate the component R, C and CPE parameters. Fit quality and accuracy were assessed both by visual inspection of the data in the various formalisms, as indicated above, and from the residuals between experimental and fitted data.
The third objective was to determine the dependence of the various circuit component values on temperature and interpret the component parameters in terms of sample characteristics.
Results and Discussion
Impedance data are shown for the (110) orientation in Figure 1 as (a) a Z* complex plane plot and (b) Y', (c) M''/Z'', and (d) C' spectroscopic plots at one representative temperature, 306 °C.
At this temperature, data are dominated by the sample response, whereas sample-electrode contact impedances start to appear in the data at the lowest frequencies. The Z* data (a) show a slightly distorted high frequency arc with a low frequency inclined spike (inset). The initial interpretations of these data are as follows. Using conventional complex plane analysis, the high frequency arc (a) can be fitted to an appropriate function and the dc resistance value of the sample obtained from the low frequency intercept on the Z' axis. The capacitance associated with the sample resistance can be obtained from the arc maximum using the relation ωmaxRC = 1. As expected for a single crystal, there is no additional arc at lower frequencies associated with a grain boundary impedance. The low frequency spike represents the onset of the sample-electrode contact impedance.
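As an illustration of this step (a sketch only; the crude intercept estimate assumes the arc is well separated from the electrode spike), the bulk R and C could be extracted as:

```python
import numpy as np

def bulk_r_and_c(freq_hz, z_star):
    """Estimate bulk R from the real-axis span of the arc and C from omega_max*R*C = 1."""
    z = np.asarray(z_star)
    r_bulk = z.real.max()                       # crude low-frequency intercept for a single arc
    i_max = int(np.argmax(-z.imag))             # arc maximum (largest -Z'')
    omega_max = 2.0 * np.pi * np.asarray(freq_hz)[i_max]
    c_bulk = 1.0 / (omega_max * r_bulk)
    return r_bulk, c_bulk
```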
On presenting the data in other formats (b-d), additional features are seen. Y' data (b) show a low frequency plateau corresponding to the dc conductivity, that is also obtained from the Z''/Z' plots, and in addition, a high frequency, power law dispersion. Such dispersions are a characteristic feature of all ionically-conducting materials (and probably, many semiconducting materials as well) and are a manifestation of Jonscher's Universal Dielectric Response.21 The dispersions correspond to regions of the frequency/time domain where local conduction processes occur but on shorter timescales than dc processes at lower frequencies. With increasing frequency in the dispersion region, increasingly easier conduction processes are detected and the measured ac conductivity rises.
Over the years, various empirical functions had been used to model the dispersion region until the seminal work of Jonscher, who recognized the universal occurrence of a power law dependence of ac conductivity on frequency. Most recently, Almond and co-workers demonstrated that such power law dependence is a natural consequence of an equivalent circuit that consists of a large resistor-capacitor network.22,23 Until their demonstration, the significance of the characteristic slope, n, of the log conductivity-log frequency plots was not well appreciated, but it is now regarded simply as the ratio between the numbers of capacitive and resistive connections in the network. This high frequency power law conductivity dispersion is modelled in equivalent circuits by inclusion of a CPE whose admittance takes the form Y* = B(jω)^n. The bulk electrical properties of many ionic conductors are modelled well by an equivalent circuit A, Figure 2, that consists of a parallel combination of a resistance, R, which represents the dc conductivity, a capacitance, C, which represents the limiting high frequency permittivity, often given the symbol ε∞, and a CPE which represents the power law dispersion.
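A sketch of the admittance of circuit A (R, C and CPE in parallel), using the standard CPE form given above; the parameter values in the example are illustrative only, not fitted values from this work.

```python
import numpy as np

def circuit_A_admittance(freq_hz, R, C, B, n):
    """Y* of circuit A: dc conduction (1/R), limiting permittivity (C) and CPE dispersion."""
    omega = 2.0 * np.pi * np.asarray(freq_hz)
    return 1.0 / R + 1j * omega * C + B * (1j * omega) ** n

# The real part of Y* shows a low-frequency plateau at 1/R and a power-law rise of slope n
# at high frequencies on a log-log plot, as in Figure 1b.
f = np.logspace(0, 7, 71)
y = circuit_A_admittance(f, R=1.0e5, C=3.0e-12, B=1.0e-10, n=0.7)
```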
M''/Z'' plots, Figure 1c, show one main peak in each spectrum, with the peak maximum at slightly higher frequency for M'' than for Z''. The peak maximum of an ideal, Debye-like M'' peak is inversely proportional to the capacitance of the R-C element responsible for the peak, M''max = C0/2C, where C0 is the capacitance of the empty jig that contains electrodes in the same geometrical arrangement. Since the smallest capacitance in an equivalent circuit usually represents the bulk component, the M'' peak, and the associated Z'' peak, can be assigned to the bulk sample conductivity. The observed small separation in peak maximum frequencies (c) is a direct consequence of the presence of the CPE in the equivalent circuit A, Figure 2. In addition, the CPE causes the M'', Z'' peaks to broaden asymmetrically: the M'' peak is Debye-like at frequencies lower than the peak maximum but broadened at higher frequencies, whereas the Z'' peak is Debye-like at frequencies above the peak maximum but broadened at lower frequencies.24 C' data, Figure 1d, show two dispersions at high and low frequency with some evidence for both a limiting high frequency plateau at ∼2 pF cm−1 and a poorly-resolved intermediate frequency plateau at ∼6 pF cm−1. The high frequency plateau in C' corresponds to a permittivity of ∼25, using ε' = C'/e0, where e0 is the permittivity of free space, 8.854 × 10−14 F cm−1. This ε' value is attributed to the bulk permittivity, ε∞, of the crystal. C' data at lower temperatures show this plateau more clearly, Figure 1e. The intermediate frequency plateau has an effective permittivity of ∼70. It is not immediately obvious how this should be assigned since data obtained from single crystals should be free from any grain boundary or surface layer impedances and, anyway, such a capacitance value of ∼6 pF cm−1 would represent a significant volume fraction of the sample and be much smaller than expected for a grain boundary or surface layer.19 It therefore seems likely to represent an additional parallel element in the equivalent circuit.
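A short worked check of these permittivity assignments, using the plateau values quoted above:

```python
e0 = 8.854e-14              # permittivity of free space, F/cm
print(2.0e-12 / e0)         # high-frequency plateau, ~23 -> bulk permittivity eps_inf ~ 25
print(6.0e-12 / e0)         # intermediate plateau, ~68 -> effective permittivity ~ 70
```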
The high values reached by the low frequency dispersion in C', Figure 1d, and the observed low frequency spike in Z*, (a), are attributed to blocking capacitance effects at the crystal-electrode interface and, in particular, are associated with oxide-ion conduction of the YSZ crystal.25 Further interpretation of the impedance data required fitting to possible equivalent circuits, to establish the most appropriate circuit. The validity of the possible equivalent circuits was assessed in various ways: (1) by visual comparison of fitted and experimental data over the whole frequency range, using data presented in the various formalisms described above, and (2) from the residuals between experimental and fitted data. Although impedance data covering 8 decades of frequency were obtained, this was insufficient to fully fit a complete equivalent circuit at any single temperature. Consequently, partial circuits were used for three temperature ranges: (i) low, (ii) intermediate and (iii) high, which were then combined at the end of the analysis to give a master circuit. Finally, at the highest temperatures, (iv), modification to the circuit was required to include instrumentation-related inductive effects.
(i) Low temperature data, 170 to 220 °C
The first step to establish the most appropriate equivalent circuit was to find a partial circuit that fitted the lowest temperature data sets since, at these temperatures, only the bulk response was detected over the measuring frequency range. The partial circuit A shown in Figure 2 covers the high frequency data associated with the bulk response and includes both dc conduction and ac conductivities associated with short range, power law effects.21,26 An excellent fit of low temperature data to this partial circuit was obtained, as shown at 190 °C in Figure 3.
The presence of a CPE in the equivalent circuit was readily apparent in two ways. First, as shown in log Y' vs log f, the CPE represents the power law dispersion at high frequencies, with slope n, Figure 3c. Second, in plots of log C' vs log f, the CPE contributes a power law dispersion of slope (n−1) at lower frequencies because C' = Y''/ω = Bω^(n−1); this is seen over the frequency range ∼10^4-10^6 Hz in Figure 1d. In the analysis of high frequency data, it is essential that both CPE1 and C1 are included in the equivalent circuit.27 A CPE alone cannot account for experimental data in which both a frequency-independent ε∞ is detected at high frequencies and a frequency-dependent C' at lower frequencies.
Unfortunately, this point is often not recognized in the literature, perhaps because data may not extend to frequencies that are high enough to detect ε∞ and, therefore, equivalent circuits that are used to represent the bulk response may contain only R1 and CPE1. An alternative reason may be that data presentation is often limited to the use of Z'' vs Z' complex plane plots. These are completely insensitive to the presence of high frequency, power law impedances, which occur at frequencies close to the origin of Z'' vs Z' plots.
(ii) Intermediate temperature data, 260 to 440 °C
The second stage in finding an appropriate equivalent circuit was to consider data obtained at increasingly higher temperatures; additional impedance components became apparent in the lower frequency C' data and required inclusion of additional element(s) in the equivalent circuit. The effect of including various possible additional circuit elements was tested based on two strategies. One was to add a second element in series with the bulk element shown in circuit A, Figure 2. This would represent a second series impedance associated with the single crystal and correspond to an electrical inhomogeneity of some kind. Given the small value of the intermediate frequency capacitance seen in Figures 1d, 1e, this electrical inhomogeneity would correspond to a significant volume fraction of the crystal. The second strategy was to consider an additional impedance in parallel with the bulk conductivity represented by circuit A, Figure 2; in order for this to be detected as a separate element, it should have dielectric character and involve a series R-C combination.
It was found that partial circuit B, Figure 2, containing an additional parallel impedance, gave the best fit to the experimental data at intermediate temperatures. This partial circuit has the logical simplicity of combining, in parallel, a conductive element, R1-C1-CPE1, and a dielectric element represented by the C2-R2 series combination. Circuit B also contains a series element, CPE3, to represent the onset of impedances associated with the sample-electrode interface. Fits to experimental data recorded at 306 °C are shown in Figures 4a-4d; the residuals are shown in (e) and are small over the entire frequency range.
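A minimal sketch of how circuit B might be fitted numerically (our own illustration; the CPE parameterization and the modulus weighting are assumptions, not the settings used in Zview):

```python
import numpy as np
from scipy.optimize import least_squares

def circuit_B_impedance(freq_hz, R1, C1, B1, n1, R2, C2, B3, n3):
    """Z* of circuit B: (R1 || C1 || CPE1 || series R2-C2) in series with CPE3."""
    w = 2.0 * np.pi * np.asarray(freq_hz)
    y_bulk = (1.0 / R1 + 1j * w * C1 + B1 * (1j * w) ** n1
              + 1j * w * C2 / (1.0 + 1j * w * R2 * C2))    # admittance of the series R2-C2 branch
    return 1.0 / y_bulk + 1.0 / (B3 * (1j * w) ** n3)      # add the CPE3 electrode impedance

def residuals(p, freq_hz, z_obs):
    z_fit = circuit_B_impedance(freq_hz, *p)
    w = np.abs(z_obs)                                       # modulus weighting
    return np.concatenate(((z_fit.real - z_obs.real) / w,
                           (z_fit.imag - z_obs.imag) / w))

# Usage with illustrative starting values (not the paper's fitted parameters):
# p0 = [1e5, 3e-12, 1e-10, 0.7, 1e6, 4e-12, 1e-8, 0.8]
# fit = least_squares(residuals, p0, args=(freq_hz, z_measured))
```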
The suitability of several other plausible equivalent circuits was tested, as shown in Figure 5 and Table I, circuits (D) to (J). Each of these circuits contains the same element, CPE 3 to represent the onset of the electrode-sample interfacial impedance and, therefore, the circuits differ only in the element(s) that represent the bulk response. The results, Figure 5, show that all of these circuits were unsatisfactory for various reasons, as follows.
Circuits (D) and (E) are simple circuits that have a single conducting element to represent the sample bulk. Circuit (D) has the parallel element R 1 -CPE 1 whereas (E) has the parallel element R 1 -CPE 1 -C 1 . Both of these are often used in the literature to represent a bulk conductive response; they are in series with element CPE 3 to represent the sample-electrode interface. Residuals and fits for these circuits are not good.
Circuit (F) is the classic circuit used to represent many ceramics with a series combination of elements such as grains, grain boundaries and surface layers. Poor quality of the fits, of C' in particular, as well as poor residuals, show that this circuit is unsuitable.
Circuits (G), (H), (I) and (J) are other possible circuits that combine conductive and dielectric components, although none has the logical consistency of a conducting element, R1-C1-CPE1, in parallel with a separate dielectric element, R2-C2, that is present in circuit B. These circuits all gave poorer residuals than circuit (B), and circuit (I) also gave an unrealistically high value of C1.
(iii) High temperature data, 500 to 600 °C
The final step in obtaining a circuit that represents the complete range of impedance data was to fully characterize the sample-electrode contact impedance that is seen with increasing temperature and at lower frequencies. The complete, or master, equivalent circuit that includes partial circuits A and B is shown as circuit C in Figure 2, although at these high temperatures, it was not possible to have a sufficiently wide range of frequencies to include refinement of the parameters R2, C2 and C1 in data fitting. Element CPE3, which represents the sample-electrode interface at high temperatures, is modified by the addition of a parallel resistance, R3. Consequently, this interfacial impedance has a finite resistance, as shown by an extrapolated limiting low frequency intercept on the real Z' axis of the impedance complex plane plot, Figure 6a. Since, at high temperatures, data do not extend to frequencies that are high enough to include a significant contribution from elements C1 and R2-C2, circuit C is simplified to give the partial circuit shown in Figure 6a. Fits of Y' and C' spectroscopic plots to this partial circuit are shown in Figures 6b, 6c, with residuals in 6d.
(iv) Highest temperature data, 650 to 750 °C, with inductive effects
Impedance data at the highest temperatures were similar to those shown in Figure 6, but with one main difference. Instead of the impedance data showing the onset of the R1-CPE1 high frequency arc, an inductive effect was seen in which the impedance data at high frequencies cross the Z' axis to give positive values of Z''. This is shown in Figure 7a, together with an equivalent circuit containing a series inductance, L1, which gave a good fit to the data at these high temperatures. The effect of the inductance on log Y' at the highest frequencies is shown in (b) and is also seen as a resonance effect in the log C' data (c). There was no evidence for an inductive effect in lower temperature data, < 700 °C, and therefore the inductance is not included in the master circuit C.
Arrhenius plots for the conductivities σ 1 , σ 2 and σ t obtained from fitting to circuit B are shown in Figures 8a, 8b. The Arrhenius plot for σ t , (b), is not linear, consistent with that reported for various YSZ samples on many other occasions. 3,7,8,10,28,29 This non-linearity is widely attributed to trapping of oxygen vacancies in vacancy-dopant complexes at low temperatures: 30 at higher temperatures, dissociation of the complexes occurs and the trapping enthalpy is not included in the activation energy. The total conductivity, σ t , at high temperatures, would therefore represent the hopping of free vacancies.
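A sketch of how activation energies could be extracted from such data in the two temperature regimes (assuming the simple Arrhenius form σ = σ0 exp(−Ea/kT); this is our own helper, not the authors' analysis, and the temperature cut-offs are illustrative):

```python
import numpy as np

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy(temps_K, sigma):
    """Ea (eV) from a linear fit of ln(sigma) vs 1/T, assuming sigma = sigma0*exp(-Ea/kT)."""
    slope, _ = np.polyfit(1.0 / np.asarray(temps_K), np.log(np.asarray(sigma)), 1)
    return -slope * K_B_EV

# For a curved Arrhenius plot, fit the low- and high-temperature ranges separately, e.g.
# Ea_low  = activation_energy(T[T < 600.0], sigma_t[T < 600.0])
# Ea_high = activation_energy(T[T > 800.0], sigma_t[T > 800.0])
```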
The temperature dependence of R 3 is shown in Figure 8c and that of the CPE 1 parameters and C 1 is in Figure 8d. Resistance R 3 controls the total dc resistance of the sample-electrode arrangement and is associated, in some way, with the sample-electrode-air interface. It also has a very high activation energy, 2.5(1) eV. The two main processes taking place in the vicinity of the interface are redox electron transfer between oxygen species and the diffusion of O 2 molecules through the Pt electrode between the surrounding atmosphere and the sample-electrode interface, both of which could have a significant associated impedance. These processes may be significantly different for the flat, single crystal surfaces used here and the higher surface area, intrinsically rough, surfaces of most YSZ ceramics. Further work is required to better understand the nature of the interface reactions and their effect on resistance R 3 .
The Arrhenius plot for σ1 is parallel to that for σt at low temperatures and therefore has the same activation energy. The similarity of the conductivity data for σ1 and σt at low temperatures is rationalized using the equation for Y* of circuit B. The bulk component of circuit B has four elements in parallel: R1, C1, CPE1 and R2-C2, and therefore its admittance, Y*, can be written as the summation of their individual admittances:

Y* = 1/R1 + jωC1 + B1(jω)^n1 + jωC2/(1 + jωR2C2).

In the low frequency limit, as ω → 0, Y* = 1/R1 = σ1 = σt and therefore, σt contains no contribution from the dielectric resistance R2. Consequently, R2 makes no contribution to the intercept values of Rt in impedance complex plane plots and could not be detected by standard impedance complex plane analysis.
The Arrhenius plot for σ2 has a lower activation energy than that for σ1, and is similar to that of σt at high temperatures. σ2 appears to represent the reorientation of the vacancy-dopant complexes. The interpretation of its lower activation energy would be that dipole reorientation, represented by the series element R2-C2, does not require dissociation of the complexes; hopping of the oxygen vacancies within the complexes is therefore similar to the hopping of free vacancies at high temperature, without the need in either case for vacancy-complex dissociation.
In order to investigate the effect of heating to high temperatures and subsequent cooling rate on the conductivity, (110)-oriented YSZ single crystals were annealed in air at 1200 °C for 90 minutes and cooled at different rates. Figure 9a shows the Arrhenius plots of the samples cooled at different rates. All three data sets show non-linear Arrhenius plots, but slight differences can be observed in both the high temperature (b) and low temperature (c) ranges. At high temperature, the sample quenched in liquid N2 shows slightly lower conductivity than the samples cooled at 10 and 0.5 °C/min, which show similar conductivities. Conversely, at low temperature, the quenched sample shows higher conductivity values than the samples cooled at the intermediate and low cooling rates.
The equivalent circuit analysis results reported above have enabled us to identify the most appropriate equivalent circuits to represent the experimental data sets. At lower temperatures, where the bulk response of the crystals can be seen in the available frequency range, it is clear that the bulk response contains two components which are in parallel, rather than a series-connected circuit which is usually appropriate for ceramic materials consisting of grain and grain boundary components. Thus, as expected, there is no evidence of a component attributable to grain boundaries or, indeed, to a surface layer or crystal inhomogeneity. We are therefore now in a position to consider the possible mechanistic origins of the two parallel components, one of which represents long range conduction and the other of which appears to represent local conduction or a dielectric relaxation process. [Figure 6 caption: Impedance spectra for the 8YSZ single crystal with the field perpendicular to (110), measured at 500 °C. (a) Experimental and fitted data shown for the impedance complex plane plot with the equivalent circuit used, (b) Y' spectroscopic plot, (c) C' spectroscopic plot and (d) residuals.]
The traditional explanation of curvature in conductivity Arrhenius plots of YSZ ceramics and single crystals is the so-called 'dipole-trapping model', which invokes the trapping of mobile oxygen vacancies by the Y acceptor dopants. The trapping arises because the dipole components have charges of opposite sign, i.e., using Kröger-Vink notation, they are Y'Zr and V••O. At lower temperatures, an additional dissociation enthalpy is required to enable long range conduction of the oxygen vacancies and this gives rise to an activation energy which contains terms for both dipole dissociation and vacancy migration. At higher temperatures, above the region of curvature in the Arrhenius plots, it is presumed that a sufficient number of dipoles are dissociated and therefore, the observed lower activation energy contains only the vacancy migration term.
Our equivalent circuit B is at least partly consistent with this model; the series element R 2 -C 2 represents hopping of oxygen vacancies within the dipoles and therefore, leads to dipole reorientation but not long range vacancy migration. The dipole reorientation is an ac process only but occurs at the same time as long range dc conduction; therefore, R 2 does not contribute to the total crystal resistance R t (= R 1 ). From the difference in activation energies of σ 1 and σ 2 , the value of ∼0.2 eV may be assigned to the dissociation enthalpy. This value is similar to that reported in the literature [11][12][13][14][15] based on high and low temperature activation energies whereas here, both values are obtained from the same, low temperature, data sets. This simple dipole trapping model has certain drawbacks. As pointed out by Ahamer et al., 15 these YSZ materials cannot be regarded as dilute defect systems since the dopant Y concentrations are far too high; it is difficult to imagine how genuinely-free oxygen vacancies could arise since there will always be Y dopants in the near vicinity of the oxygen vacancies. It is also difficult to explain the differences in conductivity observed between quenched and slow-cooled crystals using this model. As shown by Ahamer et al. 15 and also in Figure 9, quenched crystals have a higher conductivity at low temperatures followed by a slightly smaller conductivity at high temperatures. The increased conductivity at lower temperatures in the quenched samples could be interpreted reasonably as an increase in number of free oxygen vacancies arising from the dipole dissociation at high temperatures prior to quenching. However, the higher temperature data imply a reduction in mobile vacancy concentration which cannot be explained by a simple model of dipole dissociation.
Given the large concentration of both Y dopants and oxygen vacancies, other reversible structural changes may occur as a function of temperature which influence the mobile carrier concentration at high temperatures. High temperature neutron diffraction studies on YSZ powders showed additional broad diffuse scattering peaks which disappeared above 650 °C and a discontinuity in thermal expansion coefficient data was used as evidence for the occurrence of a second order phase transition.31
We now consider the recently-proposed 'two different barrier heights' model15 in which the conduction pathway involves a sequential combination of hops over two different barrier heights. This model can also account for curvature in the Arrhenius plots. Thus, at low temperatures the higher barrier limits the long range conductivity. With increasing temperature, the higher barrier becomes less important since it has a higher activation energy than that for hops over the lower barriers. Consequently, at high temperatures, the lower barriers limit the long range conductivity.
A drawback of this model is that it is a series model and therefore, impedance data at low temperatures should fit an equivalent circuit that has two R-C components, representing the two barrier heights, which are placed in series. In addition, the spectrum of conductivity, Y' vs frequency, should show two plateaux, one representing the overall conductivity at low frequencies and a second one at higher frequencies that includes the conductivity of the easier hops. In our impedance data, there is no evidence of a second series component in the equivalent circuit nor of two plateaux in the conductivity, Y', spectra. As with the dipole dissociation model, there is also the difficulty in explaining the differences in conductivity of quenched and slow-cooled samples.
In conclusion, there is a closer fit of the dipole model to the equivalent circuit B that contains two parallel conduction pathways, but the dipole model is a significant approximation to what must be complex, co-operative conduction mechanisms in which two activation barriers can be identified. Further, the nature of the defect clusters may change at a second order transition and involve more structural reorganization than simple dipole dissociation.
Crystallographic evidence for defect clusters has been obtained by single crystal neutron diffraction studies on YSZ crystals with a range of Y contents32 and Sc-doped YSZ ceramics.33,34 An important cluster appears to be a pair of oxygen vacancies, separated by a cation, in the <111> direction. These are reported to be stable to high temperatures, close to melting. However, the precise nature of the structural changes to the defect complexes responsible for curvature in the Arrhenius plots is, at present, unknown. Possibly, two separate cluster formation mechanisms are involved; one involves Y'Zr-V••O pairs and the other involves pairs of oxygen vacancies. The increase in concentration of one kind of cluster may be at the expense of the second kind and this may be reflected in the conductivity data showing an enhanced conductivity at lower temperatures at the expense of a reduced conductivity at higher temperatures. However, this is speculation and requires further study.
Conclusions
Accurate representation of bulk impedance data of single crystal YSZ samples requires the presence of a dielectric element in the equivalent circuit in addition to the usual element that represents the bulk conductivity. The circuit that best fits the bulk response is a parallel combination of the R 1 -C 1 -CPE 1 conducting element with the R 2 -C 2 dielectric element, Figure 10. R 1 represents the dc resistance of the sample and is the same as the total resistance R t obtained by conventional complex plane analysis. R 2 represents the resistance to defect complex reorientation and has similar activation energy to the total resistance at high temperatures. We are, therefore, able to determine the parameters for local hopping or dipole reorientation separate from the long range, conductivity parameters.
Previous studies on YSZ ceramics showed the need for inclusion of the dipole element but a full assessment of the most appropriate circuit to represent the data was not made. 20 Here we show that, with single crystal data and no contribution from grain boundary impedances, it is possible to identify unambiguously the most appropriate equivalent circuit. Circuit B is also the most logical circuit since it represents the two parallel processes of conduction and dielectric relaxation.
Choice of the most appropriate equivalent circuit to fit and analyse data requires data presentation in numerous ways so as to give equal weighting to all impedance components over the entire frequency range. Conventional impedance complex plane plots on linear scales, Z'' vs Z', which have been widely used previously to analyse YSZ impedance data, are insensitive to impedance phenomena at high frequencies and were unable to discriminate between the various equivalent circuits that were considered and tested. It was found to be particularly useful to present impedance data as log Y' vs log f, which showed the distribution of conductivities, and log C' vs log f, which showed the distribution of capacitances. These presentations were sensitive to additional impedance components because the equivalent circuit, Figure 10, has a parallel combination of contributing elements that are best separated using admittance-based formalisms; this is an example of the truism that admittances add in parallel whereas impedances add in series.
As far as we are aware, the contribution of short range dielectric processes in parallel with long range ionic conduction has not been well-recognized previously in the analysis of impedance data of YSZ. However, the occurrence of series-based, local ac conduction processes as part of overall, long range dc conduction is widely recognized and gives rise to the frequency-dependent, power law ac conductivity at high frequencies, such as shown in Figures 1b, 3c, etc. Such processes are usually represented by a CPE, which can be deconvoluted into resistive and capacitive components, whose relative contribution is given by the CPE parameter, n. From the present results, both CPE1 and the dielectric processes, R2-C2, contribute to the overall impedance response of YSZ materials.
The activation energy for σ2, which represents dipole reorientation, is similar to that of σt at high temperatures, where it is presumed that oxygen vacancies require no dissociation energy in order to move. The higher activation energy of σt at low temperatures therefore contains a contribution from dipole dissociation, estimated at ∼0.2 eV. This simple model of dipole dissociation needs modification to take account of, first, structural studies of defect complexes, including temperature-dependent cluster formation in diffuse scattering neutron diffraction data and, second, the high concentration of dopants and oxygen vacancies, which greatly exceeds the limit for considerations using dilute defect equilibria.
These results on a single crystal sample show that the bulk response contains two components, representing dielectric and conduction processes. Recognition and modelling of this complexity may help to shed light on grain boundary contributions to the impedance of ceramic samples. The intermediate frequency capacitance plateau that we identify with the dielectric component C2 has been in evidence in the impedance response of numerous other single crystal and ceramic samples,35,36 not only of YSZ; it may therefore be a common feature of the impedance data of many ionic conductors.
"year": 2018,
"sha1": "68924da6fee16e992790341fd359f9fa3b99b0ce",
"oa_license": "CCBY",
"oa_url": "https://zenodo.org/record/1411104/files/J.%20Electrochem.%20Soc.-2018-Vendrell-F966-75.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "d80a43fdacc274e90f219ecf59541269a4daff5a",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Nicotine excites VIP interneurons to disinhibit pyramidal neurons in auditory cortex
Abstract Nicotine activates nicotinic acetylcholine receptors and improves cognitive and sensory function, in part by its actions in cortical regions. Physiological studies show that nicotine amplifies stimulus‐evoked responses in sensory cortex, potentially contributing to enhancement of sensory processing. However, the role of specific cell types and circuits in the nicotinic modulation of sensory cortex remains unclear. Here, we performed whole‐cell recordings from pyramidal (Pyr) neurons and inhibitory interneurons expressing parvalbumin (PV), somatostatin (SOM), and vasoactive intestinal peptide (VIP) in mouse auditory cortex, in vitro. Bath application of nicotine strongly depolarized and excited VIP neurons, weakly depolarized Pyr neurons, and had no effect on the membrane potential of SOM or PV neurons. The use of receptor antagonists showed that nicotine's effects on VIP and Pyr neurons were direct and indirect, respectively. Nicotine also enhanced the frequency of spontaneous inhibitory postsynaptic currents (sIPSCs) in Pyr, VIP, and SOM, but not PV, cells. Using Designer Receptors Exclusively Activated by Designer Drugs (DREADDs), we show that chemogenetic inhibition of VIP neurons prevents nicotine's effects on Pyr neurons. Since VIP cells preferentially contact other inhibitory interneurons, we suggest that nicotine drives VIP cell firing to disinhibit Pyr cell somata, potentially making Pyr cells more responsive to auditory stimuli. In parallel, activation of VIP cells also directly inhibits Pyr neurons, likely altering integration of other synaptic inputs. These cellular and synaptic mechanisms likely contribute to nicotine's beneficial effects on cognitive and sensory function.
Here, we performed whole-cell recordings from Pyr, VIP, SOM, and PV cells in acute brain slices containing mouse auditory cortex to examine the cell-type specificity of nicotinic modulation. We found that nicotine weakly depolarizes Pyr cells, while potently depolarizing and exciting VIP cells. Additionally, using Designer Receptors Activated by Designer Drugs (DREADDs) to silence VIP cell activity, we found that VIP neurons mediate nicotinic effects on Pyr cells. Thus, nicotine-induced, VIP-mediated disinhibition of Pyr neurons likely leads to the increased responsiveness observed in other studies, providing a probable mechanism for nicotine's beneficial effects on cognitive and sensory function.
| Animals
Male and female mice, 25-50 days old, were used for all experiments. The care and use of mice were approved by the University of California, Irvine Institutional Animal Care and Use Committee. To identify interneuron subtypes for recording, we used three different mouse lines that expressed the fluorescent protein tdTomato under interneuron-specific promoters. For VIP, SOM, and PV cells, we crossed the respective homozygous mice VIP-ires-cre (VIP tm1(cre)Zjh ), SOM-ires-cre (Sst tm2.1(cre)Zjh ), or PV-ires-cre (Pvalb tm1(cre)Arb ) with the homozygous tdTomato reporter mouse Ai9 (B6.Cg-Gt(ROSA)26Sor tm9(CAG-tdTomato)/Hze ). All mice were obtained from The Jackson Laboratory. In one animal, immunohistochemistry was used to confirm near-complete overlap between tdTomato fluorescence and anti-VIP antibodies for neurons in auditory cortex (VIP antibody H-6, sc-25347, Santa Cruz Biotech). To generate mice for injection of DREADDs, we crossed homozygous VIP-ires-cre mice with FVB mice. Recordings from Pyr cells were performed in either FVB mice or the offspring of VIP-ires-cre/FVB mice.
| Electrophysiology
Slices were transferred to a submersion chamber for recording and maintained in continuous bath flow of ACSF (~2.5-3 ml/min) at room temperature. Whole-cell recordings were obtained with patch pipettes (1.5-5 MΩ) filled with either a K + -based solution (for current-clamp recordings) containing (in mM) 135 K-gluconate, 1 KCl, 2 MgCl 2 , 1 Na-ATP, 0.5 Na-GTP, 1 EGTA, 10 HEPES, or a Cs + -based solution (for voltage-clamp recordings) containing (in mM) 135 CsMeSO 4 , 5 CsCl, 2 MgCl 2 , 1 Na-ATP, 0.5 Na-GTP, 1 EGTA, 10 HEPES (pH 7.3 and 270 mOsm). Responses were acquired in voltage-clamp or current-clamp mode with a MultiClamp 700B amplifier (Molecular Devices) and AxoGraph software. Signals were amplified and low-pass filtered at 2 kHz and digitally sampled at 10 kHz. Series resistance (6-15 MΩ) was continuously monitored, and data were discarded if the resistances changed more than 30%. Voltages were not adjusted to compensate for the liquid junction potential (~10 mV). Neurons were visualized using infrared differential interference contrast (IR-DIC) and fluorescence (Zeiss Axioskop 2). The recording location in auditory cortex was based on previous studies in the mouse thalamocortical slice (Cruikshank et al., 2002) and confirmed in some recordings by a short-latency response to stimulation of the thalamocortical pathway.
For current-clamp recordings, neurons were selected only if the resting membrane potential was negative to −50 mV and experiments were conducted at resting membrane potential. For voltage-clamp recordings of isolated inhibitory postsynaptic currents (IPSCs), the reversal potential for excitatory postsynaptic currents (EPSCs) was presumed to be around 0 mV. Recordings at 0 mV contained small negative amplitude spontaneous currents in addition to the large positive amplitude currents; these negative currents are likely spontaneous EPSCs and confirm that the positive amplitude currents are exclusively IPSCs. To isolate EPSCs, we first estimated the stimulus-evoked IPSC reversal potential in 1 mV steps in a subset of neurons and obtained a value of ~−52 mV; this clamp potential was then used for EPSC measurements.
| Viral infusion
Three-week-old male and female hemizygous VIP-Cre mice received 2 × 0.5-μL unilateral infusions to auditory cortex (from bregma:
| Immunohistochemistry
Mice were anesthetized with 50 mg/kg sodium pentobarbital and perfused with ice-cold 0.1 M PBS and 4% paraformaldehyde. Brains were harvested, soaked in 4% paraformaldehyde for 24 hr at 4°C, and cryoprotected in 30% sucrose at 4°C until completely submerged. Tissue was then flash frozen in dry ice-chilled isopentane and 40 μm histological sections containing auditory cortex were collected using a Leica CM 1850 cryostat. Following secondary incubation, tissue was washed in 0.1% PBS-Tween20 and incubated in DAPI [1:15000] in 0.1 M PBS. Sections were then slide mounted and coverslipped using VectaShield Mounting Medium (product #H-1000).
Membrane potential, spontaneous IPSC/EPSC frequency and amplitude, and firing rate were all determined from a 1-min recording span; pre-nicotine data were measured from the 1 min immediately prior to nicotine application and nicotine data were measured from 4 to 5 min after the start of nicotine application. Paired statistical tests (e.g., paired t tests) were used to compare pre-nicotine to nicotine data, with the exact test along with the number of cells and number of animals reported for each comparison in the Results. Membrane potential was the average over the 1-min period. Nicotinic effects on membrane potential were expressed in the figures as "Depolarization", i.e., the nicotine membrane potential minus the pre-nicotine membrane potential. IPSC/EPSC frequency (in Hertz) was determined from the number of events during the 1 min.
Amplitude for spontaneous IPSCs/EPSCs was determined by measuring peak amplitude for each event, then averaging all event amplitudes over the 1 min. The amplitude for evoked IPSCs/EPSCs was measured as peak amplitude over the period 0-100 ms after stimulation, averaged from 5 to 10 repetitions.
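As an illustration of this analysis pipeline (a hypothetical sketch with made-up numbers, not the authors' code or data), the pre-nicotine vs nicotine comparison could be computed as:

```python
import numpy as np
from scipy import stats

def compare_pre_vs_nicotine(pre, during):
    """Paired comparison of per-cell measures from the pre-nicotine and nicotine windows."""
    pre = np.asarray(pre, dtype=float)
    during = np.asarray(during, dtype=float)
    change = during - pre                          # e.g. "Depolarization" for membrane potential
    t_stat, p_value = stats.ttest_rel(during, pre)  # paired t test across cells
    return change.mean(), t_stat, p_value

# Hypothetical membrane potentials (mV) for three cells:
mean_change, t, p = compare_pre_vs_nicotine(pre=[-65.0, -63.0, -67.0],
                                            during=[-55.0, -52.0, -58.0])
```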
| RESULTS
Nicotine selectively modulates specific cell types, a feature that is likely critical to understanding the neural basis of nicotinic effects (Gulledge et al., 2007;Porter et al., 1999). To determine the specificity of nicotine in auditory cortex, we obtained whole-cell recordings from four nonoverlapping classes of cells that constitute the majority of cortical neurons: Pyr, VIP, SOM, and PV neurons. Pyr cells were identified by their pyramidal-shaped soma and prominent apical dendrite. VIP, SOM, and PV cells were identified by crossing VIP-Cre, SOM-Cre, or PV-Cre mice with the Cre reporter mouse Ai9, thus conferring tdTomato fluorescence to one cell type in each experiment.
Slices were taken from 25-to 50 day-old mice and although P25-P50 represents a time of continuing cortical and nAChR development (Kawai, Kang, & Metherate, 2011;Slotkin, 2002), we found no correlation of major nicotine effects with age (details below). All recordings were performed in the auditory cortex in a thalamocortical slice preparation, and each data set includes cells from all cortical layers; nicotine was bath-applied (1 μΜ).
| Nicotine selectively depolarizes VIP and Pyr neurons
First, we examined how nicotine alters the membrane potential of each cell type using current-clamp whole-cell recordings and a K+-based pipette solution. We found that nicotine weakly but consistently depolarized Pyr cells an average of 1.39 ± 0.22 mV (Figure 1a,b), and this depolarization was stronger in deeper layers. Furthermore, nicotine strongly depolarized VIP interneurons an average of 10.76 ± 1.08 mV (Figure 1a,b; paired t test: n = 33 cells, 14 mice, t (32) = 9.94, p < 0.0001). This potent response caused a majority (22/33) of VIP cells to fire action potentials, as reflected in spike frequency measures before and after nicotine application (Figure 1d, left; paired t test: n = 33 cells, 14 mice, t (32) = 3.64, p = 0.0009). [Figure 1 caption fragment: (d) Nicotine increased action potential firing rate in VIP cells (left) and firing appears to peak after several minutes (right, firing rate examined in 30 s bins). For statistical comparisons in this and subsequent figures, asterisks indicate: *p < 0.05, **p < 0.01, ***p < 0.001.]
In contrast to its effects on Pyr and VIP cells, nicotine did not alter the membrane potential of SOM or PV neurons (Figure 1a,b; paired t test: SOM n = 13 cells, 5 mice, t (12) = 0.95, p = 0.36; PV n = 13 cells, 5 mice, t (12) = 0.12, p = 0.90). Therefore, nicotine selectively depolarizes VIP and Pyr cells across cortical layers, with the most powerful effect being on VIP cells.
| Nicotine directly depolarizes VIP neurons via β2-containing nAChRs and indirectly depolarizes Pyr cells
To determine if the depolarization of Pyr and VIP cells resulted from direct activation of nAChRs located on these cell types, we applied nicotine after blocking synaptic activity. We bath-applied 10 μΜ CNQX and 50 μΜ PTX for 7-10 min prior to nicotine, to block AMPA and GABA-A receptors, respectively. CNQX and PTX prevented nicotinic depolarization of Pyr cells (Figure 2a; paired t test: n = 7 cells, 2 mice, t (6) = 1.99, p = 0.09), suggesting that nicotine's effects occurred indirectly. However, nicotinic depolarization of VIP cells persisted in the presence of CNQX and PTX, implying direct nAChR activation (Figure 2a; paired t test: n = 12 cells, 6 mice, t (11) = 6.21, p < 0.0001).
Overall, the results indicate that nicotine indirectly depolarizes Pyr cells and directly depolarizes VIP neurons via β2-containing, but not α7, nAChRs.
More notable was nicotine's effect on SOM cells. Prior to nicotine application, SOM neurons had few sIPSCs, e.g., relative to Pyr cells (Figure 3a,b; unpaired t test comparing frequency: t (30) = 3.42, p = 0.0019). However, nicotine strongly increased sIPSC frequency in SOM cells (Figure 3a,b; paired t test: n = 6 cells, 3 mice, t (5) = 4.19, p = 0.0086). Although the degree of nicotinic effects on sIPSCs varied considerably among Pyr, VIP, and SOM cells, the enhancement occurred in almost all of these cells, as seen in individual plots (Figure 3c). The time course of nicotine effects was slow, on the order of minutes, similar to that of VIP cell depolarization and spiking (Figure 1). Finally, nicotine had no effect on the mean amplitude of sIPSCs in any cell type (Figure 3b; paired t test: Pyr t (25) = 1.09, p = 0.29; VIP t (9) = 1.39, p = 0.20; SOM t (5) = 0.46, p = 0.67; PV t (7) = 0.24, p = 0.82), although we did observe in some cells that the largest amplitude responses appeared only with nicotine application.
We additionally examined spontaneous excitatory postsynaptic currents (sEPSCs) in Pyr cells by clamping the membrane potential at −52 mV, the observed reversal potential of stimulus-evoked IPSCs (see Methods). Nicotine had no effect on the frequency or amplitude of sEPSCs (Figure 4a,b). Finally, we recorded thalamic afferent-evoked EPSCs and IPSCs by placing a stimulating electrode in the thalamocortical pathway and stimulating with above-minimal intensities. As with spontaneous events, evoked IPSCs were recorded at 0 mV and evoked EPSCs at −52 mV.
| Nicotine disinhibits Pyr neurons via VIP interneurons
Recent studies have shown that VIP interneurons preferentially inhibit other inhibitory interneurons that, in turn, inhibit Pyr cells. Consequently, VIP cell activation results in the disinhibition of Pyr cells (Lee et al., 2013;Pfeffer et al., 2013). Since our results show that nicotine directly activates VIP cells and indirectly depolarizes Pyr cells, it is possible that nicotinic depolarization of Pyr cells depends on activation of VIP interneurons. To address this, we silenced VIP interneurons using inhibitory DREADDs that primarily prevent synaptic release of neurotransmitter from HM4D-expressing cells (Amat et al., 2017;Lichtenberg et al., 2017;Stachniak, Ghosh, & Sternson, 2014). Cre-inducible AAV hM4D viruses were injected into the auditory cortex of VIP-Cre mice (Figure 5a), and inhibitory DREADDs expressed in VIP neurons were activated by the agonist CNO (100 nM). We initially used higher concentrations of CNO (1-10 μM) as in prior electrophysiology studies (Alexander et al., 2009;Krashes et al., 2011;Urban et al., 2016), but found in control studies that these higher concentrations depolarized about half of pyramidal cells 1-2 mV (data not shown).
| DISCUSSION
In this study, we examined the cell-type specificity of nicotine's effects in auditory cortex to reveal three key findings: (a) Nicotine depolarizes cell types selectively; i.e., nicotine weakly depolarizes Pyr cells, strongly depolarizes and excites VIP cells, and does not alter membrane potential of SOM or PV cells. (b) Nicotine enhances the frequency of sIPSCs selectively; i.e., weakly in Pyr and VIP cells, strongly in SOM cells and not at all in PV cells. (c) Nicotine-induced depolarization and enhanced sIPSC frequency in Pyr cells require activation of VIP neurons, implicating nicotinic activation of a disinhibitory neural circuit as well as direct VIP neuron projections to Pyr neurons.
It is important to note that in our study nicotine bath application lasts several minutes, whereas many studies use rapid and brief application to avoid desensitizing nAChRs. Thus, our results-sustained activation of VIP cells and elevation of sIPSC frequency-reflect weakly desensitizing or non-desensitizing effects of nicotine that are relevant to understanding the effects of in vivo systemic administration of nicotine (which is our rationale for using bath application). Conversely, endogenous ACh activation of nAChRs presents a more complex picture, with both phasic and tonic actions contributing to effects (Klinkenberg, Sambeth, & Blokland, 2011;Sarter, Parikh, & Howe, 2009).
| Nicotine's effects on Pyr cells
We observed that nicotine weakly and indirectly depolarizes Pyr cells in cortical layers 2-6. Other studies similarly find little evidence of direct nAChR activation on Pyr cells (Christophe et al., 2002;Disney et al., 2007;Gulledge et al., 2007). A few exceptions include direct nAChR activation of L5 and L6 Pyr cells, although these studies used higher concentrations of nicotine (10 μM-1 mM) or ACh (1 mM) and were performed in regions other than auditory cortex (Kassam, Herman, Goodfellow, Alves, & Lambe, 2008;Zolles et al., 2009). It is possible that some Pyr cells in auditory cortex, especially in deeper layers, express nAChRs that may respond to higher concentrations or rapid application of nicotine. [Figure 4 caption: Nicotine had no effect on EPSCs or evoked IPSCs in Pyr neurons. (a) Example recording of sEPSCs in a Pyr neuron (holding potential, −52 mV, see Results). (b) Group data showing that nicotine had no effect on the frequency (left) or amplitude (right) of sEPSCs. (c) Example recordings of evoked IPSCs (top, holding potential 0 mV) and evoked EPSCs (bottom, holding potential −52 mV) in two separate Pyr neurons. (d) Group data showing that nicotine had no effect on the peak amplitude of evoked EPSCs or evoked IPSCs.]
We also found that nicotine enhances the frequency of sIPSCs in Pyr cells, consistent with prior studies (Couey et al., 2007). Our experiments additionally show that this enhancement is mediated through VIP cells, presumably by the weak but direct VIP projection to Pyr cells seen in previous studies (Lee et al., 2013;Pfeffer et al., 2013).
Although we saw no change in the amplitude of thalamic afferent-evoked responses under voltage clamp, this does not preclude the ability of nicotine to modify cortical responses to sensory stimuli in vivo, as seen previously (Askew, Intskirveli, & Metherate, 2017;Intskirveli & Metherate, 2012). Rather, when not under voltage-clamp control, nicotinic depolarization of Pyr cells would move the membrane potential closer to spike threshold, resulting in heightened responsiveness and potentially contributing to the increased gain of acoustic responses observed previously. Also, the nicotinic effects on neural circuit dynamics observed here and in prior in vitro studies will likely have complex outcomes for intracortical processing of sensory stimuli, potentially contributing to the increased gain within narrowed acoustic receptive fields observed previously (Askew et al., 2017;Intskirveli & Metherate, 2012).
| Nicotine's effects on interneurons
We found that nicotine has distinct effects on specific interneuron types. Nicotine strongly depolarizes and excites VIP cells via β2-containing nAChRs, while having no effect on SOM or PV cell membrane potential. Also, nicotine potently enhanced the frequency of sIPSCs in SOM neurons, weakly enhanced the frequency of sIPSCs in VIP cells, and had no effect on sIPSCs in PV cells.
VIP cell activation increases evoked responses in visual cortex (Fu et al., 2014) and in frontal cortex improves behavioral performance in a memory-dependent task (Kamigaki & Dan, 2017), so nicotinic activation of VIP cells may have similar systems-level effects. Our experiments extend these results by demonstrating direct and potent VIP cell depolarization and sustained spiking by low concentrations of nicotine relevant to therapeutic administration (Newhouse et al., 2012;Rezvani & Levin, 2001), as well as revealing functional consequences for neural circuitry. [Figure 5 caption: Nicotine depolarized and enhanced sIPSC frequency in Pyr neurons via VIP neurons. (a) Coronal section with immunohistochemistry against DAPI (blue), mCherry from the HM4D construct (red), and VIP (green) in HM4D-transduced auditory cortex. Inset shows co-labeling of mCherry and VIP in an example cell. (b) Example recordings from Pyr neurons in HM4D-expressing mice demonstrating that CNO application prevents nicotine's effects on membrane potential (top) and sIPSCs (bottom). (c) Group data from Pyr cells in HM4D-expressing mice; nicotine depolarized Pyr cells and CNO prevented the nicotinic depolarization of Pyr cells. (d) Group data from Pyr cells in HM4D-expressing mice; nicotine enhanced the frequency of sIPSCs and CNO prevented the nicotinic enhancement of sIPSC frequency.]
PV and SOM cells generally do not seem to express nAChRs, although there is some indication that subpopulations within these groups may contain nAChRs (Gulledge et al., 2007;Porter et al., 1999). One study demonstrated L2/3 SOM neurons with functional nAChRs and fast-spiking (presumably PV) neurons expressing α7 receptors (Poorthuis et al., 2013). However, as mentioned above, rapid desensitization of α7 nAChRs would preclude observation of effects in the present study with bath application of nicotine. Moreover, in hippocampus, a subpopulation of putative SOM interneurons exhibit a non-desensitizing α2 nAChR-mediated response to bath application of 1 μM nicotine (Jia, Yamazaki, Nakauchi, & Sumikawa, 2009), though in our study such depolarization might be masked by counteracting enhanced inhibition. Further studies of specific interneuron subtypes may be needed to resolve these discrepancies.
There is substantial evidence that VIP neurons preferentially innervate and inhibit SOM cells, consistent with the powerful, nicotine-induced enhancement of sIPSC frequency that we observed in SOM cells (Lee et al., 2013;Pfeffer et al., 2013;Pi et al., 2013). Although we did not directly demonstrate VIP involvement, it appears likely that the nicotinic excitation of VIP cells causes the sIPSC enhancement in SOM neurons.
| VIP interneuron-mediated inhibitory mechanisms
Nicotine increased the frequency of sIPSCs in three cell types, yet we found no evidence of a corresponding hyperpolarization that might be expected with enhanced inhibitory input. This could be due to the space-clamp limitations of our current-clamp recordings; i.e., cortical cells are known to have extensive dendritic processes, which may not be accurately sampled by somatic recordings, especially with K+-based internal solutions. If VIP cells primarily innervate distal dendrites of their postsynaptic targets, the inhibition and hyperpolarization evoked by VIP cells might remain localized to this cellular compartment. Some SOM neurons are characterized by this type of specificity and their preferential inhibition of Pyr cell distal dendrites is thought to alter the balance of synaptic integration (Di Cristo et al., 2004). VIP cells may similarly target dendritic processes in select cell types.
In fact, in voltage-clamp recordings (using a Cs+-based solution that blocks potassium channels and reduces space-clamp error) nicotine does appear to alter the baseline holding current in SOM cells (see example in Figure 3a). This change in baseline holding current may reflect a small hyperpolarization that cannot be seen with the K+-based solution, suggesting that alterations in the membrane potential of SOM neurons may occur distant from the soma. On the other hand, previous studies found that VIP cells target both the dendrites and soma of Pyr cells (Kawaguchi & Kubota, 1996, 1997). Even though direct VIP inhibition of Pyr cells is weak, it is possible that it still results in the hyperpolarization of Pyr cells. Yet it appears in our experiments and in other studies that the predominant effect of exciting VIP interneurons is disinhibition (Fu et al., 2014;Lee et al., 2013;Pfeffer et al., 2013;Pi et al., 2013), thus any direct hyperpolarization of Pyr cells may be overridden by the counteracting depolarization.
| Disinhibition of Pyr cells by VIP neurons
We also observed that nicotine depolarizes Pyr cells via activation of VIP interneurons. This finding supports growing evidence that activating VIP neurons exerts a disinhibitory effect on Pyr cells. In the auditory cortex of awake mice, optogenetic activation of VIP neurons suppresses SOM cells and increases tone-evoked responses in principal neurons (Pi et al., 2013). A similar effect occurs in visual cortex, where VIP cell activation also enhances sensory-evoked responses (Fu et al., 2014). Given that VIP cells strongly inhibit SOM cells (Lee et al., 2013;Pfeffer et al., 2013;Pi et al., 2013), it is probable that the nicotinic disinhibition of Pyr cells involves SOM cells that tonically inhibit Pyr cells (Gentet et al., 2012). That is, in our experiments, nicotine activates VIP cells which then inhibit SOM cells, causing a release from tonic inhibition of Pyr cells.
| Conclusions and broader implications
We conclude that non-desensitizing (or weakly desensitizing) effects of nicotine selectively excite VIP neurons in auditory cortex to directly inhibit VIP, SOM, and Pyr neurons, and indirectly disinhibit Pyr cells. VIP cell excitation may alter cortical processing by making Pyr cells more responsive to inputs near the soma, i.e., the site of nicotine-induced depolarization, and less responsive to inputs near the site of direct inhibition. Presumed direct VIP neuron projections to interneurons and Pyr cells produced sIPSCs but not somatic hyperpolarization, suggesting that the projections are to distal dendrites (particularly in SOM neurons; in VIP and Pyr neurons somatic inhibition may have been masked by depolarization). It is not clear how inhibition of distal dendrites would alter intracortical processing, but the overall result of nicotine's actions may contribute to increased Pyr neuron responsiveness and selectivity to acoustic inputs (Askew et al., 2017;Intskirveli & Metherate, 2012).
Given the complexity of nAChR-mediated cellular actions in this and prior studies, a full understanding of nicotinic regulation will require integrating the contributions of diverse nAChRs with varying subunit composition, cellular distribution, and response to agonist (Gil & Metherate, 2019;Poorthuis et al., 2013). Overall, nicotinic modulation may serve as a preparatory mechanism for incoming input, resulting in improved cortical processing. Although these experiments were performed in auditory cortex, other cortical regions may contain similar networks and nAChR functionality. Thus, these data provide insight into potential mechanisms underlying the pro-cognitive and sensory processing effects of nicotine in multiple cortical regions.
CONFLICT OF INTEREST
The authors declare no competing financial interests. The data that support the findings of this study are available from the corresponding author upon reasonable request. | 2019-05-14T14:01:47.822Z | 2019-05-17T00:00:00.000 | {
"year": 2019,
"sha1": "b0d8edf51c04455b73ae014668d7dbf3c3e39531",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/syn.22116",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "72d9e88e4f0df043db6585a9dd579d4ff7a5710d",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
265242574 | pes2o/s2orc | v3-fos-license | Thermo-pH-Salt Environmental Terpolymers Influenced by 2-((Dimethylamino)methyl)-4-methylphenyl Acrylate: A Comparative Study for Tuning Phase Separation Temperature
This study offers a comparison between three different types of thermoresponsive (TR) and thermo-pH-salt (TPR) multiresponsive polymers including homopoly(N-isopropylacrylamide) (PNIPAAm), copolymers with three different monomers, 2-hydroxyethyl methacrylate (HEMA), N,N-dimethylacrylamide (DMAAm), and styrene (S) at three different concentrations (5, 10, and 20 mol %), and a PNIPAAm terpolymer with 5, 10, and 20 mol % 2-((dimethylamino)methyl)-4-methylphenyl acrylate (DMAMCA) and 10 mol % HEMA, DMAAm, and S monomers. All polymers were chemically analyzed with 1H NMR and Fourier transform infrared spectroscopy (FT-IR) as well as gel permeation chromatography (GPC) for the molecular weights and dispersity and differential scanning calorimeter (DSC) for the glass transition temperatures. The cloud point, also known as the phase separation temperature (Cp), was determined for all polymers by a turbidity test using a UV–vis spectrophotometer; a micro-differential scanning calorimeter was used for measuring the cloud point in deionized water. The influence of a tertiary amine cationic group of DMAMCA changed the behavior of TR copolymers into TPR by shifting the cloud point of the TPR to higher values in acidic solutions (lower pH) and to lower values in alkaline solutions. The Cp was measured at different concentrations of Hofmeister kosmotropic and chaotropic anion salt solutions in a range of pH solutions for the terpolymers. It demonstrated the same behavior as mentioned in pH solutions besides the effect of salt ions. By measuring the Tc and Cp of these polymers, we can exploit various applications of stimuli-responsive materials for sensors and biomedical technology.
INTRODUCTION
The importance of smart materials is increasing daily due to their multiple uses in the medical and biological fields.1,2−16 Poly(N-isopropylacrylamide) (PNIPAAm) is the most popular material among the thermoresponsive (TR) group. It possesses both hydrophilic and hydrophobic groups, which result in phase separation of the polymer solution by raising its temperature above a lower critical solution temperature (LCST) (32 °C) due to a change from a hydration (hydrophilic) to dehydration (hydrophobic) state.
Recent works have studied the factors that affect the hydrophilicity and hydrophobicity of PNIPAAm and hence the LCST.17−25 Copolymerization with 2-hydroxyethyl methacrylate (HEMA) has been widely studied due to its good mechanical properties and functionality.26−29 A recent study reported a method to control the responsive temperature of the poly(NIPAM-co-HEMA) copolymer via ultrasonic polymerization.30 The incorporation of N,N-dimethylacrylamide into the PNIPAAm chain has been used to improve the hydrophilicity of the copolymer solution and therefore raise the LCST above that of homo-PNIPAAm.31 Copolymerization of NIPAAm and DMAAm was reported using different polymerization mechanisms, such as free radical polymerization and controlled radical polymerization.32,33 A styrene monomer was copolymerized with NIPAAm to tune the LCST of the copolymer solution.
Poly(NIPAAm-co-styrene) copolymers were synthesized using styrene to control the hydrophilicity of the PNIPAAm main chain and for their biological applications.34,35 Another important category of smart polymers is pH-responsive (PR) polymers.−41 The pH poly(cations) include monomers with pyridine or tertiary amine groups, e.g., poly[2-(diethylamino)ethyl methacrylate] (PDMA), poly[2-(diisopropylamino)ethyl methacrylate] (PDEA), and poly(4-vinylpyridine) (P4VP),42−44 whereas the pH poly(anions) include poly(acrylic acid) (PAA), poly(methacrylic acid) (PMAA), poly(4-styrenesulfonic acid), and phosphorus-containing (meth)acrylate monomers.45,46 These smart polymers are particularly attractive in environmental and biomedical applications which involve changes in pH that take place in several specific or pathological sites.47,48−61 For example, p-cresol (4-hydroxytoluene) has been used in preparing antioxidants, in addition to nonstaining surfaces.61−64 The Hofmeister series, introduced by Franz Hofmeister, describes the influence of salt ions (cations and anions) on the precipitation of proteins.65,66 Anions were classified according to the strength of their interaction with water into weakly interacting (chaotropic) anions, e.g., Cl−, NO3−, and SCN−, and strongly interacting anions, e.g., SO4 2−, which break the hydrogen bonds and are the so-called kosmotropes.67,68−75 In this study, a facile method was used to prepare a new cationic monomer. A series of hydrophilic/hydrophobic thermoresponsive copolymers were fabricated. Another new series of thermo-pH terpolymers was prepared using the same monomers with incorporation of the cationic monomer 2-((dimethylamino)methyl)-4-methylphenyl acrylate (DMAMCA) in the polymer chain. The study focused on the changes that resulted from the addition of the new monomer, with emphasis on tuning the lower critical solution temperatures.
Instrumentation for Chemical and Physical Investigations. 2.2.1. Chemical Evaluations.
A Bruker Avance high-resolution nuclear magnetic resonance (NMR) spectrometer system, operating at 500 and 125 MHz and connected to helium recycling, was used for detecting 1H and 13C NMR. Samples of monomers and polymers were dissolved in deuterated chloroform (CDCl3, 99.8 atom % D, Sigma-Aldrich) or dimethyl sulfoxide-d6 (DMSO-d6, 99.9 atom % D, Merck). A Bruker VERTEX FT-IR instrument was used to conduct Fourier transform infrared spectroscopy (FT-IR) for all samples of monomers and polymers via a Vertex 70 high-performance, wide-range instrument, from 400 to 4000 cm−1.
2.2.2. Molecular Weight and Dispersity. The molecular weights, including the weight-average molecular weight Mw and the number-average molecular weight Mn, as well as the dispersity (Đ) of both the copolymer and terpolymer samples, were measured by gel permeation chromatography (GPC); the samples were dissolved in THF (16 mg/mL), which also served as the eluent. Polystyrene (PS, Mw = 1300, Đ = 1.06) was used as the calibration standard.
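As a simple numerical illustration of these quantities (my own sketch, not part of the experimental workflow, using made-up chain masses rather than the measured GPC data), the following Python snippet computes Mn, Mw, and the dispersity Đ = Mw/Mn from a hypothetical distribution of chains.

def molecular_weight_averages(chains):
    """chains: iterable of (molar_mass, number_of_chains) pairs -> (Mn, Mw, dispersity)."""
    total_n = sum(n for _, n in chains)
    total_nM = sum(n * M for M, n in chains)
    total_nM2 = sum(n * M * M for M, n in chains)
    Mn = total_nM / total_n    # number-average: sum(n_i * M_i) / sum(n_i)
    Mw = total_nM2 / total_nM  # weight-average: sum(n_i * M_i^2) / sum(n_i * M_i)
    return Mn, Mw, Mw / Mn

# Hypothetical (illustrative) distribution, masses in g/mol
Mn, Mw, D = molecular_weight_averages([(10000, 30), (25000, 50), (60000, 20)])
print(f"Mn = {Mn:.0f} g/mol, Mw = {Mw:.0f} g/mol, dispersity = {D:.2f}")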
Physical Characterizations. 2.2.3.1. Glass Transition Temperature.
A PerkinElmer Diamond differential scanning calorimeter (DSC) was used to record the glass transition temperatures of the solid polymers (Tg) by heating the samples at 5 °C/min. The onset point on the thermogram was taken as the glass transition temperature.
Contact Angle.
The contact angle is a standard method to determine the extent of the hydrophilicity or hydrophobicity of a polymer. A drop of the liquid solution was placed on the surface of the polymer using a micropipet, and a digital camera was used to capture images of the drop as it was deposited and spread over the polymeric surface.
Lower Critical Solution Temperature (LCST) (T c ) and Cloud Point (C p ).
A PerkinElmer (Lambda 45) UV−vis spectrophotometer was used to measure the change in UV transmittance of the polymer solution with temperature; this technique is also known as turbidity measurement. The temperature was varied in the range of 10−75 °C for a 0.1 wt % polymer solution. A transmittance−temperature plot was drawn, and the cloud point (Cp) was taken at 50% transmittance. Additionally, a micro-differential scanning calorimeter (SETARAM III microcalorimeter) was used for measuring the LCSTs at the same polymer solution concentration (0.1%). The thermograms were used to determine the LCST at the onset value. The polymeric materials were dissolved in deionized water (pH 6.8−6.9) solutions and were heated/cooled at 1 °C/min.
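The cloud point read from such turbidity curves is simply the temperature at which transmittance crosses 50%. The short Python sketch below (my own illustration with hypothetical data and an assumed monotonic drop in transmittance, not the instrument software) interpolates that crossing point.

import numpy as np

def cloud_point(temps_c, transmittance_pct, threshold=50.0):
    """Return the temperature at which transmittance first falls to the threshold."""
    t = np.asarray(temps_c, dtype=float)
    tr = np.asarray(transmittance_pct, dtype=float)
    below = np.where(tr <= threshold)[0]
    if below.size == 0:
        return None  # solution never becomes turbid in the scanned range
    i = below[0]
    if i == 0:
        return t[0]
    # linear interpolation between the two bracketing data points
    frac = (threshold - tr[i - 1]) / (tr[i] - tr[i - 1])
    return t[i - 1] + frac * (t[i] - t[i - 1])

temps = [25, 28, 30, 32, 34, 36, 38]   # hypothetical heating ramp, degrees C
trans = [98, 97, 95, 80, 40, 10, 3]    # hypothetical transmittance, %
print(f"Cp ~= {cloud_point(temps, trans):.1f} C")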
Synthesis of 2-((Dimethylamino)methyl)-4-methylphenyl Acrylate (DMAMCA). 2.3.1. 2-((Dimethylamino)methyl)-4-methylphenol (DMAMC) (I).
A mixture of 11.0 g (0.1 mol) of p-cresol, 11.0 g (0.2 mol) of dimethylamine, and 11.0 g (0.4 mol) of formaldehyde (37%) was placed in a 250 mL two-neck round-bottom flask fitted with a water condenser and water trap. 3.0 g of sodium hydroxide was dissolved in 150 mL of ethanol and added using a funnel. The mixture was stirred and refluxed in an oil bath for 4 h at 135−140 °C. The yellowish precipitate, which appeared after 1.5 h of refluxing, was filtered off, and the crude product was recrystallized from ethanol. The product was dried under reduced pressure and was obtained as a yellowish solid; yield 95%; mp 69 °C.
1H NMR (500 MHz, CDCl3): δ (ppm) = 2.37 (s, 6H, 3J = 13.31 Hz and 3J = 8.41 Hz, j-CH3), 2.62 (s, 3H, f-CH3), 3.75 (s, 2H, i-CH2), 4.79 (br s, 1H, a-OH), 7.14 (dd, 1H, 3J = 15.65 Hz, c-Ar-H), 7.33 (dd, 2H, 3J = 9.10 Hz, d, g-Ar-H). 13C NMR (125 MHz, CDCl3).
2.3.2. 2-((Dimethylamino)methyl)-4-methylphenyl Acrylate (DMAMCA). A 5.00 g (0.02 mol) portion of 2-((dimethylamino)methyl)-4-methylphenol, compound (I), and 4 g of sodium hydroxide were suspended in 150 mL of dry dichloromethane; the mixture was transferred to a three-neck flask connected with a reflux condenser and a funnel. Nitrogen gas was purged for 15 min from the nitrogen source through a needle and a balloon. The reaction vessel was allowed to stir in an ice bath at 0−5 °C. After 20 min of stirring, a solution of 1.82 g (0.02 mol) of acryloyl chloride dissolved in 20 mL of dry dichloromethane was added dropwise. The reaction was continued in the ice bath for 1 h, and then it was stirred at room temperature for 6 h. Once the change in the reaction appeared, i.e., the yellowish color of the precipitate had changed to orange, the mixture was filtered, and the solvent was removed using a rotary evaporator. The crude product was purified by redissolving in dry CH2Cl2 and then washing with distilled water three times, followed by 0.1 M HCl, 0.1 M sodium bicarbonate solution, and again distilled water. The organic phase was collected and dried overnight with MgSO4. The solvent was evaporated to give the product as a yellowish viscous material; yield 76%.
2-((Dimethylamino)methyl)-4-methylphenyl Acrylate (DMAMCA) Monomer.
A new pH-responsive monomer was prepared from p-cresol, together with a group of thermoresponsive copolymers based on N-isopropylacrylamide with different hydrophilic/hydrophobic monomers. The new monomer was incorporated into the polymer chain in the form of copolymers and terpolymers. The new pH-responsive monomer with the tertiary amine group was synthesized as described in our previous works for analogous monomers of the same family of cationic monomers.21,22,25,26,54,76 The new monomer was named 2-((dimethylamino)methyl)-4-methylphenyl acrylate (DMAMCA), as illustrated in Scheme 1.
At first, p-cresol was reacted with formaldehyde and N,N-dimethylamine to create 2-((dimethylamino)methyl)-4-methylphenol, DMAMC (I), bearing the cationic tertiary amine group, which governs the pH-responsiveness after polymerization. The reaction was carried out under basic conditions (pH = 11.6); the reaction mechanism proceeded according to the Mannich reaction. The new compound was evaluated by 1H and 13C NMR, as shown in Figure 1A,B, as well as by FT-IR (shown in the Supporting Information). The protons and 13C signals that were detected were sufficient to confirm the chemical structure; the presence of the 6H dimethyl groups at δ = 2.37 ppm and the 2H methylene N-CH2 at δ = 2.62 ppm, with the corresponding 13C at δ = 44.32 ppm and δ = 56.01 ppm, was observed. Meanwhile, the specific methyl group and the hydroxyl group of p-cresol were revealed at δ = 2.62 and δ = 4.79 ppm and, consequently, the corresponding 13C at δ = 25.48 and 154.54 ppm. The FT-IR spectrum showed absorptions of several groups, specifically the peaks of the hydroxyl group (OH) at ν = 3355−3576 cm−1 and, at 2170−2295 cm−1, the stretched (C−N) of the −NCH2(CH3)2 tertiary amine group. Ultimately, the synthetic product 2-((dimethylamino)methyl)-4-methylphenyl acrylate (DMAMCA) was obtained by the reaction of compound (I) under basic conditions with acryloyl chloride, as described in Scheme 1. The reaction was carried out in an inert and cool atmosphere during the addition of the acid chloride.
Characterization of DMAMCA was performed by 1H and 13C NMR and FT-IR as previously described. The most essential protons and 13C signals of the chemical structure were detected; the formation of the vinyl group was observed clearly at δ = 5.91, 6.23, and 6.51 ppm, and it was also detected via the 13C at δ = 126.05 and 138.83 ppm. Further 1H and 13C signals were recorded for the tertiary amine group: the Ar-N-CH2(CH3)2 methylene at δ = 3.73 ppm with the corresponding 13C at δ = 56.09 ppm, and the 6H dimethyl groups of Ar-N-CH2(CH3)2 at δ = 2.44 ppm with 13C at δ = 58.09 and 46.18 ppm, as illustrated in Figure 1C,D. The absorptions of the functional groups were detected by the FT-IR spectrum, for instance, the stretched (C−N) of the −NCH2(CH3)2 tertiary amine group at ν = 2310−2360 cm−1 and the −C=C− vinyl group at ν = 1647−1660 cm−1.
Copolymers and Terpolymers. 3.2.1. Copolymer Groups (A, B, and C) P(NIPAAm-co-HEMA), P(NIPAAm-co-DMAAm), and P(NIPAAm-co-S).
In this study, we synthesized three groups of copolymers (A, B, and C), consisting of hydrophilic or hydrophobic monomers, based on NIPAAm. For group (A), P(NIPAAm-co-HEMA), the copolymerization was performed between NIPAAm and 5, 10, and 20 mol % HEMA in the presence of AIBN (initiator) and absolute ethanol by the free radical mechanism, as illustrated in Scheme 1. The polymers were separated in diethyl ether, and the chemical structures were evaluated by 1H NMR and FT-IR. Figure 2A demonstrates the 1H NMR of the copolymers of group A. All spectra exhibited the protons expected in the polymer chain: the multiple 6H of the (−CH(CH3)2) isopropyl (NIPAAm) methyl groups at δ = 0.71−1.46 ppm, the repeating units (−CH−CH2−) at δ = 1.17−1.64 and δ = 1.67−1.81 ppm, and the 3H of the methyl group in HEMA together with the multiple methylene groups of HEMA at δ = 3.70−3.87 ppm. The distinct protons used in the calculation of the actual molar composition after the polymerization process were the 3H of the −CH3 group in HEMA and the 1H of the −CH isopropyl group of NIPAAm; the results are added in Table 1. Figure 4A shows the absorption of the functional groups as recorded by FT-IR; it exhibits the −OH group at ν = 3286−3630 cm−1, the stretched −C=O ester of −COO− at ν = 1710 and 1725 cm−1, and the stretched −CONH− in the polymer chain at ν = 1595−1610 cm−1.
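The composition calculation described above amounts to normalising each diagnostic integral by the number of protons it represents. A minimal Python sketch (my own illustration with hypothetical integral values, not the authors' processing script) is:

def comonomer_mol_percent(integral_a, protons_a, integral_b, protons_b):
    """Estimate mol % of two comonomers from 1H NMR integrals of diagnostic signals."""
    units_a = integral_a / protons_a  # relative number of monomer-A units
    units_b = integral_b / protons_b  # relative number of monomer-B units
    total = units_a + units_b
    return 100 * units_a / total, 100 * units_b / total

# Hypothetical integrals: NIPAAm isopropyl -CH- (1H) and HEMA -CH3 (3H)
nipaam_pct, hema_pct = comonomer_mol_percent(0.90, 1, 0.33, 3)
print(f"NIPAAm ~ {nipaam_pct:.1f} mol %, HEMA ~ {hema_pct:.1f} mol %")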
For group (B), the copolymerization of NIPAAm with 5, 10, and 20 mol % DMAAm in the PNIPAAm main chain was carried out. The copolymers are thermoresponsive polymers with additional hydrophilic/hydrophobic groups, which influence the general characteristics of PNIPAAm. Chemical analysis using 1H NMR shows the most essential multiple protons in the polymer main chain: at δ = 0.95−1.28 ppm the 6H of the (−CH(CH3)2) isopropyl (NIPAAm) methyl groups, the repeating −CH−CH2− at δ = 1.47−2.36 ppm, and the specific protons of DMAAm, the 6H of the dimethyl groups −N(CH3)2, at δ = 2.57−3.14 ppm; the latter are used with the isopropyl −CH− of NIPAAm to estimate the molar percent of each monomer in the copolymer chain, and these are tabulated in Table 1 and Figure 2B. The FT-IR was used to determine the absorption of the functional groups, for instance, the −NH at ν = 3280−3640 cm−1 and the stretched −C=O amide of −CONH− in the polymer chain at ν = 1695−1610 cm−1, as shown in Figure 4A. The last copolymer group (C) is based on the copolymerization of NIPAAm with styrene at 5, 10, and 20 mol %. Figure 2C illustrates the most specific multiple protons for NIPAAm and styrene in the copolymer chain at δ = 0.97−1.38 and 3.95−4.25 ppm, due to the methyl groups of the (−CH(CH3)2) isopropyl and the −CH− of NIPAAm; the aromatic Ar-H protons of styrene appeared at δ = 6.86−7.67 ppm. The latter protons were used for the determination of the chemical composition of each monomer. FT-IR was successfully used to
Molecular Weights (M n and M w ) and Dispersity (Đ).
Here, we will discuss the change in the molecular weight and the dispersity of PNIPAAm and its copolymers and terpolymers and the effect of the incorporation of HEMA, DMAAm, and S on the growth of the copolymer chains. Furthermore, the interaction between the new monomers and DMAMCA in the terpolymer chain is investigated. The molecular weights (weight-average molecular weight Mw and number-average molecular weight Mn) and the dispersity (Đ) were measured by gel permeation chromatography (GPC) with a UV spectrophotometer in tetrahydrofuran (THF) as the eluent. Figure 5A illustrates the molecular weights of homo-PNIPAAm and the PNIPAAm copolymers with 5, 10, and 20 mol % HEMA; all data are presented in Table 1. It was noticed that the chromatogram consisted of one plateau, which corroborates the vanishing of the lower molecular weight and that the polymerization was carried out successfully. The measurement of PNIPAAm yielded Mn = 23.000 g/mol, Mw = 71.000 g/mol, and Đ = 3. The copolymerization with 5, 10, and 20 mol % HEMA, VII (group A), exhibited Mn = 26.000, 25.000, and 22.000 g/mol, Mw = 67.000, 64.000, and 43.000 g/mol, and Đ = 2.6, 2.2, and 2.0, respectively. A gradual decrease in Mn, Mw, and Đ with increasing molar concentration of HEMA was observed. This can be attributed to a decrease in the reactivity of the polymer chain with increasing concentration of HEMA.54 The group (B) copolymerization with 5, 10, and 20 mol % DMAAm, as illustrated in Figure 5B, yielded Mn = 20.000, 44.000, and 30.000 g/mol, Mw = 50.000, 84.000, and 50.000 g/mol, and Đ = 2.5, 1.9, and 1.6, respectively. These results can also be interpreted as previously mentioned. The last group of PNIPAAm copolymers was prepared with styrene, which resulted in Mn = 43.000, 32.000, and 15.000 g/mol, Mw = 80.000, 76.000, and 32.000 g/mol, and Đ = 1.9, 2.4, and 2.1, respectively. A change in the dispersity value at 10 mol % styrene (IX-10) was noticed, which is higher than (IX-05) and (IX-20); the interpretation of this observation is under study.
The incorporation of 5, 10, and 20 mol % DMAMCA into the chain of the copolymers based on NIPAAm with 10 mol % HEMA, DMAAm, and S produced the terpolymer groups. The molecular weights and dispersity of the terpolymers were influenced by the changes in their chemical structure. The measurements of the first group (D) (X-10-05, X-10-10, and X-10-20) indicated a decrease in Mn (18.000, 19.000, and 13.000 g/mol), Mw (46.000, 44.000, and 28.000 g/mol), and Đ (2.5, 2.3, and 2.1) compared with their corresponding copolymers VII (A) without DMAMCA. The steric hindrance of DMAMCA restricted the free rotation and the growth of the polymer chain.54 This effect was dominant in all groups of terpolymers, resulting in lower Mn, Mw, and Đ at higher concentrations of DMAMCA, as illustrated in Figure 5A−F and Table 1. The lowest values of Mn (4.000, 4.000, and 3.000 g/mol), Mw (6.000, 8.000, and 3.000 g/mol), and Đ (1.5, 2.0, and 1.3) were observed for the terpolymers XII (F) with styrene, as shown in Figure 5F.
3.4. Glass Transition Temperature (Tg). The glass transition temperature (Tg) refers to the amorphous state of polymers, in which a polymer undergoes a transition from a glassy state to a rubbery state. It is an essential characteristic of a polymeric material at which dramatic changes in the physical and mechanical properties occur. The glass transition temperatures of all polymers used, including homo-, co-, and terpolymers, were recorded using a differential scanning calorimeter (DSC). The onset point was recorded as the glass transition temperature of the polymer.3 The effect of the addition of HEMA, DMAAm, and S to the PNIPAAm copolymers and of DMAMCA was clearly visible in the recorded Tg's of the polymeric materials. The Tg of homo-PNIPAAm was detected at 141 °C, as shown in Figure 6A and Table 1. The group of copolymers VII (A) with HEMA in the main chain revealed Tg's at 138, 133, and 126 °C for VII-05, VII-10, and VII-20, respectively, as shown in Figure 6A; a decrease in the Tg was noticed with increasing concentration of HEMA in the polymer main chain. This can be attributed to the interaction between NIPAAm and HEMA in the polymer chain, which increases the free volume in the polymer chain and, consequently, lowers Tg.26 The other groups of PNIPAAm copolymers, with DMAAm and S, VIII (B) and IX (C), exhibited Tg's at 136, 130, and 119 °C for VIII-05, VIII-10, and VIII-20, respectively, and 134, 129, and 115 °C for IX-05, IX-10, and IX-20, respectively, as illustrated in Figure 6A.
The terpolymerization and incorporation of DMAMCA in the polymer chain dramatically influenced the glass transition temperatures. Figure 6B exhibits the thermograms of the terpolymer groups with HEMA, DMAAm, and S and 5, 10, and 20 mol % DMAMCA: X (D), XI (E), and XII (F). The Tg measurements of group D are 127, 121, and 106 °C. The lower Tg values compared with group (A) (NIPAAm and HEMA only) emphasize the effect of DMAMCA on the polymer chain, i.e., the increase of the free volume in the chain with increasing concentration of DMAMCA. The next group (E) recorded Tg values of 126, 122, and 108 °C, whereas the last group (F) exhibited the lowest Tg's of all the synthesized copolymers and terpolymers, 123, 112, and 103 °C. This is due to the effect of both styrene and DMAMCA, which increases the free volume and gap distance in the polymer chain and consequently decreases the glass transition temperature with increasing molar concentration of DMAMCA, as shown in Figure 6B.
Wettability and Contact Angles of Polymers.
One of the most essential measurements of surface wettability is the contact angle (Θ). This measurement is used to determine the tendency of polymeric materials to be hydrophilic or hydrophobic. The degree of hydrophilicity and hydrophobicity of polymers and its relationship with the contact angle have been discussed in detail in a recent article;45 the contact angle for hydrophobic surfaces is found at Θ > 90°, for hydrophilic surfaces at Θ < 90°, and for superhydrophobic surfaces at Θ ≥ 145°. This study shows how the copolymerization of NIPAAm with different monomers changes its physical properties, and how the incorporation of DMAMCA in the terpolymer chain plays a vital role in changing the course of the polymer material. A recent study reported the contact angle of homo-PNIPAAm at 74.5°.72,73 The copolymer groups with HEMA, DMAAm, and S were tested at pH 6.9 (deionized water). The first group VII (A) exhibited Θ = 73, 72, and 70.5°, with a gradual decline in the contact angle with increasing HEMA in the polymer chain, as shown in Figure 7A and Table 2. This is attributed to the increase in hydrogen bonds formed by the interaction of water molecules with the hydroxyl group of HEMA and the amide group of NIPAAm, which generally increases the hydrophilicity of the polymeric material.
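A trivial helper applying the thresholds quoted above (hydrophilic below 90°, hydrophobic above 90°, superhydrophobic at or above 145°) could look like the Python sketch below; the sample angles are illustrative only.

def classify_surface(theta_deg):
    """Classify wettability from a water contact angle in degrees."""
    if theta_deg >= 145:
        return "superhydrophobic"
    if theta_deg > 90:
        return "hydrophobic"
    return "hydrophilic"

for theta in (70.5, 82, 94, 102, 150):
    print(theta, "->", classify_surface(theta))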
For the VIII (B) copolymers with DMAAm, the contact angles were measured at Θ = 74, 73, and 71.5°, similar to the previous copolymers, with a slight increase in the contact angles due to the difference in the amount of hydrophilic groups in the polymer chain and their interaction with water at the surface. The contact angle measurements for the copolymers IX (C) with styrene exhibited Θ = 76, 79, and 82°, increasing with the concentration of styrene, as shown in Figure 7A.
The contact angle (Θ) of the terpolymers was measured in various pH solutions (pH = 1.4, 3.7, 5.6, 6.9 (DI H2O), 9.7, 11.2, and 13.4). The presence of DMAMCA at different molar concentrations (5, 10, and 20 mol %) with the different monomers, HEMA (10 mol %) X (X-10-05, X-10-10, and X-10-20) (D), DMAAm (E), and S (F), revealed a significant impact on the contact angle measurements, as shown in Figure 7B and Table 2. The highest degree of hydrophilicity (lowest Θ) for the terpolymers X (D) was achieved with X-10-05, the lowest molar concentration of DMAMCA, at pH 1.4 (strongest acidic conditions), with Θ = 75°. On the other hand, the opposite behavior was observed on increasing both the molar concentration of DMAMCA and the pH of the solution up to the strongest basic condition at pH 13.4, where the polymeric material (X-10-20) gradually changed its behavior to hydrophobic. The sequence of the change in the contact angles is given in Table 2.
For the terpolymers XI (XI-10-05, XI-10-10, and XI-10-20) (E), with DMAMCA and DMAAm, the measurements exhibited values very close to those of the previous terpolymers. The lowest contact angle was observed for XI-10-05 at pH 1.4 (Θ = 76°), the most hydrophilic terpolymer in this group, and the highest contact angle was observed for XI-10-20 at pH 13.4 (Θ = 94°). The last case in this study, the terpolymers XII (XII-10-05, XII-10-10, and XII-10-20) with DMAMCA and S, was measured. The measurements revealed the highest contact angles and the lowest hydrophilicity of all of the investigated copolymers and terpolymers. The lowest contact angle (Θ = 75°) was achieved at the lowest concentration of DMAMCA, XII-10-05, at pH 1.4; meanwhile, the highest contact angle (Θ = 102°) was achieved for XII-10-20 at pH 13.4. This can be attributed to the pair of aromatic groups, which increase the hydrophobic character of the chains and restrict the formation of hydrogen bonding.45,54,72 Figure 7C illustrates the change in contact angles of all terpolymers in the different pH solutions. Statistical analysis of the contact angle measurements was performed using a one-way ANOVA test, which compares the means of two or more independent groups in order to determine whether there is statistical evidence that the associated population means are significantly different. The standard error of the mean values of the contact angles is shown in Figure 7D; the lowest value was recorded for X-10-05 (SE ± 1.09), while the highest value was recorded for X-10-05 (SE ± 2.56). The p-value was 0.459, and homogeneity of variances was indicated at the 0.06 level. The data analysis revealed that the means were not significantly different.
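For readers wishing to reproduce this kind of comparison, a one-way ANOVA of contact-angle groups can be run in a few lines; the Python sketch below uses SciPy, and the groups shown are placeholder values rather than the measured data set.

from scipy import stats

group_x = [75, 78, 81, 84, 88, 90, 93]   # e.g., one terpolymer across the pH series (hypothetical)
group_y = [76, 79, 82, 85, 89, 91, 94]
group_z = [75, 80, 84, 88, 92, 97, 102]

f_stat, p_value = stats.f_oneway(group_x, group_y, group_z)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# p >= 0.05 would indicate no evidence that the group means differ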
Tuning of the Phase Separation Temperature of Homo-PNIPAAm, PNIPAAm Copolymers, and PNIPAAm Terpolymers
A main objective of this study is to compare the cloud point Cp, also known as the phase separation temperature, of the different PNIPAAm copolymers and terpolymers. Copolymerization of homo-PNIPAAm with hydrophilic or hydrophobic groups changes the phase separation temperature by increasing or decreasing the hydrogen bonding between the polymer chain (and its groups) and the aqueous solution. Cp's were measured using a UV−vis spectrophotometer through the change of UV transmittance with temperature, also known as the turbidity test. The temperature at which the solution becomes turbid (milky) is known as the cloud point (Cp) or the separation temperature. Additionally, a micro-differential scanning calorimeter was used for LCST (Tc) measurements of both copolymers and terpolymers in deionized water.
Figure 8A illustrates the transmittance−temperature relationship for homo-PNIPAAm and the PNIPAAm copolymers. Data are recorded in Table 1. The homo-PNIPAAm (IIIa) exhibited a Cp of 33 °C. Copolymerization with HEMA for copolymers (VII-05, VII-10, and VII-20) resulted in Cp values of 34.5, 37.7, and 41 °C, respectively, whereas copolymerization with DMAAm (VIII-05, VIII-10, and VIII-20) resulted in Cp values of 37, 43, and 44.5 °C, respectively. Both copolymer groups demonstrated an increase in the phase separation temperature, or cloud point, with increasing molar concentration of HEMA or DMAAm in the polymer chain. This can be attributed to the domination of the hydrophilic chains through increased hydrogen bonding between the polymer groups, acting via the amide groups in NIPAAm (or DMAAm) and the hydroxyl of HEMA, and their interaction with the hydrogen bonds in the aqueous solution.
Meanwhile, copolymerization with styrene exhibited the opposite behavior. For copolymers (IX-05, IX-10, and IX-20), Cp's were measured as 28.6, 26.7, and 21.4 °C, respectively. This is due to the domination of the hydrophobic chains created by the aromatic ring of styrene, which restricted the formation of hydrogen bonds. This in turn increased the consistency of the turbid solution and lowered the Cp of the polymer solution.21,26 The Cp's of the thermo-pH-responsive terpolymers were recorded in a strong acidic solution (pH 1.4) as well as in neutral (pH 6.9) and strong basic (pH 12.6) solutions. The changes that appeared in the terpolymer solutions are due to the variations in the hydrophilic/hydrophobic groups and how they interact with the polar groups of the solution. The ionization of the DMAMCA in the terpolymer solution led to an increase in the solution charge, and massive hydrogen bonding was formed; subsequently, a higher temperature was required to produce a phase separation (Figure 8E).25,45,76 The effect of the concentration of DMAMCA and the pH solution on Cp is consistent with the earlier results; the highest Cp (47.6 °C) was recorded at the lowest pH (pH 1.4) for XI-10-20, while the lowest Cp (30 °C) was recorded at the highest pH (pH 12.6) for XI-10-20. The notable differences between the highest Cp values of the XI and X terpolymers in the acidic and basic solutions are attributed to the higher interaction of the amide groups of DMAAm with other groups to improve the formation of hydrogen bonds.25 The terpolymers XII (F) revealed the lowest Cp within all groups of copolymers and terpolymers at the lowest pH (pH 1.4, acidic solutions) and at the highest pH (pH 12.6, basic solutions). The domination of the hydrophobic groups in the terpolymer chains appeared more extensively on going from the neutral solution to the basic solution. Figure S3A shows the relationship between the pH solutions and the Cp for the PNIPAAm terpolymer groups (D−F). The highest Cp was recorded for the terpolymer XI-10-20 at pH 1.4, whereas the lowest Cp was recorded for the terpolymer XII-10-20 at pH 12.6. The lower critical solution temperatures of the PNIPAAm copolymers (VII, VIII, and IX) and the PNIPAAm terpolymers at pH 6.9 (deionized water) are shown in Figure S3B; the highest value of Tc was observed for VIII-20 (43.9 °C), and the lowest Tc was observed for XII-10-20 (18.5 °C).
Finally, a micro-differential scanning calorimeter was used to estimate the lower critical solution temperatures of the homo-PNIPAAm, PNIPAAm copolymers, and PNIPAAm terpolymers at pH 6.9 (DI water); the instrument was run by cooling/heating at 1 °C/min. The thermograms were used to record the Tc's at each onset value. Polymers (IIIa, VII-10, VIII-10, IX-10, X-10-10, XI-10-10, and XII-10-10) were selected for these measurements; they exhibited Tc's of 32.7, 38, 42.8, 27, 28, 33.5, and 22.6 °C, respectively, as shown in Figure 10(A−G) and Table 1. The differences between the Cp's obtained by the turbidity test and the Tc's obtained using the micro-differential scanning calorimeter can be attributed to the mechanisms and physical parameters measured in each method.45,76 The highest value of Tc was measured for the PNIPAAm copolymer VIII-10, whereas the lowest value was measured for the PNIPAAm terpolymer XII-10-10, as shown in Figure 10H.
Measuring the Cloud Point (Cp) in the Hofmeister Anions
The cloud points (Cp's) of PNIPAAm copolymers and terpolymers were measured in different kosmotropic and chaotropic anion salt solutions. Three salts were chosen (sodium sulfate Na2SO4, sodium chloride NaCl, and sodium thiocyanate NaSCN) and dissolved in deionized water at different concentrations (0.1 to 0.5 wt %). The turbidity test was performed via a UV−vis spectrophotometer. The following samples were used in the measurements: VII-10, VIII-10, IX-10, X-10-10, XI-10-10, and XII-10-10; the transmittance/temperature diagram was obtained for all polymers at each salt concentration, as shown in Figure 11A and Table 3. The first run was performed on the PNIPAAm copolymer VII-10 with 10 mol % HEMA in 0. A decrease in Cp was observed with increasing Na2SO4 salt concentration due to the breaking of the hydrogen bonding; therefore, the domination of the hydrophobic chains restricted the hydrophilic chains and the speed of formation of the turbid solution. By using a weak chaotropic salt solution, the balance between the hydrophilic and hydrophobic chains switched to the hydrophilic side, which appeared in the Cp's. The strong chaotropic salt solution NaSCN exhibited the opposite effect to Na2SO4 by facilitating the formation of hydrogen bonds, increasing the hydrophilicity, slowing down the phase separation, and subsequently increasing the Cp's of the polymer solution.45,54 For the VIII-10 copolymer, the Cp's were measured at 0.1, 0.2, 0.3, 0.4, and 0.5 wt % Na2SO4 as 32.5, 32, 30, 28, and 27 °C; in NaCl as 34, 33, 31, 29, and 28 °C; and in NaSCN as 34, 35.6, 37.3, 39.3, and 41 °C, respectively. For the IX-10 copolymer with styrene, the Cp's in Na2SO4 were 26.4, 26, 25, 24, and 23.7 °C; in NaCl 27.6, 29, 30, 31, and 32 °C; and in NaSCN 28.6, 29, 31, 34, and 34.4 °C, respectively. The increase in Cp's is due to the hydrophobic chains caused by the aromatic ring of styrene combined with the effect of the sulfate group in Na2SO4, which led to the breaking of the hydrogen bonding between the amide of NIPAAm and water molecules and accelerated the separation of the polymer from the solution. The Cp's of the PNIPAAm terpolymers in the Hofmeister salt solutions exhibited a similar behavior to the PNIPAAm copolymers but with different values, which is due to the presence of DMAMCA. The presence of DMAMCA resulted in lower Cp's in the Na2SO4 salt solution than the copolymers with only HEMA or DMAAm (VII-10 and VIII-10) due to the effect of salt anions on the tertiary amine group and the aromaticity of DMAMCA. Both factors restricted the ionization and breaking of the hydrogen bonds, which resulted in fast phase separation at the higher salt concentrations. The opposite effect was observed in the case of the weak chaotropic salt (NaCl), in which the weak interaction of the salt resulted in Cp values very close to those obtained in DI water (pH 6.9). The strong chaotropic salt solution (NaSCN) exhibited higher values of Cp's, which increased with increasing salt concentration; these are lower than the values obtained for VII-10 and VIII-10, highlighting the effect of the hydrophobic chains produced by DMAMCA. The terpolymer XII-10-10 offered lower values of Cp's than the corresponding copolymer IX-10 due to the higher hydrophobicity arising from the two aromatic rings (DMAMCA and S), as mentioned previously. Figure 11B illustrates the change of the Cp's for all polymers at different concentrations of salts. The statistical analysis of the mean values of the Cp's of the polymers in different concentrations of Na2SO4, NaCl, and NaSCN salts was performed by one-way ANOVA tests, as shown in Figure 12. The p-value was found to be 0.0076, well below 0.05, indicating no significant difference in the mean values of the Cp's.
CONCLUSIONS
This study offers a thorough comparison between homo-PNIPAAm, groups of thermoresponsive copolymers based on N-isopropylacrylamide with 5, 10, and 20 mol % HEMA, DMAAm, and S, and groups of thermo-pH terpolymers formed by incorporating 5, 10, and 20 mol % of a new cationic monomer, 2-((dimethylamino)methyl)-4-methylphenyl acrylate (DMAMCA), into NIPAAm with 10 mol % HEMA, DMAAm, and S. The molecular weight and dispersity of the polymers decreased with increasing mol % of the incorporated monomers in all groups. The glass transition temperature exhibited a similar behavior. The hydrophilicity of the copolymers, measured by the contact angle, revealed higher hydrophilic properties with lower contents of HEMA and DMAAm. An opposite behavior was observed with styrene. The cloud points of the copolymers and terpolymers were influenced by the change in the hydrophilic and hydrophobic properties of the polymer chains. While PNIPAAm exhibited a Cp of 32 °C, the highest Cp in the copolymer groups was measured at 43.9 °C and the lowest LCST was measured at 21 °C. Moreover, the terpolymers manifested the greatest Cp at 47 °C and the lowest Cp at 14.6 °C. The effect of Hofmeister anions with different salt concentrations on the Cp exhibited a variation of temperatures in the different salt solutions (Na2SO4, NaCl, and NaSCN) and concentrations (0.1 to 0.5 wt %). The highest Cp was observed at 42 °C, whereas the lowest Cp was measured at 15 °C. The ability to vary the phase transition temperatures through different means and methods will allow us to exploit more applications of these smart polymers in sensors and biotechnology.
Figure 3. 1H NMR spectra.
Figure 7. (A) Contact angle of copolymers (pH 6.9, DI water), (B) terpolymers in different pH solutions, (C) change of contact angles with pH for each terpolymer, and (D) analysis of variance (ANOVA) test SE of the mean contact angles of terpolymers.
Figure 8. (A) Turbidity measurements via transmittance/temperature for the determination of Cp in PNIPAAm and PNIPAAm copolymers; (B−J) Cp for terpolymers in different pH solutions.
Figure 9. (A) ANOVA test (standard error SE and mean of LCST) for terpolymers with a change in pH solutions. (B) Homo-PNIPAAm, copolymers, and terpolymers at pH 6.9, recorded via a UV−vis spectrophotometer and a micro-differential scanning calorimeter.
Figure 10. (A−G) Thermograms of the lower critical solution temperature (Tc) for selected polymers in DI water (pH 6.9) using the micro-differential scanning calorimeter and (H) combined measurements of all recorded Tc's.
Figure 12. ANOVA test (standard error ± SE and mean Cp's) for terpolymers with different salt concentrations.
Table 1. Yield Percentage, the Actual Composition from the 1H NMR Spectra in Moles, Average Molecular Weights (the Number-Average and the Weight-Average Molecular Weight), Dispersity, Glass Transition Temperature, the Phase Transition Temperatures, and the Cloud Points in Different pH Solutions for the Groups of Copolymers A, B, and C (VII, VIII, and IX) and the Groups of Terpolymers D, E, and F (X, XI, and XII). a Number-average molecular weight. b Weight-average molecular weight. c Dispersity. d Glass transition temperature. e Cloud point (50% transmittance). f Lower critical solution temperature.
Table 3. Cloud Point Cp (50% Transmittance) for Selected Copolymers and Terpolymers with 10 mol % (DMAMCA) Influenced by Different Kinds and Concentrations of Salts | 2023-11-17T16:19:15.904Z | 2023-11-15T00:00:00.000 | {
"year": 2023,
"sha1": "e4782f3c926af14d6815adf1b6e7ee87669f7964",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1021/acsomega.3c06634",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5ad83d7f5e412a3e78c1a4ab65d9471ecc92cb5e",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
264719037 | pes2o/s2orc | v3-fos-license | Comparison of stratified primary care management for low back pain with current best practice (STarT Back): a randomised controlled trial
Summary Background Back pain remains a challenge for primary care internationally. One model that has not been tested is stratification of the management according to the patient's prognosis (low, medium, or high risk). We compared the clinical effectiveness and cost-effectiveness of stratified primary care (intervention) with non-stratified current best practice (control). Methods 1573 adults (aged ≥18 years) with back pain (with or without radiculopathy) consultations at ten general practices in England responded to invitations to attend an assessment clinic. Eligible participants were randomly assigned by use of computer-generated stratified blocks with a 2:1 ratio to intervention or control group. Primary outcome was the effect of treatment on the Roland Morris Disability Questionnaire (RMDQ) score at 12 months. In the economic evaluation, we focused on estimating incremental quality-adjusted life years (QALYs) and health-care costs related to back pain. Analysis was by intention to treat. This study is registered, number ISRCTN37113406. Findings 851 patients were assigned to the intervention (n=568) and control groups (n=283). Overall, adjusted mean changes in RMDQ scores were significantly higher in the intervention group than in the control group at 4 months (4·7 [SD 5·9] vs 3·0 [5·9], between-group difference 1·81 [95% CI 1·06–2·57]) and at 12 months (4·3 [6·4] vs 3·3 [6·2], 1·06 [0·25–1·86]), equating to effect sizes of 0·32 (0·19–0·45) and 0·19 (0·04–0·33), respectively. At 12 months, stratified care was associated with a mean increase in generic health benefit (0·039 additional QALYs) and cost savings (£240·01 vs £274·40) compared with the control group. Interpretation The results show that a stratified approach, by use of prognostic screening with matched pathways, will have important implications for the future management of back pain in primary care. Funding Arthritis Research UK.
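As a rough back-of-envelope check of the headline figures (a sketch using only the summary numbers above; the trial's own analyses used regression adjustment, so these simplified Python calculations only approximate the reported values), the standardised effect size and the incremental net monetary benefit can be computed as follows.

# Standardised effect size ~ adjusted between-group RMDQ difference / SD of change scores
diff_4m = 1.81              # adjusted between-group difference at 4 months
sd_change = 5.9             # SD of RMDQ change scores
print(f"approximate 4-month effect size: {diff_4m / sd_change:.2f}")   # ~0.31 (0.32 reported)

# Incremental net monetary benefit at a willingness-to-pay threshold (lambda)
delta_qaly = 0.039                  # additional QALYs with stratified care
delta_cost = 240.01 - 274.40        # stratified minus control cost (negative = saving)
for wtp in (0, 20000, 30000):       # illustrative pounds-per-QALY thresholds
    nmb = wtp * delta_qaly - delta_cost
    print(f"lambda = {wtp} GBP/QALY: incremental net monetary benefit = {nmb:.2f} GBP")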
Patients who consulted with back pain at one of the 10 participating GP practices within the Keele GP Research Partnership were automatically invited, using practice computer systems, to attend one of three half-day initial assessment clinics held weekly. Clinics were held in NHS local community physiotherapy premises staffed by physiotherapists, a study nurse and an administrator. On arrival at the clinic, patients first met with a study nurse to check eligibility, obtain informed consent and complete the baseline questionnaire including the Keele STarT Back Screening Tool. A study administrator then telephoned the clinical trials unit telephone randomisation service and provided the patient's answers to each of the nine STarT Back Tool items, so that randomisation could be stratified by risk group. Once the patient was allocated to a treatment arm, the administrator passed them on to either the control or intervention physiotherapist as appropriate to undertake a 30-minute individual consultation including assessment, examination and initial treatment.
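The allocation step described above can be pictured with a small sketch of stratified block randomisation at a 2:1 ratio; this Python snippet is purely illustrative, and the block size of six and all names in it are assumptions rather than the trial's actual randomisation service.

import random

def make_block(block_size=6, ratio=(2, 1), rng=random):
    """Build one randomly ordered block with a 2:1 intervention:control ratio."""
    per_unit = sum(ratio)
    assert block_size % per_unit == 0
    block = (["intervention"] * ratio[0] + ["control"] * ratio[1]) * (block_size // per_unit)
    rng.shuffle(block)
    return block

# one allocation queue per STarT Back risk group (stratification)
queues = {risk: [] for risk in ("low", "medium", "high")}

def allocate(risk_group, rng=random):
    queue = queues[risk_group]
    if not queue:                 # start a fresh block for this stratum when empty
        queue.extend(make_block(rng=rng))
    return queue.pop(0)

random.seed(1)
print([allocate("medium") for _ in range(6)])   # each block of six yields four intervention, two control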
Control clinic consultation
There were nine physiotherapists who undertook the control arm's clinic assessment and treatment. Six of these nine therapists were provided by two local NHS physiotherapy service managers who were responsible for providing the NHS physiotherapy services for the 10 GP practices involved in the study. Three of the nine clinic control arm therapists were hired on temporary contracts by the study team because recruitment was quicker than expected and more staff were therefore required to staff the clinics. Six of the nine physiotherapists (66%) had more than 5 years' experience specialising in treating back pain problems. Therapists were provided with a half-day of training by the study team to standardise case report form data for treatment quality control purposes. Decisions about onward referral to further NHS physical therapy treatment were made using clinical judgement, based on clinical need, without knowledge of an individual's STarT Back Tool classification.
Ongoing control physiotherapy
These sessions were individualised treatments lasting 30-minutes, and were held in NHS local community outpatient premises delivered by 37 physiotherapists who already provided NHS care to back pain patients for the 10 general practices involved.
-No guidance was provided on the number of sessions or length of treatment course. However, local physiotherapy managers informed the trial team that up to 6 treatments over a 3-month period matched their local practice.
-The first session re-assessed/examined the patient and included a detailed differential diagnosis (particularly for patients with referred leg pain/radiculopathy).
-The main treatment modalities used were Maitland and McKenzie approaches including advice, reassurance, education, exercise (some in gym classes), manual therapy and acupuncture.
Stratified intervention clinic consultation
There were six physiotherapists who undertook the stratified arm's clinic assessment and treatment. These therapists were provided by two sources: the two local NHS physiotherapy service managers who were responsible for providing the NHS physiotherapy services for the 10 GP practices involved in the study, and clinical members of the research team. Four of the six physiotherapists (66%) had more than 5 years' experience specialising in treating back pain problems, the same as the control therapists. Therapists were provided with a day of training by the study team to standardise the treatment provided and to explain how to treat patients within the stratified model of care including: -The use of the Keele STarT Back Screening Tool score and risk-group pathways. Prior to meeting the patient, physiotherapists were provided with administrative clinical information about the patient including their STarT Back Screening Tool score.
-A structure of the standardised 30-minute assessment and examination which included a screen for potential serious pathology (red flags) and neurological examination (lower limb changes to reflexes, sensation and muscle power).
Patients were asked about their symptom history, concerns and treatment expectations. A brief examination was also made of back pain movements (including optional testing for a directional preference) and to identify any hip pathology.
-Patients received reassurance to address concerns related to their back pain and any resulting loss of function. Reassurance topics were guided by the results of the patient's STarT Back Tool score so that specific concerns could be identified and addressed on an individual basis. Messages of advice focussed on: appropriate levels of activity including return to work (if appropriate) and avoiding bed rest. This was supplemented with information of local exercise venues and self-help groups together with a 15-minute educational video entitled 'Get Back Active' 20 to reinforce messages (which was to be organised by the clinic administrator on request from the therapist).
-addressing patient fears supported by the 'Back Book'. 21
-addressing an individual's uncertainty about issues such as use of pain relief (medication), the role of further investigations, work issues, and the patient's likely future prognosis including methods to deal with future episodes of back pain.
Low risk-group
Patients allocated to the 'low risk-group' received the one-off clinic appointment described above, were reassured that further treatment was unlikely to be beneficial or necessary and were encouraged not to seek further treatment. They were, however, advised that if their symptoms deteriorated they should re-visit their GP. They were therefore discharged from further physiotherapy care at the end of the clinic consultation. Physiotherapists were responsible for providing good clinical governance to their patients and were allowed to over-rule the stratified tool if they believed the pathway being recommended for a patient was inappropriate.
Medium risk-group
In addition to the first clinic session described above, all medium-risk patients were recommended for referral to ongoing physiotherapy treatment with one of five physiotherapists who attended three days training. The training was designed to standardise the pathway for medium-risk patients as follows: -Individualised 30-minute physiotherapy sessions focussed on restoring function and targeting physical characteristics (disabling back pain, referred leg pain and co-morbid pain).
-Treatments were held in NHS local community physiotherapy outpatient premises staffed with guidance that patients should receive up to 6 sessions over a 3-month period.
-The first session re-assessed/examined the patient and included making a differential diagnosis particularly for patients with referred leg pain/radiculopathy.
-The main focus of treatment was to reduce back-related disability. A tailored management plan was negotiated using evidence-based treatments, including advice and explanation, reassurance, education, exercise, manual therapy and acupuncture.
-Consistent with evidence based guidelines, 10 bed rest, traction, massage and electrotherapy were not included in the treatment protocol.
-Moderate levels of psychological prognostic indicators were addressed, but specific training on techniques to target psychological factors was not provided for physiotherapists treating the medium risk-group of patients.
-Therapists were advised to refer non-responders on for further investigations or secondary care interventions, with supervision provided if required from a spinal specialist physiotherapist.
High risk-group
In addition to the first clinic session described above, all high-risk patients were recommended for referral to ongoing physiotherapy treatment with one of four physiotherapists who attended a total of nine days training. The training was designed to standardise the pathway for high-risk patients as follows: -Individualised 45-minute physiotherapy sessions focussed on restoring function using combined physical and psychological approaches and targeting physical and psychological obstacles to recovery.
-Treatments were held in NHS community outpatient premises with guidance that patients should receive up to 6 sessions over a 3-month period.
-The first session re-assessed/examined the patient and included a differential diagnosis particularly for patients with referred leg pain/radiculopathy, and biopsychosocial assessment to explore patient concerns, adopting cognitive behavioural principles to address unhelpful beliefs and behaviours.
-Therapists were trained to use 'stem & leaf' questions to identify unhelpful beliefs and behaviours.
-Physical treatment modalities (exercise and manual therapy) were integrated with psychologically informed techniques to provide a credible explanation for symptoms, reassurance, education, collaborative goal setting, problem solving, pacing, graded activity, and relaxation.
-There was a specific focus on the prognostic psychological indicators identified by the STarT Back Tool such as low mood, anxiety, pain-related fear and catastrophising.
-Reasons for psychological distress were addressed using enhanced communication skills with a focus on promoting appropriate levels of activity, return to normal activities and the management of future back pain recurrences.
-Patient expectations about prognosis and implications for function were addressed and the role of active selfmanagement emphasised. Advice about sleep and work was provided and if necessary a return to work plan implemented.
-Patients were encouraged to put management plans into practice between treatment sessions and help was given to problem solve any difficulties that arose.
-Monthly group mentoring sessions were held for physiotherapists to discuss individual cases and consolidate the training throughout the trial, with supervision provided from a Consultant Physiotherapist (pain management expertise) and a Professor of Clinical Psychology. The true value of the '0·00' figure is positive, rounded to two decimal places. d Incremental QALY estimates following multiple regression-based adjustment for age, gender, duration of pain at baseline, and baseline scores on the RMDQ and EQ-5D. Probability that the stratified intervention is cost effective Willingness to pay for additional QALYs (£) | 2016-10-19T13:53:51.552Z | 2011-10-29T00:00:00.000 | {
"year": 2011,
"sha1": "abf897c8a7e31d9e488d87546e017c94fa5e78e4",
"oa_license": null,
"oa_url": "http://www.thelancet.com/article/S0140673611609379/pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "abf897c8a7e31d9e488d87546e017c94fa5e78e4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270362033 | pes2o/s2orc | v3-fos-license | Korean Denuclearization and the US-China Strategic Rivalry
ABSTRACT This paper discusses the ways in which the growing strategic rivalry between the United States and China will complicate regional efforts to denuclearize the Korean peninsula. In particular, it reviews how relative gains, strategic mistrust, domestic politics, and coercive bargaining will all make cooperation on Korean denuclearization increasingly challenging for the United States and China. Our paper concludes by reflecting on how the US, the Republic of Korea, and Japan can navigate this increasingly complex security environment.
Introduction
Strategic rivalry between the United States and China creates formidable challenges for regional efforts to denuclearize the Democratic People's Republic of Korea (DPRK).Prior to the onset of this rivalry, the United States and China struggled to coordinate their policies toward the DPRK due to differing priorities and policy preferences.However, as the United States and China increasingly view their relationship in competitive terms, additional obstacles are emerging.As this paper will discuss, concerns about relative gains, strategic mistrust, political constraints, and coercive bargaining are all likely to impede US-China cooperation toward the DPRK.This, in turn, will complicate ongoing regional efforts to bring an end to the DPRK's nuclear program.
Existing Obstacles
The United States, Japan, the Republic of Korea (ROK), and China have long seen a denuclearized peninsula as beneficial for their respective interests.For the United States and its allies, an end to the DPRK's nuclear program would remove a major security threat, allow them to adjust their military footprints in the Indo-Pacific, render their alliances more credible, and bolster the global non-proliferation regime.For China, a denuclearized DPRK reduces the chances that more of its neighbors, including the ROK and Japan, might themselves pursue nuclear armaments.
Despite these common interests, however, regional cooperation to rein in the DPRK's nuclear program has been challenging.The United States and China in particular have clashed over the proper approach to the DPRK's growing arsenal.For the United States and its allies, denuclearization and deterrence against DPRK aggression are arguably their primary objectives.For China, on the other hand, maintaining the DPRK as a stable buffer state is its top goal; denuclearization is an important but secondary objective.
As a result of their different priorities, the United States and China have traditionally endorsed different policies to encourage denuclearization.The United States has focused on compelling the DPRK to denuclearize while keeping its Japanese and ROK allies secure; to do this, it has sought to isolate Pyongyang diplomatically and economically through sanctions while also upgrading its regional military presence and alliances.The United States frequently calls for greater cooperation from China with respect to these policies, emphasizing that China must curtail its ties with the DPRK and acquiesce to strengthened US alliances.
China, meanwhile, views these steps as destabilizing and usually prefers offering economic assistance and security guarantees to the DPRK regime to reduce tensions, improve stability, and induce the DPRK to denuclearize.China often calls for the United States, Japan, and the ROK to adopt this approach, demanding that the allies lift their sanctions and scale back their military capabilities.These fundamentally different visions of pathways to denuclearization continue to hinder regional cooperation toward the DPRK to this day.We argue, however, that the growing strategic rivalry between the United States and China will create new hurdles to regional cooperation on Korean denuclearization.
Strategic Rivalry
Strategic rivalry is best defined as a state of comprehensive competition between two great powers that believe their competitor is challenging their fundamental interests (McDonald 2020). Several key issues and trends have driven the United States and China toward a state of strategic rivalry. The United States and its allies, including the ROK, Japan, the Philippines, and Australia, have grown increasingly concerned about China's rapid military modernization, excessive maritime claims, and military and economic coercion. At the same time, US leaders have grown more critical of China's nationalist economic policies, human rights practices, and undemocratic political system. Ultimately, both the Trump and Biden administrations have argued that China poses the leading threat to the US interest in a free, open, and rules-based international system (White House 2017, 2022). The United States has therefore embraced competition with China as its foremost foreign policy priority. China, meanwhile, regards the US efforts to enhance its regional partnerships, assert its navigational rights, and rectify economic imbalances as a major threat (China State Council Information Office 2017). Chinese elites have generally concluded that the United States and its allies are embracing a thinly-veiled strategy of containment that threatens China's fundamental interests and its continued rise to a position of power, prestige, and prosperity in the international system. As a result of these contentious issues and the broader ongoing shift in relative power between the two states, American and Chinese leaders have come to view their relationship as a rivalry. Polling data indicates that this shift in elite perceptions is echoed by the general public, with growing majorities in both states expressing antagonism toward each other (Silver 2022).
Relative Gains
This strategic rivalry creates several obstacles to regional cooperation on the DPRK nuclear program.Critically, strategic rivalry will encourage both states to prioritize relative rather than absolute gains.A state experiences absolute gains when cooperation improves that state's security, prosperity, and influence such that the state is better off than it was before the onset of cooperation.A state experiences relative gain, on the other hand, when cooperation improves that state's security, prosperity, and influence such that the state is better off in comparison to its peers prior to the onset of cooperation.Under anarchy, preoccupation with relative gains can impede mutually beneficial cooperation.Given that states must depend on self-help for their defense they will be wary of any cooperation that might provide a partner with a relative gain.After all, that partner could potentially use its new advantage to threaten or coerce the state, undermining its core interests.This concern may lead states to forgo cooperation that might otherwise bring about mutually beneficial absolute gains (Grieco, Powell, and Snidal 1993).
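To make the distinction concrete, a stylized numerical illustration can be written as a short script; the payoffs below are hypothetical assumptions for exposition, not figures drawn from the paper.

```python
# Purely illustrative payoffs (arbitrary units); these numbers are assumptions, not data.
baseline   = {"US": 10, "China": 10}   # status quo
after_deal = {"US": 12, "China": 15}   # a cooperative deal that benefits both sides

absolute_gain = {state: after_deal[state] - baseline[state] for state in baseline}
relative_shift = (after_deal["China"] - after_deal["US"]) - (baseline["China"] - baseline["US"])

print(absolute_gain)   # {'US': 2, 'China': 5} -> both sides better off in absolute terms
print(relative_shift)  # 3 -> China also gains relative to the US, which a rival may refuse to accept
```

Under a relative-gains logic, the deal in this toy example could be rejected by the United States even though both sides end up better off than under the status quo.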
Strategic rivalry should make states' concerns over relative gains especially acute.If a state's strategic rival achieves a relative gain through cooperation, it constitutes a clear and imminent threat that is almost certain to be leveraged to undermine the state's fundamental interests.As such, states are likely to see mutually beneficial solutions that provide their leading competitor with comparatively greater benefits as undesirable.
Concerns over relative gains will likely hinder cooperation between the United States and China on the Korean peninsula.Both states have a clear interest in terminating the DPRK's nuclear program, as mentioned above; any deal that increases the chances of denuclearizing the DPRK offers the prospects of absolute gains for both great powers.Nevertheless, a denuclearized DPRK would give the United States and its allies a number of new advantages over China.
First, the end of the DPRK's nuclear and missile program would eliminate a troublesome vulnerability in the United States' alliance relationships with Japan and the ROK. As the DPRK has improved its ability to strike the continental United States with a nuclear intercontinental ballistic missile, the ROK and Japan have both harbored doubts about the United States' reliability as a military partner. In particular, some worry that the DPRK's nuclear arsenal might decouple the United States from its partners, deterring the US from supporting either treaty ally in the event of DPRK aggression or provocation (Rapp-Hooper 2017). These fears of decoupling and abandonment put serious strains on the US alliances with Japan and the ROK. Moreover, as these fears grow more intense, the allies are likely to attach less value to their military partnerships with the United States. As a result, the US ability to call on its allies to support efforts to counterbalance or contain China's power in the Indo-Pacific could be diminished.
The elimination, or even reduction, of the DPRK's nuclear capabilities would go a long way to reassuring these partners of the US commitment.If these partners are confident in the reliability and value of their military alliances with the United States, they will likely be more receptive to US calls for assistance and support in its strategic competition with China.As such, by bolstering its alliances, denuclearization would offer the United States a critical advantage over China.
Second, with the DPRK nuclear program gone, the United States and its allies would be better able to redirect diplomatic, economic, and military resources toward countering China's revisionism elsewhere in the Indo-Pacific.The United States and its partners are increasingly concerned about China's extraordinary maritime claims and expanding military presence in the South China Sea.Furthermore, the United States is wary of China's mounting pressure toward Japan in the East China Sea over the Japaneseadministered Senkaku Islands.Finally, the United States has expressed alarm over increasing Chinese military coercion against the island of Taiwan.
Currently the United States has roughly 28,500 troops based in ROK territory to help manage the threat posed by the DPRK. These forces conduct regular military exercises alongside the ROK military focused explicitly on defeating an attack by the DPRK. The United States spent an average of $3.34 billion over the course of 2016-2019, alongside important ROK contributions, to maintain and strengthen this military presence (United States Government Accountability Office, 2021). It is, admittedly, doubtful the United States would completely eliminate this military presence in the event of DPRK denuclearization given the DPRK's significant conventional armed forces. It would, however, be able to reallocate a larger portion of its resources toward deterring China in the South China Sea, East China Sea, and Taiwan Strait.
The ROK and Japan would similarly have more of their resources available to deter China should the DPRK denuclearize.The ROK would need to continue to focus on deterring the DPRK's considerable conventional forces.Nevertheless, it could redirect funds from capabilities designed expressly to counter DPRK nuclear weapons as part of the Kill Chain, Korean Air and Missile Defense, and Korean Massive Punishment and Retaliation concepts toward other programs designed to counterbalance China's growing military presence in the Indo-Pacific.Japan, which unlike the ROK is not directly threatened by the DPRK's conventional military forces, would be even better positioned to shift its resources and attention southwards toward threats in the South and East China Sea.
Third, progress toward denuclearization might allow the United States, Japan, and the ROK to cultivate stronger relations with the DPRK, pulling it away from China's orbit.While other disputes plague relations between the allies and the DPRK, particularly with respect to human rights and Japanese abductees, a negotiated settlement to end the nuclear dispute would go a long way toward repairing relations.With sanctions lifted, trade and investment from the United States, the ROK, and Japan would begin to flow across the DPRK's borders, weaning the DPRK off of its traditional dependence on China.China's influence in the DPRK would likely decline as United States, ROK, and Japanese influence expanded.
This last possibility is increasingly worrisome for many Chinese policy experts. During the Trump administration's negotiations with the DPRK, Shi Yinhong, a professor at Renmin University and one of the CCP's top foreign policy scholars, expressed concern that China was losing out compared to its rival as a result of the talks. As the talks progressed, Shi fretted that "China has been completely bypassed" as North Korea and the United States temporarily improved relations at China's expense (Le 2018).
Ultimately, Shi concluded that "of all the parties, China gets the least" out of the ongoing talks, having alienated the DPRK by participating in United States-led sanctions to get the DPRK to the negotiating table (Le 2018). Around the same time, Zhang Liangui, an expert on the DPRK at the Central Party School in Beijing, emphasized that there was a risk that progress toward denuclearization would drive the DPRK and the United States closer together to China's disadvantage (Wong and Zhou 2018). Similarly, an anonymous senior Chinese official was reported as saying that during the Trump-Kim talks "We were concerned that North Korea was drifting away, our influence in Pyongyang was declining, and our interests were not being protected" (Chhabra et al. 2021).
Strategic Mistrust
Mutual strategic mistrust stemming from the intensifying US-China rivalry will likely pose a major hurdle to cooperation toward Korean denuclearization.As highlighted above, states in a strategic rivalry believe their competitor seeks to undermine their fundamental interests.As such, states will tend to view their rival's offers of mutually beneficial compromise as disingenuous and fueled by ulterior motives.They also will be more concerned that rivals will defect from any cooperative agreement in an effort to secure an advantage in their unfolding strategic competition.This generates an atmosphere of intense distrust rendering strategic cooperation in the form of alliances, condominiums, or concerts of power untenable except under extraordinary circumstances.This distrust will also "trickle down" to impede cooperation on the full range of narrower issues facing the rivals in their bilateral relationship, from nonproliferation to narcotrafficking.
This atmosphere of distrust is likely to render US-China cooperation far more difficult on the Korean peninsula. China already seems to view US proposals for cooperation on DPRK denuclearization as disingenuous. When the Biden administration called for Chinese support for new United Nations Security Council resolutions condemning and sanctioning the DPRK's missile tests in 2022, China's Permanent Representative to the UN Zhang Jun expressed skepticism: "How shall we go about solving the issue of the Korean peninsula? The answer does not hinge on whether or not the Council adopts a new draft resolution. The crux of the matter is whether or not anyone wants to use this issue as a card in their so-called Indo-Pacific strategy or treat it as a chess piece on the chessboard of that strategy" (United Nations Security Council, 2022a). Here, Zhang suggests that the United States is not actually seeking to bring about denuclearization on the Korean Peninsula but is instead using it as an opportunity to seek a regional military advantage over China. At other times, Zhang has argued that the US proposals are designed with "cynical intentions" to create discord between China and other members of the Security Council (United Nations Security Council, 2022a). Zhang would later imply that the United States was seeking conflict rather than pursuing genuine cooperation, arguing that "some are covertly devising other plans, with the end result being the spread of the flames of war to North-East Asia and the Korean peninsula" (United Nations Security Council, 2022a). This perspective reflects a deep skepticism of the United States' professed cooperative intentions on the Korean peninsula.
The United States is similarly likely to see Chinese offers of cooperation on the DPRK as driven by ulterior motives.Given the challenges that China poses to the longstanding rules-based international order elsewhere, particularly in the maritime domain, the Biden administration may rightly be skeptical of whether China is genuinely committed to the global non-proliferation regime.This distrust has only been compounded by China's growing track record of enabling DPRK sanctions evasion (Nichols 2023).As Blinken emphasized in the 2021 US-ROK Foreign and Defense Ministerial (2 + 2), "We are cleareyed about Beijing's consistent failure to uphold its commitments" (US Department of Defense, 2021).
Strategic distrust has already begun to color the US views on China's proposals for cooperation on the Korean peninsula.At the United Nations Security Council, China has called for sanctions relief on the DPRK as a pathway to restarting denuclearization talks.In response, US Permanent Representative Thomas-Greenfield argued that this proposal had ulterior motives, claiming that "this is a clear effort by China and Russia to reward the Democratic People's Republic of Korea for its bad actions" (United Nations Security Council, 2022b).Later, Thomas-Greenfield made this point still more forcefully, arguing that China offered "blanket protection" to the DPRK and has "enabled" it because it "think[s] they make a good regional buffer to the United States" (United Nations Security Council, 2022c).In doing so, she indicated that the United States increasingly views China's policy on the Korean peninsula as aimed at competing with the United States rather than working toward denuclearization.
Political Constraints
Strategic rivalry will also create domestic political obstacles to cooperation on DPRK denuclearization for both states.To compete with their rivals, states will need to pour more resources into diplomatic, economic, and military policies designed to counter those rivals.As such, states in a strategic rivalry face incentives to mobilize their public behind these costly policies (Christensen 1996).In order to do this, states will emphasize the significance of the threat posed by their strategic rivalry and the necessity of more assertive diplomacy and more powerful military capabilities.Leaders may seek to do this through various means, including public speeches, remarks, and testimony, as well as public relations and propaganda campaigns.If these efforts are successful, the state's broader public will become increasingly hostile toward its rival.
This public hostility can be a double-edged sword. As animosity toward a rival grows, the public may come to view any compromise with the rival as undesirable, even if that cooperation advances a shared interest. As such, domestic resentment toward a rival may inhibit a government's ability to cooperate with its rival selectively for mutual benefit. This dynamic is likely to have a substantial effect on the US ability to cooperate with China on Korean denuclearization. US presidents' electoral prospects, and those of their political party, hinge on public support. As such, they are often sensitive to public opinion on policy matters. This may make the president hesitant to work too closely with China on the DPRK for fear of a potential public backlash. Additionally, the US Congress, which is even more accountable to the electorate, will at times act to constrain or compel the executive on matters of foreign affairs through the legislative process. Even if the president were to attempt to coordinate with China on Korean denuclearization, Congress might actively undercut this effort for political purposes.
Xi Jinping and the Chinese Communist Party are more insulated from public sentiment due to the Chinese state's authoritarian character; nevertheless, scholars have pointed out that China's foreign policy may still be shaped by public opinion.China's leaders cannot claim the political legitimacy that comes from having been chosen in a free, fair, and transparent electoral process.Instead, they must look for political legitimacy elsewhere.China's authoritarian leaders often do this by emphasizing their performance, particularly the improvements they have made to China's economic prosperity.But Chinese leaders also attempt to bolster their political legitimacy by appearing sensitive and responsive to the public will (Fang, Li, and Liu 2022).As such, Xi may be reluctant to pursue any policies that directly contradict public sentiment and could undermine his legitimacy.If the Chinese public grows too hostile toward the United States, it may constrain Xi's ability to work with the United States even to pursue the shared goal of DPRK denuclearization.
These dynamics are already at work in the US-China relationship. The American public increasingly views China negatively. A series of Gallup polls found that an average of 51% of American respondents viewed China unfavorably between 2010 and 2018. From 2019 to 2023, this figure rose by an average of 6.6 percentage points each year. The most recent Gallup poll, collected in February of 2023, indicates that 84% of Americans now view China unfavorably. Similarly, while only 11% of American survey respondents viewed China as the United States' greatest enemy in 2018, as of 2023, 50% of survey respondents now hold this view (Gallup, n.d.). This view has led to skepticism of any cooperation with China. A 2023 Pew survey indicates that only 33% of Americans think the United States and China can cooperate to resolve international conflicts. Similarly, only 32% think climate change cooperation is possible (Silver et al. 2023).
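As a rough consistency check on the figures cited above, and assuming the 2018 unfavorability level was close to the 2010-2018 average of 51% (an assumption not stated in the text), the reported annual rise can be recovered directly:

```python
# Back-of-envelope check of the Gallup trend cited above; the 2018 starting point is an assumption.
start_2018 = 51.0          # % of Americans viewing China unfavorably, approximate 2018 level
end_2023 = 84.0            # % unfavorable in the February 2023 poll
years = 2023 - 2018        # five annual readings, 2019 through 2023

print((end_2023 - start_2018) / years)   # 6.6 -> percentage points per year, matching the text
```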
This public hostility has raised the political costs of any US cooperation with its rival.Perhaps most noticeably, the Biden administration's efforts in mid-2023 to promote a "thaw" in relations with China prompted significant domestic pushback.Republican legislators criticized Blinken's June 2023 visit to Beijing to meet with Xi Jinping as "weak" and "unacceptable", arguing that it undermined national security (Demirjian 2023;Shaw 2023).In response to the trip, the Chairman of the House Foreign Affairs Committee, Michael McCaul, subpoenaed the State Department for documentation detailing US competitive policies toward China (AP News 2023).Legislators similarly pushed back against efforts to revive bilateral climate cooperation by the US President's Special Envoy for Climate, John Kerry.The Chairman of the House Subcommittee on China, Mike Gallagher, emphasized that "John Kerry and many others in the Biden administration fail to see what is obvious, which is that the CCP itself, not climate change is the greatest threat we face" (Silverio 2023).House legislators have also emphasized that they will block the Biden administration if it attempts to remove human rights sanctions on China in exchange for counter-narcotics cooperation (Elkind 2023).
Importantly, negative views of China have risen among US allies as well, creating additional political constraints to regional cooperation.Pew surveys indicate that the percentage of Japanese citizens with unfavorable views of China rose from 42% in 2002 to 86% in 2020.Similarly, the percentage of ROK citizens who view China unfavorably rose from 31% in 2002 to 75% in 2020 (Silver, Devlin, and Huang 2020).
China has also seen a decline in its public's views of the United States.In 2012, 48% of Chinese surveyed by the BBC World Service viewed US influence in the world as mostly negative (BBC World Service 2012).By 2017, that number had risen to 61% (BBC World Service 2017).Other surveys indicate that Chinese views of the United States have only continued to decline; for instance, a recent survey found that 75% of Chinese citizens viewed the United States unfavorably in 2021 (Liu, Li, and Fang 2021).With the Chinese nationalist online community of "netizens" frequently calling for more confrontational policies toward the United States, the Chinese government may well face greater pressure to refrain from cooperating with its rival (Baptista 2022).
Coercive Bargaining
Finally, strategic rivalry gives states greater incentives to withhold cooperation on key issues as part of coercive bargaining with their rivals.When a state faces a serious dispute with a partner engaged in mutually beneficial cooperation, the state may elect to suspend that cooperation until the partner grants concessions in the dispute.States are especially likely to use this approach when they believe that the stakes of the dispute are more significant than the benefits of cooperation and when they believe that withholding cooperation will be more harmful for their partner than for themselves.While even friendly states may make use of this tactic, strategic rivals are particularly likely to employ it.This is in part because disputes between strategic rivals will be more severe and frequent than those between other states.Furthermore, strategic rivals may be less concerned about the reputational costs of this tactic.Violating past commitments by withholding cooperation on an issue undermines a state's reputation as a reliable partner in the eyes of its counterpart.Given that states in strategic rivalries are focused primarily on comprehensive competition with their rival, however, it is doubtful that they will see too much value in maintaining their reputation for reliable cooperation with that rival.
Even if the United States and China were to strike a grand bargain to work together on Korean denuclearization, the bargain's survival would be constantly threatened by clashes between the two rivals elsewhere in the world.If, for instance, China sought to ramp up pressure on the United States over a dispute in the South China Sea, China could open the floodgates for DPRK sanctions evasion and return to shielding the DPRK from UNSC action.If the United States aimed to pressure China to scale back its harassment of American naval vessels in the South China Sea, it could reinstate suspended sanctions or deploy additional forces to the Korean peninsula.Of the two states, however, the United States cares most about the denuclearization issue while China is primarily concerned with maintaining the DPRK as a buffer state.As such, China is more likely than the United States to use cooperation on Korean denuclearization as a coercive tool.
While neither the US nor China has taken this approach to Korean denuclearization so far, China has repeatedly shown it is willing to withhold cooperation for coercive purposes.Most notably, in August 2022, China suspended a slew of cooperative initiatives with the United States in an attempt to punish the United States for the former Speaker of the House Nancy Pelosi's visit to Taiwan.This included cooperation on military affairs (The Theater Commanders and Defense Policy Coordination talks), homeland security (repatriation of illegal immigration, legal assistance on criminal matters, cooperation on transnational crime, and counternarcotics cooperation), and talks on climate change (Ministry of Foreign Affairs of the People's Republic of China, 2022).While China has resumed talks with the United States on climate change, the other suspensions remain in place (Kine and Lau 2023).Chinese Foreign Minister Wang Yi summarized China's rationale for this approach to cooperation with the United States succinctly: "For cooperation to be win-win, there needs to be necessary conditions and atmosphere.It won't do if the US undermines China's core interests . . . on the one hand, and on the other, expects China to cooperate unconditionally" (Wang et al. 2022).In other words, China will only engage in cooperation with the United States if United States is willing to comply with Chinese demands on issues that China views as its "core interests", including Taiwan, the South China Sea, Xinjiang, and Hong Kong.
China has engaged in this approach in the past as well.In 2018, China temporarily suspended the US-China Diplomatic and Security Dialogue in an attempt to ramp up pressure on the United States over disputes over trade and the East China Sea (Tiezzi 2018).It also has a long track-record of suspending military-to-military contacts and confidence-building measures in order to coerce the United States into ending arms sales to Taiwan and reconnaissance collection in the West Pacific (Kan 2014).
The United States has made use of this strategy to a more limited extent outside the Korean peninsula.The Trump administration suspended its extradition agreement with Hong Kong and its Fulbright program in mainland China and Hong Kong in 2020 to punish China for its rising repression in Hong Kong (Zheng 2020).The United States also disinvited China from the Rim of the Pacific multilateral naval exercise in 2018 in retaliation for China's growing militarization of the South China Sea (Eckstein 2018).
Conclusion
None of these obstacles makes cooperation impossible; even the fiercest of strategic rivals have occasionally been able to surmount them in the past. Nevertheless, they render cooperation far more challenging and therefore less likely. In the absence of US-China coordination, the DPRK will likely continue to pursue nuclear proliferation. The current regime sees nuclear forces as vital to the survival and strength of the country. It is unlikely to pursue disarmament unless it is convinced that a) it cannot survive if it maintains its nuclear weapons and b) it can survive if it gives up its nuclear weapons. Without China's cooperation, the economic pressure the United States, Japan, and ROK can bring to bear against the Kim regime is likely to be inadequate to convince the DPRK of the former. Without security guarantees from the United States and its allies, similarly, any Chinese efforts to discourage DPRK nuclear proliferation will be unlikely to convince the DPRK of the latter. In particular, the DPRK would likely require reassurances from the allies in the form of concrete changes in their conventional military postures before it felt secure enough to denuclearize (Bowers and Stålhane Hiim 2021). Ultimately, a lack of US-China cooperation incentivizes the DPRK to continue its weapons program. For the United States, Japan, and the ROK, this creates a significant, perhaps insurmountable, challenge when it comes to denuclearization. This does not mean that denuclearization should be abandoned; but it does mean that the allies should temper their expectations about denuclearization for the foreseeable future.
In light of these challenges, we recommend the allies focus their efforts on deterrence and containment of the DPRK threat first and foremost.They can do this most effectively by continuing to upgrade their bilateral and trilateral cooperation with one another.The recent Washington Declaration clarifying and reaffirming the US commitment to the ROK's security is a vital positive step in this direction (White House, 2023).The Camp David principles laid out by Washington, Tokyo, and Seoul to facilitate coordination and consultation between the three states will similarly enhance their ability to deter DPRK provocations.The United States, Japan, and the ROK will also need to improve their conventional military capabilities to offset the DPRK's growing nuclear arsenal.Seoul should continue to enhance its three-axis deterrence system, bolstering its ability to deny or punish a DPRK nuclear attack (Song 2022).Japan, similarly, should seek to enhance its missile defenses and civil defenses.
Despite the considerable obstacles that exist to denuclearization, there are a few measures the United States, ROK, and Japan could pursue to attempt to mitigate these obstacles.In particular, the allies must consider how to confront the challenge of relative gains.China is unlikely to agree to a cooperative deal if it believes the deal would ultimately leave it at a disadvantage compared to its rival.As such, the allies should consider how to ensure the potential benefits of a denuclearized peninsula are balanced evenly between the United States and China.For instance, given the sensitivity of some Chinese analysts to the possibility of closer DPRK-United States relations if denuclearization moves forward, it might be necessary for the allies to guarantee that the DPRK would remain neutral and not establish military ties with the United States in the event of denuclearization.
The allies should also consider how to minimize the influence of strategic mistrust in US-China cooperation on Korean denuclearization.As the United States and China are likely to view each other's policy proposals with considerable suspicion, it may be advisable for the allies to seek out a more neutral arbitrator or mediator to help play a role in developing and overseeing collaborative solutions to this policy problem.1 Any proposed cooperation should also start small, focusing on incremental changes to the status quo that pose little initial risk and could allow the United States and China to begin rebuilding mutual trust on the issue of DPRK denuclearization.
Secret talks have occasionally been used by statesmen to bypass political opposition to cooperation with a rival.Kissinger's secret trips to China in 1971, for instance, gave the United States and China the opportunity to discuss rapprochement without significant domestic political costs.The United States, Japan, and ROK should consider taking a similar approach to cooperation with China on Korean denuclearization.Covert negotiations might allow US and Chinese diplomats to circumvent the growing animosity between their publics and explore mutually acceptable policy solutions.
In conclusion, the intensifying strategic rivalry between the United States and China significantly reduces the chances that these two states will coordinate in pursuit of denuclearization on the Korean peninsula.Any US-China cooperation will be hampered by the problem of relative gains, mutual suspicion and mistrust, domestic political sentiment, and incentives to engage in coercive bargaining.Without support from both of these great powers, it will be virtually impossible to convince Pyongyang to give up its growing nuclear arsenal.As such, we recommend that the allies hedge against the possibility that DPRK nuclear weapons are here to stay by strengthening their ability to deter nuclear aggression by the DPRK.At the same time, the allies should explore opportunities to overcome the constraints of US-China strategic rivalry given the importance of this issue to peace and stability in Northeast Asia.
Disclosure Statement
No potential conflict of interest was reported by the author(s).
Notes on Contributors
Jihoon Yu is a research fellow at the Korea Institute for Defense Analyses.His expertise includes the ROK-US alliance, ROK-Europe security cooperation, inter-Korean relations, defense policy, and maritime security.He earned his MA in National Security Affairs from US Naval Postgraduate School and his Ph.D. in Political Science from the Maxwell School of Syracuse University.
Erik French is an assistant professor of political science at SUNY Brockport and an affiliated scholar with the America in the World Consortium.His teaching and research focuses on alliance politics and deterrence.He received his Ph.D. at the Maxwell School of Syracuse University. | 2024-06-10T15:10:04.991Z | 2024-01-02T00:00:00.000 | {
"year": 2024,
"sha1": "127d973e188ccaaf87bc47d735425150e52263bc",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/25751654.2024.2364422?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "929e721254fc16faf7a1d7b35d3626fd7d314152",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": []
} |
252209582 | pes2o/s2orc | v3-fos-license | N-cadherin protects oral cancer cells from NK cell killing in the circulation by inducing NK cell functional exhaustion via the KLRG1 receptor
Background Circulating tumor cells (CTCs) can survive in the circulation and return to primary tumors through a self-seeding process. However, the mechanisms underlying CTCs escape from natural killer (NK) cell-mediated immune surveillance remain unclear. Method Self-seeded tumor cells were isolated and characterized using a modified contralateral seeding model. A comparison of transcriptional profiles was performed between the parental cells and self-seeded cells. The molecular mechanism of self-seeded tumor cells escaping from NK cell was demonstrated through in vitro experiments and verified in a CTC-mimicking in vivo model. Then, the expression level of key protein mediating CTCs immune escape was detected in 24 paired primary and recurrent tumor samples of patients with oral cancer by the immunohistochemical method. Result Self-seeded cells displayed resistance to NK cell-mediated lysis and a higher tumor seeding ability than their parental cells. Elevated expression levels of the CDH2 gene and its protein product, N-cadherin were found in self-seeded cells. NK cells secreted cytokines, and fluid shear stress facilitated N-cadherin release by promoting A disintegrin and metalloprotease 10 (ADAM10) translation or converting the precursor ADAM10 to the mature form. Soluble N-cadherin triggered NK cell functional exhaustion by interacting with the killer cell lectin-like receptor subfamily G member 1 (KLRG1) receptor and therefore protected tumor cells from NK cell killing in the circulation. In vivo experimental results showed that overexpression of N-cadherin promoted tumor self-seeding and facilitated the survival of CTCs. Compared with primary tumors, N-cadherin expression was significantly increased in matched recurrent tumor tissues. Conclusion Together, our findings illustrate an unknown mechanism by which CTCs evaded NK cell-mediated immune surveillance, and indicate that targeting N-cadherin is an effective strategy to prevent CTCs from homing to primary tumor.
BACKGROUND
Circulating tumor cells (CTCs) are believed to play critical roles in cancer metastasis. [1][2][3] Even in the early stage of tumor formation, a large number of tumor cells may enter the circulation. 4 Most CTCs are eliminated by immune cells and various stresses in the circulation; thus, only a minority of CTCs are capable of surviving and eventually contribute to the spread of tumor. 5 Understanding of the immune evasion mechanisms of 'surviving CTCs' will provide new insights into tumor cell biology and cancer therapy. However, because CTCs are extremely rare in the circulation, it is difficult to obtain surviving CTC populations with conventional in vitro methods, which impedes such research.
Kim et al revealed a distinctive pathological outcome of CTCs in which these cells can return to and grow in the primary tumors through a process termed 'self-seeding'. 6 This finding showed that the flow of CTCs is more complicated than a one-way path. In addition to causing distant metastasis, CTCs can also affect host tumor growth. 7 Most importantly, self-seeded tumor cells originate from CTCs capable of surviving in the circulation, and thus offer a unique opportunity to trace and obtain the surviving CTC subpopulations for investigating their immune evasion mechanism.
WHAT IS ALREADY KNOWN ON THIS TOPIC
⇒ Thousands of circulating tumor cells (CTCs) are released daily into the bloodstream; however, only a small fraction of them can evade immunocyte-mediated killing and form secondary lesions. Our understanding of this immune escape mechanism remains poor.
WHAT THIS STUDY ADDS
⇒ The cross-talk between tumor cells and natural killer (NK) cells in the circulation can trigger NK cell functional exhaustion via N-cadherin/KLRG1 axis, thus enabling CTCs to avoid immune attack.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
⇒ Our study provided a basis for development of therapeutic strategies targeting N-cadherin/KLRG1 axis to promote NK cell antitumor activity, especially against CTCs.
Natural killer (NK) cells exert crucial roles in innate immune responses. Unlike cytotoxic T lymphocytes, NK cells mediate cytotoxic functions independent of major histocompatibility complex-mediated antigen presentation. 8 Therefore, NK cells exhibit a rapid response against infected and malignant target cells, which is critical for immune surveillance, particularly in the control of metastasis and hematological cancers. 9 10 NK cells can directly kill tumor cells or secrete various cytokines, such as interferon (IFN)-γ and tumor necrosis factor (TNF)-α, to eliminate tumor cells. 9 NK cell function is regulated by a balance between activating and inhibitory signals. Positive signals for cytolysis are provided by the ligation of several activating receptors, such as NKG2D, NKp46, NTB-A, and the natural cytotoxicity receptors NKp30 and NKp80. Conversely, NK cell inhibitory signaling 'exhausts' NK cells by suppressing the cytokine secretion and cytotoxicity of NK cells. Inhibitory receptors include NKG2A, killer immunoglobulin receptors, KLRG1, and CD85j. [11][12][13] Tumor cells can effectively suppress NK cell function by upregulating inhibitory molecules, which is believed to be one of the major mechanisms by which tumor cells escape immune surveillance. 14 15 Functional impairment of circulating NK cells has been reported in multiple types of tumors. [16][17][18][19] However, whether CTCs have an immunosuppressive effect on NK cells is still unknown.
In this study, we sought to demonstrate the immune escape mechanisms mediated by the cross-talk between tumor cells and NK cells in the circulation. We provide evidence that self-seeded tumor cells can release soluble N-cadherin, which contributes to KLRG1 signaling-induced NK cell functional exhaustion and facilitates tumor cell escape from NK cell-mediated immune surveillance. We also extended our findings from the self-seeded tumor cells to CTCs in an animal model. Furthermore, targeting N-cadherin proved to be an effective strategy to clear CTCs and prevent tumor cells from homing to the primary tumor.
Patient samples
Primary NK cells were isolated from 10 healthy donors and 10 patients with oral cancer. Human CTC samples were also isolated from these 10 patients with oral cancer. Primary and matched recurrent tumor tissue samples were collected retrospectively from 24 patients with oral cancer (stage T3-T4) who underwent surgical resection from 2015 to 2020 in the Department of Oral and Maxillofacial Surgery. Recurrence was defined as any local or regional recurrence. Tissue samples were obtained by biopsy or surgical resection.
Self-seeding animal model
Rag1-/- mice were purchased from Shanghai Model Organisms. Contralateral seeding experiments were performed on Rag1-/- mice (n=6) to establish the 'self-seeding' model as described previously. 6 Briefly, 2×10^6 paired GFP-labeled cells (designated recipient tumor cells) or RFP-labeled cells (designated donor tumor cells) were subcutaneously implanted in the contralateral flank regions. All mice were sacrificed after 30 or 60 days, with dissection of the bilateral tumors. The seeding of tumor cells was observed under a fluorescence microscope. In the functional analysis groups, the donor tumor cells were changed to the indicated specific cell lines. The animal study was reviewed and approved by the Animal Ethics Committee of Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine (2017-314).
Additional methods are detailed in online supplemental material, including cell lines and cell culture, fluorescence activated cell sorting (FACS), flow cytometry analysis, RNA sequencing (RNA-seq), blocking antibody preparation, Western Blot, immunofluorescence, NK cells isolation, NK cell cytotoxicity assay, RNA interference and lentiviral expression, ELISA, capturing and staining of CTCs, in vivo flow cytometry (IVFC), microdissection, capillary Western assay, etc and statistical analysis.
RESULT
NK cell-dependent self-seeding model
Previously, tumor self-seeding was confirmed in several tumor types, such as breast cancer, colon adenocarcinomas and melanomas, in nude mouse models. 6 To determine the self-seeding capacity of oral cancer, a type of cancer that frequently metastasizes via the lymphatic rather than the hematogenous pathway, 20 contralateral seeding experiments were performed. We modified the xenograft model by using Rag1-/- mice, which have normal NK cells but are deficient in functional T cells and B cells, to focus on the role of NK cells in the control of tumor self-seeding. To specifically trace the self-seeded tumor cells, RFP-labeled SCC9 cells were defined as donor cells, and GFP-labeled SCC9 cells were defined as recipient cells. Sixty days after inoculation, obvious seeding of donor SCC9 cells could be detected in the recipient tumor mass (figure 1A, left, online supplemental figure 1A). To assess the potential preventive effect of NK cells on tumor self-seeding, Rag1-/- mice were treated with anti-asialo GM1 antibodies to deplete NK cells (online supplemental figure 1B). The depletion of NK cells significantly promoted tumor self-seeding. Increased numbers of self-seeded cells (4.28-fold higher) were observed in NK cell-depleted Rag1-/- mice on day 60 post-inoculation (figure 1A, middle and right). In addition, NK cell depletion accelerated tumor self-seeding. Thirty days after inoculation, rare tumor cell seeding was observed in the control group (figure 1B, left). However, numerous seeded cells were found at the same time point in NK cell-depleted Rag1-/- mice (figure 1B, middle and right). These results indicate that tumor self-seeding is a common event in tumors, regardless of the pathway by which they metastasize, and that NK cells play critical roles during this process.
Isolation and characterization of self-seeded cells
These self-seeded tumor cells represent distinct subpopulations that can evade NK cell-mediated immune surveillance in the circulation and finally home to the primary tumor. We, therefore, isolated and cultured these seeded cells (SCC9-seeded) from a single cell suspension of digested recipient tumors by FACS (figure 1C,D). No significant differences in migration ability, colony formation or proliferation were observed between the SCC9-seeded cells and SCC9-parental cells (figure 1E-G). SCC9-seeded cells exhibited molecular changes consistent with epithelial to mesenchymal transition (EMT). Decreased expression of E-cadherin and increased expression of Vimentin, Snail, ZEB1 and ZEB2 were observed in SCC9-seeded cells (figure 1H). Cleaved PARP, a marker of apoptosis, was analyzed in both monolayer-cultured and suspension-cultured cancer cells. A decreased cleaved PARP level was observed in suspension-cultured SCC9-seeded cells compared with suspension-cultured SCC9-parental cells, although these two cell lines had similar cleaved PARP levels in monolayer cultures (figure 1I). This result indicates that SCC9-seeded cells are more resistant to anoikis, a form of apoptosis resulting from the loss of cell-cell interactions. In the cytotoxicity assay, SCC9-seeded cells and SCC9-parental cells were cocultured with freshly isolated NK cells (online supplemental figure 1C). Lower lysis of SCC9-seeded cells relative to SCC9-parental cells was observed (figure 1J). The tumor seeding ability of SCC9-seeded cells was also examined. SCC9-seeded cells displayed an approximately 3.97-fold higher seeding ability than SCC9-parental cells (figure 1K). In addition, SCC9-seeded cells developed self-seeding much sooner (30 days after inoculation) than did SCC9-parental cells (no tumor cell seeding formation at the same time point) (figure 1L). These data indicate that the isolated SCC9-seeded cells are subpopulations with the ability to avoid NK cell killing and anoikis in the circulation.
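The cytotoxicity assay itself is described only in the online supplemental material; purely as a generic illustration of how lysis percentages of the kind reported above are typically derived from a release-based killing assay, a minimal sketch follows. The function name and readings are hypothetical, not data from this study.

```python
# Minimal sketch of a specific-lysis calculation for a release-based NK cytotoxicity assay
# (e.g., LDH or calcein release). All readings below are made-up illustrative values.

def specific_lysis(experimental, spontaneous, maximum):
    """Percent specific lysis = (E - S) / (M - S) * 100."""
    return (experimental - spontaneous) / (maximum - spontaneous) * 100.0

spontaneous = 0.20        # target cells alone (arbitrary absorbance units)
maximum     = 1.00        # target cells fully lysed with detergent

seeded_with_nk   = 0.42   # hypothetical reading: SCC9-seeded targets + NK cells
parental_with_nk = 0.58   # hypothetical reading: SCC9-parental targets + NK cells

print(round(specific_lysis(seeded_with_nk, spontaneous, maximum), 1))    # 27.5 -> lower lysis of seeded cells
print(round(specific_lysis(parental_with_nk, spontaneous, maximum), 1))  # 47.5
```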
High expression of N-cadherin in self-seeded cells
To reveal the underlying mechanism that triggers the escape of SCC9-seeded cells from NK cell-mediated immune surveillance, transcriptional profiling and comparative analysis were performed (figure 2A). Differential expression of a set of genes (62 genes upregulated and 74 genes downregulated in SCC9-seeded cells compared with SCC9-parental cells) was observed (fold change more than 20 times) (set 1). Then, we reviewed 11 published studies and determined 67 membrane molecules related to NK cell function (set 2) (online supplemental table 1). A Venn diagram analysis indicated three overlapping genes (CDH2, FN1 and HLA-E) between set 1 and set 2 (figure 2B). Using qRT-PCR, we verified the upregulation of CDH2, but not FN1 and HLA-E, in SCC9-seeded cells (figure 2C, online supplemental figure 1D). Therefore, CDH2 was considered to be potentially involved in tumor cell escape from the immune surveillance of NK cells. Two other classical members of the CDH family, E-cadherin (CDH1) and R-cadherin (CDH4), are also known as ligands for KLRG1, 21 and the homology between the human and murine N-cadherin proteins was the highest (96.55%) (figure 2D). Elevated CDH2 mRNA, but not CDH1 and CDH4 mRNA, and increased N-cadherin were confirmed in SCC9-seeded cells (figure 2E-G). To determine whether self-seeded cells express N-cadherin during the circulation process, CTCs were harvested from 10 patients with oral cancer as well as from the contralateral seeding model and stained with an N-cadherin antibody. More than 60% of CTCs were found to express N-cadherin in both the patients and the animal model (figure 2H, online supplemental figure 1E and table 2). In addition to N-cadherin, the expression of its receptor, KLRG1, was also measured in NK cells isolated from patients with oral cancer, healthy individuals and Rag1-/- mice. Positive KLRG1 expression was observed on numerous human and mouse NK cells (figure 2I and online supplemental figure 1F). Moreover, an increased proportion of KLRG1 (+) NK cells was observed in patients with oral cancer (64.22%±13.11%) compared with healthy individuals (39.89%±14.53%), and these increased KLRG1 (+) NK cells were mainly CD56dim subsets, which are known to have higher cytotoxicity than CD56bright subsets (figure 2I). 22
N-cadherin protects self-seeded cells from NK cytolysis through the KLRG1 receptor
High expression of N-cadherin was observed in self-seeded cells, and N-cadherin binding to its receptor KLRG1 reportedly impairs NK-cell function and immune responses. 23 We therefore assumed that the interaction between N-cadherin and the KLRG1 receptor may contribute to the inhibition of NK cell antitumor activity and the immune evasion of self-seeded tumor cells. To verify the influence of tumor cell N-cadherin expression on NK cell cytotoxicity, N-cadherin was silenced in SCC9-seeded cells (SCC9-seeded shN-cad) (online supplemental figure 2A, B). SCC9-seeded cells, SCC9-parental cells and SCC9-seeded shN-cad cells were cocultured with NK cells. Silencing of N-cadherin resulted in a much higher lysis rate in SCC9-seeded shN-cad cells than in SCC9-seeded cells and SCC9-parental cells (figure 3A, left). With the N-cadherin blocking antibody treatment, lysis of SCC9-seeded cells by NK cells was significantly increased. This result suggested that the N-cadherin expression level in tumor cells determines their susceptibility to NK cell killing.
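As an aside, the candidate screen described at the start of this section (a >20-fold differential-expression filter intersected with the 67 curated NK-related membrane molecules) amounts to a simple set intersection. A minimal sketch under assumed file and column names, not the study's actual pipeline, is shown below.

```python
import numpy as np
import pandas as pd

# Illustrative sketch of the screening step described above: keep genes changed more than
# 20-fold between SCC9-seeded and SCC9-parental cells (set 1), then intersect with a curated
# list of NK-function-related membrane molecules (set 2). File and column names are placeholders.

de_table = pd.read_csv("seeded_vs_parental_DE.csv")              # columns: gene, log2_fold_change
nk_related = set(pd.read_csv("nk_membrane_genes.csv")["gene"])   # the 67 curated genes (set 2)

threshold = np.log2(20)                                          # >20-fold change, either direction
set1 = set(de_table.loc[de_table["log2_fold_change"].abs() > threshold, "gene"])

candidates = sorted(set1 & nk_related)                           # the Venn overlap (set 1 ∩ set 2)
print(candidates)   # in the study, this overlap contained CDH2, FN1 and HLA-E
```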
To clarify that binding to its receptor KLRG1 is required for N-cadherin-induced impaired antitumor efficacy of NK cells, KLRG1 (−)/(+) NK subsets were freshly isolated (online supplemental figure 2C), and cocultured with SCC9-seeded cells, SCC9-parental cells and SCC9-seeded shN-cad cells. Notably, KLRG1 (+) NK subsets produced lysis patterns resembling those of total NK populations (figure 3A, middle). However, no difference in lysis rates was found in the KLRG1 (−) NK subset group (figure 3A, right). These results indicate that the expression of N-cadherin can protect tumor cells from NK cytolysis, which is mediated by interacting with the KLRG1 receptor.
In addition to direct killing, self-seeded cell-induced dysfunction in NK cells was also evaluated in an indirect culture system using the NK-sensitive cell line K562. We compared the NK cell cytotoxicity after exposure to various tumor cell culture supernatants. Incubation of the SCC9-parental cell supernatant slightly inhibited the cytotoxic activity of NK cells against K562 target cells when compared with normal culture medium. Treatment of SCC9-seeded cell supernatant, but not SCC9-seeded shNcad cells supernatant, caused a more profound inhibitory effect, which could be reversed by the application of an N-cadherin blocking antibody (figure 3B, left). Similar results were observed in KLRG1 (+) NK subsets exposed to tumor cell supernatants (figure 3B, middle). However, none of the tumor supernatants had any effect on the KLRG1 (−) NK subsets (figure 3B, right). These data suggest that soluble molecules released by SCC9-seeded cells are involved in the suppression of NK cell antitumor function in a KLRG1 receptor-dependent manner. Given that blocking N-cadherin could restore NK cell function, we reasoned that this effect factor might be soluble N-cadherin. The levels of soluble N-cadherin in these tumor cell supernatants were determined by ELISA. Soluble N-cadherin levels were elevated in SCC9-seeded cell supernatants compared with SCC9-parental cells and SCC9-seeded shN-cad cells (figure 3C). We next sought to determine whether the higher expression of soluble N-cadherin could induce the impaired NK cell cytotoxicity function. Total NK populations and KLRG1 (−)/ (+) NK subsets were administered recombinant human soluble N-cadherin (rhN-cadherin) (500 ng/mL) with or without an N-cadherin blocking antibody. RhN-cadherin treatment resulted in an obvious cytotoxicity inhibition in the total NK cell and KLRG1 (+) NK cell subset groups but not in the KLRG1 (−) NK cell subset group, and the application of an N-cadherin blocking antibody reversed this effect ( figure 3D). This result shows that self-seeded cell-derived soluble N-cadherin could suppress NK cell cytotoxicity by interacting with the KLRG1 receptor.
Soluble N-cadherin-induced and KLRG1 interaction-induced functional exhaustion of NK cells
Inhibitory signals may facilitate the exhausted status of NK cells. Typically, exhausted NK cells are characterized by (1) diminished cytolytic activity; (2) decreased production of effector cytokines; and (3) an exhausted phenotype characterized by downregulated expression of activating receptors and upregulation of inhibitory receptors. 24 To verify that self-seeded cell-derived soluble N-cadherin could trigger NK cell functional exhaustion, the expression of cytolytic molecules (granzymes and perforin), cytokine production (IFN-γ and TNF-α) and the exhausted phenotype of NK cells were measured. NK cells incubated with SCC9-seeded cell supernatant or treated with rhN-cadherin displayed an obvious exhausted phenotype, as evidenced by decreased expression of granzymes and perforin and lower IFN-γ (but not TNF-α) production, along with the downregulation of four main activating receptors (CD16, NKG2D, DNAM-1 and NKp46) but upregulation of three inhibitory receptors (NKG2A, TIM-3 and KLRG1 itself) when compared with SCC9-parental cell supernatant-treated NK cells (figure 3E, left, figure 4A and online supplemental figure 3). In rescue experiments, the SCC9-seeded cell supernatant-induced or rhN-cadherin-induced NK cell exhaustion was abrogated with the use of the N-cadherin blocking antibody. These experiments were also performed in KLRG1 (−)/(+) NK subsets. Due to the limitation of our specimens, we only assessed the influence of rhN-cadherin on the phenotype of these two subsets. Similar but more pronounced alterations were observed in KLRG1 (+) NK subsets. In contrast, the rhN-cadherin treatments had no effect on the KLRG1 (−) NK subsets (figure 3E, middle and right, and figure 4A and online supplemental figure 3). These findings demonstrate that self-seeded cell-derived soluble N-cadherin facilitates the functional exhaustion of NK cells, which is dependent on the presence of the KLRG1 receptor.
In addition to activating and inhibitory receptors, the expression of death ligands also affects the antitumor function of NK cells. Therefore, we examined the expression of FasL and TRAIL in KLRG1 (−)/(+) NK subsets after exposure to various tumor cell culture supernatants. No significant difference in surface expression of FasL and TRAIL was observed (figure 4B), suggesting that death ligands may not be involved in N-cadherin/KLRG1-mediated inhibition of NK cell cytotoxicity.
NK cell-secreted cytokines and fluid shear stress facilitate N-cadherin release from tumor cells by promoting A disintegrin and metalloprotease 10 translation or converting pro-ADAM10 to m-ADAM10
A disintegrin and metalloprotease 10 (ADAM10), a metalloproteinase, has previously been reported to cleave and induce ectodomain shedding of cadherin family members. 25 Similar levels of ADAM10 protein and mRNA transcript corresponding to full-length ADAM10 with proteolytic activity were observed in SCC9-parental and SCC9-seeded cells (figure 5A and online supplemental figure 4A). A 90 kDa band corresponds to the precursor protein (pro-ADAM10), and a 70 kDa band corresponds to the mature and catalytically active form of ADAM10 (m-ADAM10). To determine whether ADAM10 mediated soluble N-cadherin release from tumor cells, we silenced ADAM10 in SCC9-seeded cells (SCC9-seeded shADAM10) (figure 5A and online supplemental figure 4B). Knockdown of ADAM10 significantly suppressed the soluble N-cadherin levels in SCC9-seeded cell supernatants (figure 5B) and abolished the inhibitory effect of tumor supernatants on NK cell cytolytic activity (figure 5C), thus supporting that ADAM10-mediated production of soluble N-cadherin was responsible for the self-seeded cell-induced NK cell dysfunction.
Most interestingly, when tumor cells (but not N-cadherin-silenced or ADAM10-silenced tumor cells) were pre-exposed to NK cell supernatants, a significant increase in soluble N-cadherin levels was observed (figure 5D). This result suggested that some NK cell-sourced signaling molecules can promote ADAM10-induced N-cadherin shedding from tumor cells. Therefore, the influence of IFN-γ and TNF-α, two major cytokines secreted by NK cells, 26 on ADAM10 proteolytic activity and soluble N-cadherin production in tumor cells was tested. As shown in figure 5E,F, combined treatment with IFN-γ and TNF-α was sufficient to increase ADAM10 activity and soluble N-cadherin levels. In contrast, no alteration in soluble N-cadherin expression was observed in ADAM10-silenced tumor cells that received treatment with IFN-γ and TNF-α.
To clarify the activation mechanism of ADAM10 by IFN-γ/TNF-α, the full-length mRNA and protein expression levels of ADAM10 were measured in tumor cells following IFN-γ/TNF-α treatment. We found that IFN-γ/ TNF-α treatment did not significantly change ADAM10 mRNA levels, but increased the protein levels of both active and precursor forms of ADAM10 (figure 5G, online supplemental figure 4C,D). In addition, the translation inhibitor cycloheximide (CHX) but not the transcription inhibitor actinomycin D (ActD) abrogated the IFN-γ/TNF-α-induced increase in ADAM10 expression (online supplemental figure 4E). These results indicate that IFN-γ/TNF-α affects ADAM10 expression at the translational level. IFN-γ and TNF-α were reported to activate PI3K/AKT/mTOR signaling, 27 28 which has been well characterized to control protein translation via its two downstream effectors, p70 ribosomal protein S6 kinase (p70-S6K) and eukaryotic translation initiation factor 4E (eIF4E) binding protein 1 (4E-BP1). 29 As shown in figure 5G, IFN-γ/TNF-α treatment resulted in enhanced phosphorylation of AKT, p70-S6K and 4E-BP1, indicating the activation of PI3K/AKT/mTOR signaling in response to IFN-γ/TNF-α. Treatment of the mTOR inhibitor, rapamycin, significantly blocked the IFN-γ/TNF-α-induced phosphorylation of p70-S6K and 4E-BP1 but had no effect on the phosphorylation status of the upstream protein Akt. In addition, pretreatment of SCC9-seeded cells with rapamycin and IFN-γ/TNF-α neutralizing antibodies completely abrogated the IFN-γ/ TNF-α-induced enhancement of ADAM10 protein and soluble N-cadherin levels ( figure 5G,H), and reversed the NK cell supernatants-induced and IFN-γ/TNF-α-induced cytotoxicity inhibition in NK cells (figure 5I, online supplemental figure 4F). These results demonstrate that IFN-γ/TNF-α promotes ADAM10 protein translation by activating PI3K/AKT/mTOR signaling, thus promoting N-cadherin shedding and causing NK cell exhaustion.
Previous studies reported that shear stress could activate ADAM10 via the Piezo1 force sensor. 30 We then tested the effect of fluid flow on ADAM10 activity and expression. ADAM10 activity and m-ADAM10 expression levels, but not the mRNA levels, were elevated in tumor cells cultured in a shaker incubator compared with those cultured under normal conditions (figure 5J,K and online supplemental figure 4G). Piezo1 is a calcium-permeable non-selective cationic channel, and activation of Piezo1 leads to calcium influx. Our results showed that shear stress did not alter the expression of Piezo1 (online supplemental figure 4H), but caused elevation of the cytosolic calcium concentration, which could be blocked by GsMTx4, a specific inhibitor of the Piezo1 channel (figure 5L). This result suggests that shear stress can trigger Piezo1 activation. Accompanied by the activation of Piezo1, the levels of m-ADAM10 increased, while the levels of pro-ADAM10 decreased (figure 5K,M), which could be reversed by GsMTx4, suggesting that shear stress converted pro-ADAM10 to m-ADAM10 via Piezo1 activation. GsMTx4 treatment also abrogated the shear stress-induced enhancement of soluble N-cadherin and the inhibitory effect on NK cell cytotoxicity (figure 5N,O).
A time-course analysis showed that m-ADAM10 expression returned to its original value within 72 hours or 36 hours after removing IFN-γ/TNF-α or shear stress treatment, indicating that IFN-γ/TNF-α and shear stress might only exert a short-lasting effect on ADAM10 expression (supplemental figure 4I).
Combined treatment with cytokines and fluid flow led to more soluble N-cadherin release and enhanced inhibitory effect on NK cell cytolysis, but had no effect on tumor cells lacking ADAM10 (figure 5P,Q and online supplemental figure 4J).
NK cell challenge is not sufficient to cause an SCC9-seeded cell-like phenotype in SCC9-parental cells
To determine whether exposure to NK cells may induce an SCC9-seeded-like phenotype in SCC9-parental cells, we cocultured SCC9-parental cells with freshly isolated NK cells in the indirect culture system. The expression of N-cadherin and ADAM10 was detected in the tumor cells. The N-cadherin expression level remained low, while ADAM10 expression increased in these cells (online supplemental figure 5A and figure 4C). In spite of the increase in ADAM10 expression, we observed only small increases in the levels of soluble N-cadherin (figure 5D), which may be attributed to the low expression of N-cadherin in these cells. These cells displayed a similar susceptibility to NK-mediated lysis as the untreated SCC9-parental cells, and their supernatants only had a weak inhibitory effect on the cytotoxic activity of NK cells when compared with SCC9-seeded cells (online supplemental figure 5B). These results show that NK cell challenge is not sufficient to cause an SCC9-seeded cell-like phenotype in SCC9-parental cells.
SCC9-parental and SCC9-seeded cell enriched samples were obtained from both Rag1-/- and NK cell-depleted Rag1-/- self-seeding models by laser capture microdissection (online supplemental figure 5C), and the expression of N-cadherin and ADAM10 was assessed by capillary Western assays. There was no significant difference in the expression of N-cadherin and ADAM10 between the two groups (figure 5R,S), further indicating that the sustained expression of N-cadherin and ADAM10 in tumor cells is independent of the presence of NK cells.
N-cadherin and ADAM10 expression affects the tumor self-seeding process
Next, we assessed whether the expression levels of N-cadherin and ADAM10 on tumor cells could affect tumor self-seeding in vivo. Contralateral seeding experiments were performed in Rag1-/- mice with or without anti-asialo GM1 antibody treatment. We forced SCC9 cells to stably express N-cadherin (SCC9 oeN-cad) (online supplemental figure 2B). SCC9-parental, SCC9-seeded, SCC9-seeded shN-cad, SCC9-seeded shADAM10 and SCC9 oeN-cad cells were inoculated as donor cells, and GFP-labeled SCC9-parental cells were inoculated as recipient cells. Silencing of N-cadherin dramatically inhibited the seeding ability of SCC9-seeded cells (tumor cell seeding formation was hardly observed 60 days after inoculation) (figure 6A and online supplemental figure 6A). ADAM10 silencing also greatly suppressed tumor cell seeding ability (6.93-fold lower than that of the SCC9-seeded cells, 60 days after inoculation) (figure 6B and online supplemental figure 6B). Depletion of NK cells could restore the seeding ability of these tumor cells (figure 6C,D, online supplemental figure 6C,D). SCC9 oeN-cad cells showed a stronger (3.43-fold higher than that of the SCC9-parental cells, 60 days after inoculation) and faster (seeding formation was observed as early as 30 days after inoculation) tumor seeding ability than SCC9-parental cells (figure 6E,F, online supplemental figure 6E,F). These results indicate that downregulation of N-cadherin and ADAM10 could inhibit tumor self-seeding, which is dependent on NK cells.
N-cadherin and ADAM10 expression facilitates survival of CTCs
To test whether N-cadherin and ADAM10 indeed favor CTC survival, IVFC was used to quantitatively assess the CTCs in mice at 5 hours after tail vein injections of the indicated cell lines (online supplemental figure 7A). Typical data traces are presented in figure 7A-H; each fluorescence peak corresponded to a single surviving CTC. The phenotype of NK cells isolated from the spleen of mice in each experimental group was also examined. NK cells from mice injected with SCC9-seeded or SCC9 oeN-cad cells exhibited an obvious exhausted phenotype, as evidenced by decreased expression of granzymes and perforin, lower cytokine production, and downregulation of two main activating receptors on mouse NK cells (NKG2D and NKp46) but upregulation of two inhibitory receptors on mouse NK cells (NKG2A and KLRG1 itself) when compared with the NK cells isolated from mice injected with SCC9-parental cells. Knockdown of N-cadherin and ADAM10 in tumor cells reversed the exhausted phenotype of NK cells (figure 7J, online supplemental figure 7B). These results show that CTCs with high expression of N-cadherin exhaust NK cells in vivo.
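For readers who want to reproduce this kind of event counting, the sketch below shows how surviving CTCs could be enumerated from an IVFC fluorescence trace by simple peak detection; the simulated trace, peak width and threshold are illustrative assumptions, not the acquisition settings used in the study.

```python
# Hypothetical sketch: counting CTC events as fluorescence peaks in an IVFC trace.
# The trace and threshold below are simulated placeholders, not the authors' data.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.05, 60_000)          # simulated baseline fluorescence
for centre in (5_000, 22_000, 41_000):          # three simulated CTC transits
    trace[centre:centre + 40] += 1.0

# Call a peak a CTC event only if it clearly exceeds the baseline noise.
threshold = trace.mean() + 5 * trace.std()
peaks, _ = find_peaks(trace, height=threshold, distance=200)
print(f"estimated surviving CTCs in this trace: {len(peaks)}")
```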
Elevated N-cadherin expression in recurrent oral cancer samples
We measured and compared the expression levels of N-cadherin and ADAM10 in both primary and matched recurrent tumor tissues from 24 patients. N-cadherin and ADAM10 expression was semiquantitatively assessed by immunohistochemistry. Overall, N-cadherin expression in recurrent tissues was significantly higher (172.41±14.27 score value) than that in primary disease tissues (136.87±11.14 score value) (figure 7K). N-cadherin staining in primary tumors was localized mainly on the membranes, while a wider expression of N-cadherin from the membrane to the cytoplasm was observed in the recurrent counterparts. In contrast, no significant difference in the expression of ADAM10 was found between primary and recurrent tumor tissues, which is consistent with the ADAM10 expression patterns observed in SCC9-seeded and SCC9-parental cells (figure 7L). These data indicate that tumor cells with high expression of N-cadherin were enriched during tumor recurrence, probably through tumor self-seeding.
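As a purely illustrative aside, a paired primary-versus-recurrent comparison of the kind described above could be run as follows; the score vectors are fabricated from the reported group means, SDs and n = 24, so the output does not reproduce the authors' statistics.

```python
# Hedged sketch of a paired comparison of IHC scores (primary vs matched recurrent).
# The individual values are simulated placeholders; only means/SDs/n come from the text.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
primary   = rng.normal(136.87, 11.14, 24)   # simulated primary-tumor scores
recurrent = rng.normal(172.41, 14.27, 24)   # simulated matched recurrent scores

t, p = stats.ttest_rel(recurrent, primary)       # paired t-test
w, p_w = stats.wilcoxon(recurrent, primary)      # non-parametric alternative
print(f"paired t-test: t = {t:.2f}, p = {p:.3g}")
print(f"Wilcoxon signed-rank: W = {w:.1f}, p = {p_w:.3g}")
```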
DISCUSSION
In our study, we successfully isolated self-seeded tumor cells and confirmed that NK cell antitumor immunity plays a critical role in eliminating tumor cells in the circulation. 31 ADAM10-mediated tumor cell shedding of N-cadherin can induce NK cell functional exhaustion via the KLRG1 receptor. We documented that CTCs adopt this mechanism to protect themselves from clearance by circulating NK cells.
The fact that NK cell depletion could accelerate and increase tumor self-seeding confirms that NK cells exert a key role in immune surveillance against tumor cells in circulation. The successfully obtained self-seeded cells exhibited molecular changes consistent with EMT and increased anoikis resistance when compared with their parental cells. The tumor seeding ability of the harvested self-seeded cells was much higher than that of their parental cells, suggesting that in addition to anoikis resistance, these subpopulations may also be able to escape NK cell-mediated immune surveillance. This notion was supported by the NK cell cytotoxicity assay results. Self-seeded cells exhibited lower susceptibility to lysis by NK cells than parental cells.
Remarkable molecular heterogeneity has been reported within CTCs or between CTCs and their primary tumors, which is related to tumor progression and responses to immunotherapy. 32 However, these studies did not distinguish surviving CTCs from total CTC populations and, therefore, may not accurately reflect the molecular characteristics of 'functional CTCs'. In this study, high expression of CDH2 gene and its protein product, N-cadherin, was observed in self-seeded cell lines, which represent 'surviving CTCs'. N-cadherin is considered a signature of EMT through which tumor cells can shed from primary tumors into the peripheral blood. 33 Given that SCC9-seeded cells display an EMT phenotype, we speculate that the EMT process may contribute to the high expression of N-cadherin in SCC9 seeded cells, via cadherin switching. 34 N-cadherin has been identified as a natural ligand of the inhibitory receptor KLRG1 that is shared between NK cells and CD8 (+) T cells. 21 After binding to its receptor KLRG1, N-cadherin can transduce inhibitory signals to NK cells. We, therefore, hypothesize that N-cadherin may protect tumor cells from immune surveillance through KLRG1 signaling. We found that the expression of N-cadherin in tumor cells determined their susceptibility to lysis by the total NK populations and the KLRG1 (+) NK subsets, but not by the KLRG1 (−) NK subsets. KLRG1 is expressed by 50% of NK cells. 35 Our data indicated a remarkably increased proportion of KLRG1 (+) subsets in tumor patients. Recently, Sunwoo and colleagues found that NK cells in patients with head and neck squamous cell carcinoma can differentiate into distinct subsets and divergent terminal states, 36 which depends on contact with tumor cells. Since KLRG1 is known to be a terminal differentiation marker of NK cells, 37 we think tumor cell-induced NK cell differentiation leads to the upregulation of KLRG1 on NK cells in tumor patients. Notably, these increased KLRG1 (+) cells were mainly cytotoxic CD56 dim cells.
KLRG1 signaling-induced CD56 dim NK cell dysfunction may partly explain the poor antitumor response of NK cells in tumor patients.
Another interesting finding in our study is that we revealed an unrecognized role for soluble N-cadherin in NK cell antitumor immunity. N-cadherin is highly expressed as an adhesion molecule in the epithelial tissue, and its soluble form is stably expressed in serum. 38 Shedding of soluble inhibitory ligands from tumor cells represents a novel strategy by which tumor cells evade immune surveillance. For example, elevated soluble Galectin-3 or soluble ULBP expression suppresses NK cell activity in gastric cancer or cervical adenocarcinoma. 39 40 N-cadherin and other cadherin family members have been described as ligands for KLRG1; however, it remains undetermined whether soluble N-cadherin could regulate NK cell function. Our data demonstrated that in addition to membrane-bound N-cadherin-induced inhibition of NK cell cytotoxicity, soluble N-cadherin released by CTCs can also cause NK cell functional exhaustion through the KLRG1 receptor in a cell-cell non-contact manner.
ADAM10 has been implicated as a predominant metalloprotease responsible for the ectodomain shedding of cadherin family proteins. 25 A positive correlation between ADAM10 expression and soluble N-cadherin levels in culture supernatants was observed in the tumor cell lines. Of note, we found that NK cell-produced IFN-γ and TNF-α could enhance ADAM10 activity and subsequently release more soluble N-cadherin into the extracellular environment by activating PI3K/AKT/mTOR signaling. These interesting findings indicate that tumor cells can sense the signaling molecules secreted by NK cells in the surrounding environment and promote the self-release of soluble N-cadherin to avoid immune attack. In addition, fluid shear stress was found to convert pro-ADAM10 to m-ADAM10, suggesting that although shear force in the circulation can kill tumor cells, tumor cells could also exploit fluid stress to protect themselves from immune surveillance.
Our in vivo study showed that ectopic expression of N-cadherin triggered rapid and higher tumor self-seeding in the contralateral seeding model. Conversely, silencing of ADAM10 and N-cadherin in self-seeded cells partially or completely suppressed the tumor cell self-seeding ability, respectively. The difference in the inhibition efficiency between ADAM10 silencing and N-cadherin silencing treatment may be because ADAM10 silencing merely inhibited the production of soluble N-cadherin while N-cadherin silencing abolished both soluble and membrane-bound N-cadherin. By using IVFC and an animal experimental model of CTCs, we also provided in vivo evidence that the expression of N-cadherin facilitates survival of CTCs by inducing NK cell exhaustion.
Higher expression of N-cadherin was observed in recurrent oral cancer tissues than that in matched primary tumors, suggesting CTCs that evade immune killing by NK cells may return to primary tumors and favor local recurrence, which needs further experimental confirmation.
Our study provides new insight into the diagnostic and therapeutic potential of the N-cadherin/KLRG1 axis in cancer. N-cadherin expression analysis on CTCs could underpin a new liquid biopsy strategy to stratify patients at high risk of recurrence who should receive more aggressive treatment. In addition, our data support KLRG1 as a novel checkpoint target for promoting NK cell antitumor activity, especially against CTCs.
Collectively, our data showed that a distinct subpopulation of CTC characterized by high expression of N-cadherin can sense NK cell-secreted signaling molecules and release soluble N-cadherin by enhancing ADAM10 activity. Membrane-bound and soluble N-cadherin triggers NK cell functional exhaustion and protects CTCs from NK cell-mediated killing by interacting with the KLRG1 receptor (online supplemental figure 8). Targeting N-cadherin is an effective strategy to prevent CTCs from homing to primary tumor. | 2022-09-14T06:18:07.967Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "02a91689b52f84861526413d886f70497ca91c15",
"oa_license": "CCBYNC",
"oa_url": "https://jitc.bmj.com/content/jitc/10/9/e005061.full.pdf",
"oa_status": "GOLD",
"pdf_src": "BMJ",
"pdf_hash": "82a113abbe0956c6689542872e7f39970517c831",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268999849 | pes2o/s2orc | v3-fos-license | Osteochondrolipoma of the foot treated by surgical excision: a case report and literature review
Background Osteochondrolipomas, classified as a new benign subtype of lipomas and characterised by chondroid and osseous differentiation, are rare lesions that have been infrequently reported in previous literature. The maxillofacial region was reported as the most frequent localization, with infrequent occurrence in the lower limb. This paper represents the first documented case report of osteochondrolipoma in the foot. Case presentation A 51-year-old male patient presented with a chief complaint of right foot pain at the plantar aspect, accompanied by the observation of swelling between the first and the second metatarsal shafts. His complaints of pain and swelling started 10 and 4 years prior, respectively. Since their onset, both symptoms have progressed in nature. Imaging revealed a large mass exhibiting a nonhomogenous composition of fibrous tissue and bony structures. Surgical intervention through total excision was indicated. Conclusion Osteochondrolipoma is a benign lesion that can affect the foot, leading to decreased functionality due to pain and swelling. Surgical excision is the recommended approach for this lesion, providing both symptomatic relief and confirmation of the diagnosis through histopathological examination.
Introduction
Lipomas have been reported to be the most prevalent benign soft tissue neoplasms, which are further classified into superficial and deep, depending on their location [1]. Differentiation into different mesenchymal elements, including fibrous tissues, blood vessels, or muscle, has been documented. However, differentiation into bone or cartilage has a low predilection and is often associated with parosteal localization [2]. Osteochondrolipomas, classified as a new benign subtype of lipomas and characterized by chondroid and osseous differentiation, are rare lesions that have been infrequently reported in previous literature. The first case, reported by Soeder et al., involved a 70-year-old male patient who underwent MRI of the left thigh, revealing the presence of nonhomogenous fibrinous tissue and bony structures. Furthermore, this isolated lesion was found to be independent of the neurovascular bundle and not attached to the bone. Histopathological examination demonstrated an encapsulated lesion with a smooth surface composed of a vascularized fibrous capsule. Within the capsule, yellowish adipose tissue was observed, along with a notable presence of cartilaginous and bony components. Microscopic evaluation revealed a significant number of osteocytes, accompanied by a small rim of osteoblasts indicative of woven bone formation. Adjacent to the woven bone, cartilaginous tissue was also identified [3]. Other regions that have been reported in the literature include the forearm, ischial region, mandible, axilla, scapular region, popliteal fossa, and chest wall. Reports of this condition in the lower half of the body are relatively uncommon. All the reported cases treated this lesion with surgical excision, confirmed the diagnosis with histopathology, and reported no recurrence. Additionally, the patients were able to regain full functionality of the affected organ or limb [4][5][6][7][8].
This study presents a unique case of osteochondrolipoma, specifically located in the foot. This localization further adds to the rarity of this condition, as it has not been previously observed in this anatomical region. To the best of our knowledge, this represents the first documented case report of osteochondrolipoma in the foot. By highlighting this novel occurrence, our research expands the current understanding of osteochondrolipoma and its diverse anatomical presentations.
Case presentation
A 51-year-old male patient presented to the orthopedic clinic at King Saud University Medical City on July 11, 2023. He presented with a chief complaint of right foot pain at the plantar aspect, accompanied by the observation of swelling between the first and the second metatarsal shafts. His complaints of pain and swelling started 10 and 4 years prior, respectively. Since their onset, both symptoms have progressed in nature. Prior to his presentation, the patient had no surgical or medical history and was able to walk and undertake regular daily activities with no limitations. Throughout the years, he had been taking non-steroidal anti-inflammatory drugs (NSAIDs), including meloxicam, for pain relief. A year prior to his presentation, the pain started impeding daily activities, and he found no further relief with the use of analgesia. The patient denied prior instances of swelling in other areas of his body. Additionally, he denied any history of trauma, or constitutional symptoms such as fever, fatigue, night sweats, weight loss or loss of appetite. Furthermore, the patient denied any history of smoking.
Physical examination of his right foot revealed obvious swelling between the 1st and 2nd metatarsal shafts, extending to the plantar aspect of the foot and the medial aspect of the first metatarsal shaft (Fig. 1). The swelling was 6 × 3 cm in size, displaying an irregular shape accompanied by erythema. It was nonmobile, firm in consistency, tender, and hot.
The patient demonstrated normal plantar flexion and dorsiflexion of the ankle joint with no pain, as well as pain-free and normal eversion and inversion of the subtalar joint. However, due to pain in the first and second phalanges of the first and second rays, there was limited range of motion in that area. He was neurovascularly intact and exhibited an antalgic gait while mobilizing. Plain radiographs demonstrated soft tissue swelling with a medial surrounding of bone density (Fig. 2).
Further investigation was deemed necessary, prompting the indication of magnetic resonance imaging (MRI). MRI revealed a large mass between the first and third metatarsal shafts, extending to the plantar aspect of the foot. The mass exhibited a nonhomogenous composition of fibrous tissue and bony structures, and surgical intervention was indicated (Fig. 3).
The operation was performed under general anaesthesia, with the patient in the supine position. A 350 mmHg tourniquet was applied, and a medial incision was made (Fig. 4). Following dissection of the fascia, the mass was observed to be large, composed of fibrous and chondral tissues, and reaching between the first and third metatarsal bones (Fig. 5). Total excision was performed, followed by a triple washout with hydrogen peroxide, iodine, and saline. The mass was sent to histopathology for confirmation of the diagnosis. The wound was sutured in layers with Vicryl 1, Vicryl 0 and Vicryl 2.0; the skin was closed with Monocryl 4.0, and a pressure dressing was applied. The patient was in stable condition and had intact vascularity. The excisional biopsy measured 5.5 × 3.5 × 3.0 cm. The lesion was observed to contain mature fatty tissue with areas of fibrous and cartilaginous tissue and was negative for malignancy. The intraoperative radiograph is shown post-excision (Fig. 6).
Postoperatively, the patient had intact neurovascularity and experienced no complications. He was discharged home in stable condition with appropriate analgesia and a course of prophylactic antibiotics. He was advised toe-touch ambulation and further follow-up in the clinic, and wore an ankle brace for two weeks. After the two weeks, the wound demonstrated appropriate healing, and sutures were removed. There had been no signs of infection or wound dehiscence. The patient commenced gradual range-of-motion exercises two weeks postoperatively and had reached full range within one week. He returned to his normal activities four weeks following the excision.
Radiographic imaging at the 8-month follow-up is shown (Fig. 7).
Discussion
Osteochondrolipoma is a benign lesion considered a histological variant of lipoma. Few cases of this lesion have been reported in the literature, affecting the mandible, chest wall, hand, scapula and ischium, indicating a limited number of occurrences [4][5][6][7]. According to Kitazawa et al., the maxillofacial region was reported as the most frequent location, with infrequent occurrence in the lower limb. The average patient age was observed to be 57.4 years, with no clear gender predominance [7]. To date, we believe this represents the 18th case of osteochondrolipoma reported in the literature. According to our review of the literature with the keywords "osteochondrolipoma" and "ossifying chondrolipoma", this is the first case reported to affect the foot.
Although it remains unclear, multiple theories have been suggested for the pathogenesis of osteochondrolipomas. One theory suggests that the different components independently arise from multipotent mesenchymal cells, another suggests a metaplastic process in a previously existing chondrolipoma or lipoma, and some suggest that repetitive trauma causes secondary ossification [7][8][9]. Furthermore, the diagnosis of osteochondrolipomas depends on plain radiographs and computerized tomography. A variety of observations regarding its presentation have been made in the literature. Some case reports described it as firm, mobile, and nonadherent to muscle or bone, while others found it firmly attached. Furthermore, symptom presentation may also differ. In certain instances, and as opposed to scapular lesions, patients with osteochondrolipomas of the hand may present with complaints of pain and numbness [1,6,8]. The patient in our case experienced progressively worsening pain associated with swelling, eventually leading to a significant impact on his daily activities. Myositis ossificans, calcified or ossified tumours, hemangiomas, calcified bursae, and well-differentiated liposarcomas are all differentials of this condition [10].
In contrast to this condition, and on MRI, benign soft tissue tumors show uniform high intensity on T1, with low signal intensity appreciated on T2 weighted images [11].On the other hand, myxoid liposarcomas show a well-defined and multilobular mass within the subcutaneous tissue, with T1-weighted images demonstrating low signal intensity with foci of linear high signal intensity, and heterogeneously high signal intensity on T2-weighted images [12].
In this rare location of osteochondrolipoma, we excised the lesion through the medial approach. We avoided the plantar approach so as not to violate the plantar fascia or transect nerve endings, which may lead to chronic pain with weight bearing. Chronic incisional pain was seen in 7.1% of patients who were treated with the plantar approach, and 5.1% showed a hypertrophied scar. Delayed wound healing was also noticed, and another study reported a 27% incidence of chronic incisional pain with the plantar approach [13].
In this study, we present the first osteochondrolipoma presenting in the foot. Our investigations included X-rays and MRI, and our treatment modality of choice was surgical excision via a medial approach to avoid complications of the plantar approach. The patient showed no chronic incisional pain, hypertrophic scar, or cyst formation.
Conclusion
Osteochondrolipoma is a benign lesion that can affect the foot, leading to decreased functionality due to pain and swelling. Surgical excision is the recommended approach for this lesion, providing both symptomatic relief and confirmation of the diagnosis through histopathological examination.
Fig. 1 A, B, C. Clinical examination of the right foot
Fig. 2 Preoperative radiographs of the right foot showing a 4 × 3 calcified lesion between the first and third metatarsal shafts with no bony involvement. A: Anteroposterior (AP) view. B: Oblique view. C: Lateral view
Fig. 7 Postoperative radiographs of the right foot A. AP view B. Oblique view.C Lateral view | 2024-04-09T13:18:35.559Z | 2024-04-09T00:00:00.000 | {
"year": 2024,
"sha1": "b8589a8b57e544b11b51c5dcd399f1e0c37a6f82",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "db3c6712fb68cb56c11d23d4b7d16d87a541788b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
33522048 | pes2o/s2orc | v3-fos-license | The development of coordination abilities at a stage of the previous basic preparation in sports dances
Purpose: to prove a technique of the improvement of special physical fitness due to the development of coordination abilities at a stage of the previous basic preparation. Material and Methods: theoretical analysis and synthesis of data of scientifically methodical literature and empirical materials of the scientific research, pedagogical methods of the research, tool methods: program of diagnostics of the development of psychophysiological abilities (APC “Sports psychophysiologist”), methods of mathematical statistics. Results: the technique is developed which includes means of the general physical preparation, the specially developed complexes of danced-jumped exercises, the dancing combinations and means which are picked up taking into account the leading motive, functional abilities and special technical characteristics of movements and also has a strictly certain sequence of the realization of means during each cycle of classes. Conclusions: the application of the experimental technique of the improvement of special physical fitness of sportsman in the training process led to the improvement and enhancement of their technical preparedness and the increase of the competitive productivity of their performances.
Introduction. The development of competition in sports ballroom dances demands additional attention to the organization of the educational-training process of sportsmen. As sports dances are performed in couples, and additional loadings are created in the conditions of competition, the question of improving special physical fitness and technical training in sports dances at the stage of previous basic preparation becomes relevant.
The objective of the research: to prove a technique of improvement of special physical fitness due to the development of coordination abilities at a stage of the previous basic preparation.
Material and methods of the research: theoretical analysis and synthesis of data of scientific-methodical literature and empirical materials of the scientific research; pedagogical methods of the research: pedagogical supervision, pedagogical testing, pedagogical experiment; tool methods: program of diagnostics of the development of psychophysiological abilities (hardware-software complex (APC) "Sports psychophysiologist"); methods of mathematical statistics.
Results of the research and their discussion.At the beginning of the research the expert video analysis of performances of couples was carried out at the greatest competitions of the season by the rules of The World dancing organization for the purpose of identification of a couple which owns the highest level of sports skill.It was revealed that couple No. 4 showed the best results during the season (tab.1).The testing of the level of development of coordination abilities was held at participants of the research.As showed the results of the research, the average value of reaction to a light made 293 ms, a standard deviation -15,3, a variation coefficient -5,2%; time of reaction to a sound -409,3 ms, a standard deviation -26,2, a variation coefficient -6,4%; time of reaction to a moving object -92,6 ms, a standard deviation -12,5, a variation coefficient -13,5%; an assessment of speed of movement-10,3%, a standard deviation -1,2%, a variation coefficient -11,6%; an assessment of size of the offered pieces -10,3%, a standard deviation -1,2%, a variation coefficient -11,6%; measuring of pieces -13,5%, a standard deviation -1,5%, a variation coefficient -11,1%; recognition of the offered angles -2,5%, a standard deviation of 0,3%, a coefficient of a variation of 12,5%.The coefficient of a variation fluctuates from 5,2% to 13,5% among all indicators.The variety of indicators of the development of coordination abilities in all six couples was low, and it allowed making the assumption that, in principle, the group is similar and the level of the development of coordination abilities is low at everybody.The technique of physical fitness due to the development of coordination abilities.Couples were distributed on two groups at the following stage of researches -control and experimental, on three couples.Additional trainings for the development of coordination abilities which included at themselves a set of exercises were introduced in the training process of the experimental group.The development means of coordination abilities are presented in the Addition A.
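For clarity, a minimal sketch of the descriptive statistics used throughout this section (mean, standard deviation and coefficient of variation) is given below; the reaction-time values are invented placeholders rather than the measured data of the six couples.

```python
# Minimal sketch of mean, standard deviation and coefficient of variation (CV).
# The sample values are hypothetical, used only to illustrate the arithmetic.
import statistics

def describe(values):
    mean = statistics.mean(values)
    sd = statistics.stdev(values)          # sample standard deviation
    cv = sd / mean * 100                   # coefficient of variation, %
    return mean, sd, cv

light_reaction_ms = [285, 301, 278, 296, 310, 288]   # hypothetical sample
mean, sd, cv = describe(light_reaction_ms)
print(f"mean = {mean:.1f} ms, SD = {sd:.1f}, CV = {cv:.1f}%")
```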
The volume of training tasks for the development of coordination abilities was the following: the preparatory period made 40% of the general time of special physical preparation; the competitive period made 30% of the general time of special physical preparation; the transition period made 30% of the general time of special physical preparation.
Trainings in CG were given according to the program of preparation in sports dances which is recommended by the federation of dance sport of Ukraine; in the experimental group (EG) -according to the developed program of formation of special physical capacities of sportsmen due to the corresponding coordination abilities.During the research positive dynamics of sports results of sportsmen was noted that testifies to the prospect of the growth of their skill.Researches were conducted in dynamics with intermediate cuts of indicators of the general and special physical fitness of sportsmen.Groups were completed for the research taking into account the age of sportsmen and their qualification.
The program of preparation was aimed at the development of physical capacities of an organism of sportsmen.Experimental and control classes were given at 11 o'clock in the morning.The main (dancing) trainings began at 16 o'clock.Training took place three times for a week for 1,5 hours in all groups.Such lesson schedule answers the data of questionnaire of specialists in sports dances and is usual for sportsmen.In the course of each training sportsmen found time for studying and improvement of technique of special physical exercises during 30-50 minutes.
The repeated research of the development of coordination abilities was conducted in the control and experimental groups after carrying out the pedagogical experiment.The data on indicators are given in tab. 4 and 5.By the results of the research in the control group the average value of reaction to a light made 285 ms, a standard deviation -13,2, a variation coefficient -5,2%, time of reaction to a sound -365,2, a standard deviation -22,1, a variation coefficient -6%, time of reaction to a moving object -85,3 ms, a standard deviation -10,5, a variation coefficient -12,3%, time of reaction of a choice -322,5 ms, a standard deviation -15,3, a variation coefficient -4,7%, an assessment of speed of movement of an object -12,4%; a standard deviation -1,3, a variation coefficient -10,5%, an assessment of size of the offered pieces -9,2%, a standard deviation -1, a variation coefficient -10,8%, measuring of pieces -11,5%, a standard deviation -1,2, a variation coefficient -10,4%, recognition of the offered angles -2%, a standard deviation 0,2, a coefficient of a variation of 10%.The coefficient of a variation fluctuates from 4% to 12,3% among all indicators.The minor change of indicators of sportsmen is observed.By the results of the research in the experimental group the average value of reaction to a light made 252 ms, a standard deviation -10,1, a variation coefficient -4%, time of reaction to a sound -332, a standard deviation -15,2, a variation coefficient -4,5%, time of reaction to a moving object -65,2 ms, a standard deviation -5,3, a variation coefficient -8,1%, time of reaction of a choice -275,5 ms, a standard deviation -10,3, a variation coefficient -3,7%, an assessment of speed of movement of an object -8,5%, a standard deviation -0,7%, a variation coefficient -8,2%, an assessment of size of the offered pieces -6,3%, a standard deviation -0,5%, a variation coefficient -7,9%, measuring of pieces -7,2%, a standard deviation -0,6%, a variation coefficient -8,3%, recognition of the offered angles -1%, a standard deviation of 0,05%, a coefficient of a variation of 5%.The coefficient of a variation fluctuates from 3,7% to 8,3% among all indicators.The considerable changes of indicators of coordination abilities of sportsmen are observed.CV is much lower in this group, than in the control group.
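A hedged sketch of how the control-versus-experimental contrast for a single indicator could be tested is shown below; the individual values are invented for illustration (only the group means echo the text), and the original article does not state which test was applied to these particular data.

```python
# Illustrative group comparison of one post-experiment indicator (reaction time to light).
# Both samples are hypothetical placeholders built around the reported group means.
from scipy.stats import mannwhitneyu

control_group      = [285, 292, 279, 288, 290, 281]   # hypothetical CG values, ms
experimental_group = [252, 248, 259, 250, 255, 247]   # hypothetical EG values, ms

u, p = mannwhitneyu(experimental_group, control_group, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p:.4f}")
```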
As it is noted above, the investigated considerable changes of indicators of the development of coordination abilities are observed in the experimental group in comparison with the control group.It gives the chance to allow about action of the pedagogical experiment.These comparisons of the control and experimental groups are presented in tab.6.
Conclusions. The importance of the level of development of coordination abilities for the improvement of special physical fitness at the stage of previous basic preparation in sports dances was established as a result of the conducted research. Considerable changes in the indicators of coordination abilities of sportsmen were observed, which supports the effect of the pedagogical experiment.
The results of the testing of sportsmen of the experimental group showed statistically reliable changes under the influence of the program of special trainings. Positive dynamics of the indicators that reflect the level of development of coordination abilities of sportsmen were found.
The positive dynamics of sports results of sportsmen that testifies to the prospect of the growth of their skill was noted during the research.Introduction of the program of improvement of special physical fitness in practice of training of sportsmen of sports dances proved its efficiency experimentally.It is revealed on the basis of the conducted research: special physical fitness is the most significant at a stage of the previous basic training of sportsmen in sports dances; special physical exercises need to be applied constantly during trainings; improvement of special physical fitness of sportsmen at a stage of the previous basic preparation needs to be carried out due to the development of coordination abilities -special exercises are applied for this purpose.This complex of physical exercises and tests is represented effective, and can be used for the correction of the training process, as the current control of a condition of physical fitness, for the selection in a national team, for the transition from one stage of a study to another.Exercises are carried out twice.
The systematization of the content of special physical preparation in sports dances at all stages of preparation will be addressed in subsequent research.
Exercise 12. Run in a "snake" pattern.
Exercise 13. Put the feet together and extend the arms to the sides. Rise on tiptoe and hold this position for 10-15 seconds. Close the eyes and hold the position for 10-15 more seconds. Then lower the arms along the body and hold for 5-10 more seconds.
Exercise 14. Put the feet together, hands on the belt. Rise on tiptoe and make 4-6 fast forward inclinations to a horizontal position of the body. Repeat the exercise with eyes closed.
Addition A. A SET OF EXERCISES WHICH WERE USED
Exercise 15.To raise a knee of the left foot on belt level, having remained to stand on the right foot.To extend hands in the parties.To stand in this provision of 10-15 seconds.To close eyes.To stand in this provision of 10-15 more seconds.Then to repeat the same exercise by other foot.
Exercise 16.To put feet on one line.The right foot in front of the left foot so that a heel of the right foot touched fingers of the left foot.To put hands on a belt.To make on 6-8 inclinations to the left and to the right, keeping balance.To repeat exercise blindly.
Table 5 Indicators of coordination abilities in the experimental group after the experiment
Exercise 1. Standing on one foot, having bent another and having raised it higher, to keep balance within one minute.To repeat the same on other foot.Exercise 2. To execute the same exercise blindly within 30 seconds.Exercise 3. Standing in a rank.A pass of a ball from hand to hand (to the right, to the left) in front of a breast, behind a back.Exercise 4. S.P. -Stand in attack on the right foot, the left hand for the head, the right hand behind a back.
Exercise 8. Jumps on a place on right foot, left foot, on two feet with advance forward, back.Exercise 9. Jumps on a place on right foot, left foot, on two feet with turns on 90°, 180 o and 360 o .Exercise 10.Shuttle run with jumps and turns.Exercise 11.Balance on the right and left feet. | 2017-09-07T09:53:53.517Z | 2016-12-30T00:00:00.000 | {
"year": 2015,
"sha1": "7f4a61a24ac45b4cf6e780d015aeab7af727d0f6",
"oa_license": "CCBY",
"oa_url": "http://journals.uran.ua/index.php/1991-0177/article/download/56736/52953",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7f4a61a24ac45b4cf6e780d015aeab7af727d0f6",
"s2fieldsofstudy": [
"Education",
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
230618658 | pes2o/s2orc | v3-fos-license | Evaluation of Genetic Diversity and Identification of Huperzia Species Collected in Some Different Areas in Vietnam by Molecular Markers
The objective of this study was to evaluate the genetic diversity of 35 Huperzia samples collected from different geographical areas in Vietnam by using rbcL markers. The results revealed high genetic similarity among the samples, ranging from 98.0 to 99.0%. Based on comparison with GenBank data, it was ascertained that only one sample (BK3) is a member of the H. phlegmarioides group, while the others belonged to the H. squarrosa group, as divided into two main groups by phylogenetic tree analyses. This suggests that some Huperzia samples may descend from the same origin and share evolutionary relationships. This study provides useful information for further identifying high-quality genetic plant sources for the propagation, development and conservation of the high-valued medicinal materials of Huperzia plants in this country.
Introduction
Huperzia species such as H. squarrosa (HS), or Lycopodium squarrosum (LS), belong to the family Lycopodiaceae. The genus is widely distributed in many countries, occurring in tropical and temperate regions, with over 415 species globally [1]. They are adapted to mountainous regions at heights from 1000 to 3000 m above sea level [2]. In Vietnam, these plants are noted for high tolerance to heat and drought and grow in thin forests. They are also obtainable in some places from the North to the Centre of this country [3] and are easily found in moist forests and rainforests at high altitude, in and amongst mosses and other angiosperm epiphytes [4]. The major morphological characteristics of H. squarrosa are shown in Figure 1. The stem is pendulous and up to 60 cm long, branching regularly dichotomous and isotomous; the leaves are microphyllous, spirally arranged, not imbricated, sharply acute, entire, rigid, coriaceous and shining, with simple veins with no branches or unions extended to the middle portion (Figure 1 A, B). A: Habit; B: upward turned stem [4]. H. squarrosa is known as a highly valuable medicinal plant, which contains alkaloids, flavones, triterpenes and phenolic acids, especially including Huperzine A (Hup A), Huperzine B (Hup B), N-methyl-huperzine B, Huperzinine, Lycoporine A, Carinatumine A, etc. [5]. To the best of our knowledge, over 300 Lycopodium alkaloids have been reported [6]. According to the report of Ngoc et al. [7], the alkaloids isolated from H. squarrosa collected from Lam Dong province, Vietnam, include 6 main compounds: lycosquarosine A, acetylaposerratinine, huperzine A, huperzine B, 8α-hydrophlemariurine B and huperzinine, which may account for the effectiveness of this plant in the treatment of several brain diseases [8].
This plant is traditionally used to treat many ailments like cold, fever, bruises, pain, strains, contusion, stasis swelling, rheumatism etc. The numerous studies on pharmacological effects have been done such as anticonvulsant, anti-inflammatory, antioxidant, anti-aoptosis, and organophosphate poisoning myasthenia gravis [9]. Interestingly, this plant has enhanced the neurotransmitters, prevent plaque formation, tangles in the brain, feed and nourish brain cells, which positive response to Alzheimer's, Parkinson's, brain atrophy and dementia and diseases related to brain nerve damage, preventing memory decline or brain atrophy [10][11][12][13].
The Internal Transcribed Spacer (ITS) is one of the most popular tools and widely used DNA fragments in plant molecular systematics at the genetic and molecular species levels due to its potentially high resolution of inter-and intraspecific relationship [14]. The ITS region is argued to be useful for low-level phylogenetic analysis because of its relatively fast rate of evolution [15]. Moreover, ITS is considered as one of the most feasible molecular markers for angiosperm phylogenetic inference and genetics relatedness which is a proven useful source of characteristics for phylogenetic studies in the plant kingdom [16]. In the current application, DNA sequences from a standardized gene region are possibly generated from a small tissue sample taken from the known species [17]. Typically, ITS sequences have provided the molecular evidence to evaluate the phylogeny of several taxonomic groups from the genus and subfamily of the plants.
In the world, Huperzia is listed in a red book as an endangered plant [18]. In Vietnam, this is one of the rarest species carrying precious genetic resources, which are being degraded and also included in the red book that needs to be protected, developed and appropriately exploited. However, in Vietnam, customarily, the botanists have identified and classified the relatedness of the valuable species based on the morphological and cytological characteristics. The disadvantages of these methods are restricted due to the environmental effects and diagnostic resolution. Hence, the objective of this study was to assess the genetic diversity and identification of 35 samples collected from different areas by using rbcL markers. In this study, the sequences of the chloroplast gene ribulose-1-5-bisphosphate carboxylase/oxygenase (rbcL) were to use for genetic variation and identification of the Huperzia samples.
Materials collection
A total of 35 samples of Huperzia species were collected from different provinces in Vietnam. The detailed information on the samples is shown in Table 1.
Total DNA extraction and amplification of rbcL
The fresh samples were collected and immediately transferred to the laboratory for DNA extraction. In this study, DNA extraction was performed following the CTAB method of Doyle and Doyle [19] with some minor modifications. The yielded DNA products were then checked for quantity and quality with a spectrophotometer [20]. The forward primer rbcL-F was 5'-ATGTCACCACAAACAGAGACTAAAGC-3' and the reverse primer rbcL-R was 5'-CTTCTGCTACAAATAAGAATCGATCTC-3'.
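As a quick, purely illustrative sanity check of the two primers quoted above, their GC content and a rough Wallace-rule melting temperature can be computed as follows; this calculation is not part of the authors' protocol.

```python
# Illustrative primer check: length, GC content and Wallace-rule Tm (2*(A+T) + 4*(G+C)).
def primer_stats(seq):
    seq = seq.upper()
    gc = sum(seq.count(b) for b in "GC")
    at = sum(seq.count(b) for b in "AT")
    gc_percent = 100 * gc / len(seq)
    tm_wallace = 2 * at + 4 * gc          # rough rule-of-thumb melting temperature, in Celsius
    return len(seq), gc_percent, tm_wallace

for name, seq in [("rbcL-F", "ATGTCACCACAAACAGAGACTAAAGC"),
                  ("rbcL-R", "CTTCTGCTACAAATAAGAATCGATCTC")]:
    n, gc, tm = primer_stats(seq)
    print(f"{name}: {n} nt, GC = {gc:.1f}%, Tm(Wallace) ~ {tm} C")
```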
ITS sequencing and phylogenetic analysis
Sequencing was carried out at Macrogen company (Korea) by applying ABI PRISM 3700 DNA Analyzer (Applied Biotech). The data were used to analyze by MEGA 5 software to generate International Letters of Natural Sciences Vol. 80 a phylogenetic tree with neighbour-joining (NJ) methods. BLAST tool was used to compared the obtained sequences to National Center for Biotechnology Information databases (NCBI).
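A minimal sketch of the distance-based tree-building workflow described above is given here for orientation; the study itself used MEGA 5, whereas this example assumes Biopython and a hypothetical aligned FASTA file named rbcL_aligned.fasta.

```python
# Hedged sketch of a neighbour-joining tree from an rbcL alignment using Biopython.
# The input file name is a placeholder; the original analysis was done in MEGA 5.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("rbcL_aligned.fasta", "fasta")      # hypothetical alignment
distances = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distances)               # neighbour-joining tree
Phylo.draw_ascii(tree)
```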
Molecular markers application to identify Huperzia samples based on ITS region sequences
Recently, advances in molecular systematics of plants have been significant. The chloroplast gene ribulose-1,5-bisphosphate carboxylase/oxygenase large subunit (rbcL), located on the chloroplast genome, is appropriate for the inference of phylogenetic relationships at higher taxonomic levels [21]. Moreover, it is conspicuous that the molecular approach is powerful and accurate for phylogenetic analysis of plants that are arduous to decipher by phenotyping. Hence, in this study, a total of 35 samples were collected and analysed with rbcL markers. The DNAs of the 35 Huperzia samples had sufficiently high concentration and quality for further experiments. All bands were clean and of sufficient quality for further experiments. PCR was performed with the rbcL primers. The results showed that all 35 Huperzia samples yielded a single band with a size of 650 bp (Figure 1), resembling the report of Sundary and Papuangan [22], who applied the rbcL gene for the analysis of Syzygium aromaticum.
The data shown in Table 3 disclosed that the sequence lengths of the samples were consistent with the results of the PCR band size as presented in Figure 3. In comparing to the database from the NCBI Genbank, the results of 35 Huperzia samples were highly similar to the corresponding sequences. Additionally, they were consistent with the results when comparing nucleotide sequences to find the corresponding species via Blast (http://blast.ncbi.nlm.nih.gov/Blast.cgi).
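The kind of GenBank comparison mentioned above can also be scripted, for example with Biopython's web BLAST client, as sketched below; the query file name is a placeholder and the remote call is illustrative rather than the pipeline actually used in this study.

```python
# Hedged sketch of a nucleotide BLAST search against the NCBI "nt" database.
# The FASTA file name is a hypothetical placeholder; NCBI rate-limits such calls.
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

record = next(SeqIO.parse("BK3_rbcL.fasta", "fasta"))        # hypothetical query sequence
handle = NCBIWWW.qblast("blastn", "nt", record.seq)
top_hit = NCBIXML.read(handle).alignments[0]
print("best match:", top_hit.title)
```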
Among them, there was only one sample (BK3) belonging to H.phlegmarioides group, and the others of the 34 samples were clustered in H.squarrosa group, respectively (Figure 3 and Figure 4). In the fact that the available databases on the rbcL gene are very useful and facilitated to identify phylogenetic relationships of many plant species at interfamilial levels [25,22]. Dubisson [26] successfully applied rbcL sequences of 18 species of the fern genus Trichomanes for assessing their systematic relationships which showed a more accurate reflection of the generic concept applied to other fern and angiosperms.
Genetic diversity and genetic relationships among the 35 Huperzia samples
The data in Table 3 showed that the sequences of the 35 Huperzia samples had high similarity. The phylogenetic tree of the 35 Huperzia samples was generated using MEGA 6.0 software with the maximum likelihood method (Figure 4). The results of the phylogenetic relationship analysis revealed that the tree divided the 35 Huperzia samples into two main clusters. The first cluster included the sequences of 34 Huperzia samples and the reference species H. squarrosa (DQ464235.1 on GenBank). The samples in this group had very high genetic similarity to the reference species, ranging from 95.47% (for sample LCH3) to 96.25% (for samples YB2, TQ1, TQ3, PT2, PT3, CB2, QB1 and LS1).
The second cluster comprised the sequence of the Huperzia sample BK3 (collected from Cho Don, Bac Kan), which grouped with H. phlegmarioides (AJ133896.1 on GenBank) and H. carinata (DQ464229.1 on GenBank) in the same branch. The genetic similarity coefficient of this sample was 95.62% compared with the reference species H. phlegmarioides and 94.81% compared with the reference species H. carinata.
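For illustration only, a pairwise similarity percentage such as the values reported above can be computed from two aligned sequences of equal length as follows; the two short sequences are invented placeholders, not the actual sample and reference sequences.

```python
# Simple sketch of percent identity between two aligned sequences of equal length.
def percent_identity(aligned_a, aligned_b):
    assert len(aligned_a) == len(aligned_b), "sequences must be aligned to the same length"
    matches = sum(a == b for a, b in zip(aligned_a, aligned_b))
    return 100 * matches / len(aligned_a)

sample_seq    = "ATGTCACCACAAACAGAGACTAAAGC"   # hypothetical aligned fragment
reference_seq = "ATGTCACCACAAACAGAAACTAAAGC"   # hypothetical reference fragment
print(f"identity: {percent_identity(sample_seq, reference_seq):.2f}%")
```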
To the best of our knowledge, over 251 plant species have been successfully identified, with up to 100% success, by the use of two primer types [27,22], and a greater sequence length (around 1,428 to 2,500 bp) has been obtained by applying full-length rbcL [28]. Additionally, Kesanakuthi et al. [29] used the rbcL sequence alone to identify 85% of plant species groups among all root samples, spelling out the ecological factors that contributed to the subterranean spatial organization of plant diversity in nature. Using GenBank datasets, some works used the BLAST algorithm to examine the ability of rbcL, as well as various combinations with the nuclear ribosomal ITS and 5.8S regions, to make accurate identifications of land plant species [30][31]. Moreover, among nuclear markers, ITS2 has been a highly effective DNA barcoding marker, with over 92% successful identification in 6600 plant samples [32]. Also, some recent reports have ascertained that using rbcL + matK as the standard barcode for floras of moderate phylogenetic dispersion allowed up to 92% of species in the plant kingdom to be differentiated [33][34].
In this study, based on the region sequences, a total of 35 Huperzia samples were successfully analysed for species-level identification by using rbcL primers and comparison with the NCBI GenBank databases. However, the lack of species-level variation may hinder species-level identification. Therefore, rbcL should be combined with ITS, matK and trnH-psbA for accurate species-level identification of the plants. Moreover, several different DNA regions from both the plastid genome, such as rbcL, rpoB, rpoC1, matK, ycf1 and trnH-psbA, and the nuclear genome (ITS) should be further applied for accurate identification of various species.
Conclusions
All the Huperzia samples collected from different geographical provinces in Vietnam had similar ITS sequences compared with the corresponding NCBI library references. Through the application of the chloroplast region sequence with the RbcL-F/RbcL-R primers, a total of 34 samples were identified as belonging to the H. squarrosa group (DQ464235.1 on GenBank), with similarities ranging from 95.47% to 96.25%. The Huperzia sample BK3, collected from Cho Don - Bac Kan, was grouped with H. phlegmarioides, with a similarity coefficient of 95.62%. | 2020-12-10T09:06:47.618Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "d507a71e2c9619582c9492a7210a90a88691ee22",
"oa_license": "CCBY",
"oa_url": "https://www.scipress.com/ILNS.80.13.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e6bfe2df69a29ab6ac3b84f570f1c425c4911a50",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
6184637 | pes2o/s2orc | v3-fos-license | The influence of anti-TNF therapy on the magnetic resonance enterographic parameters of Crohn’s disease activity
Purpose Magnetic resonance enterography (MRE) is a useful tool in assessing the transmural and extraintestinal lesions in Crohn’s disease (CD). However, the influence of anti-tumor necrosis factor (anti-TNF) therapy on MRE features of CD severity remains unknown. The purpose of the study was to assess the short- and long-term changes in MRE features of CD activity in relation to CD clinical course in patients treated with anti-TNF antibodies. Methods The influence on the most important parameters of CD activity seen in MRE was assessed retrospectively using a validated score. Patients were treated with anti-TNF agents and the clinical, laboratory, and MRE CD activity was estimated at baseline, after the induction therapy and after 1 year of treatment. Results 71 patients were enrolled in a study. The change in CD clinical activity correlated significantly with fluctuations in MRE activity score (P < 0.0001, r = 0.5 for induction; P = 0.004, r = 0.7 for maintenance anti-TNF therapy, respectively). Bowel wall thickening, mesenteric lymphadenopathy, and fat wrapping with vascular proliferation were MRE parameters which changed significantly both after the induction and maintenance treatment in patients responding to the therapy. The change in MRE activity score was mostly pronounced during the first 3 months of treatment, when compared to the continuation of the therapy till week 52–54 (−6 points vs. −2 points, respectively; P = 0.0008). Conclusions Transmural and extraintestinal healing seen in MRE correlates with changes in CD clinical activity during anti-TNF therapy, thus MRE seems to be a useful tool in monitoring the efficacy of biological agents.
extraintestinal CD manifestations in MRE, there are still few data concerning the direct influence of anti-TNF therapy on radiological parameters of CD activity. That is why, due to an increasing role of MRE in diagnostics of CD, there is an urgent need to determine which morphological changes in the inflamed intestines result from anti-TNF therapy. This could help in a better understanding of the mechanisms of the action of biological agents, and it would provide important data on the definition of transmural and extraintestinal healing, as there are still many difficulties in defining this phenomenon. Thus, the main aim of the current study is to assess which radiological features of CD severity in MRE are mostly influenced by anti-TNF antibodies in relation to the efficacy of biological therapy.
Patients and methods
We retrospectively analysed data from patients with diagnosed CD, who were treated with anti-TNF agents in years 2009-2014 in our department. All patients were treated according to the current therapeutic guidelines with either adalimumab (ADA) or infliximab (IFX) [5]. ADA was administered in the following doses: 160 mg at week 0, 80 mg at week 2, 40 mg every other week till week 12 (7 induction doses), and then in patients responding to induction therapy-40 mg every other week till week 52. IFX was administered in the doses of 5 mg/kg body weight at week 0, 2, 6 (3 induction doses), and then in patients responding to induction therapy-every 8 weeks till week 52. Response to the induction therapy was defined clinically as a decrease in Crohn's Disease Activity Index (CDAI) by 100 points or more. Clinical remission was defined as CDAI < 150 points [6].
The inclusion criteria were as follows: 1. Qualification for biological therapy because of CD flare (as assessed by CDAI) not responding to standard medical therapy. Standard medical therapy was defined as therapeutic and stable doses of steroids (1 mg of prednisolone/kg body weight) for at least 3 weeks, and/ or azathioprine (AZA-2-2.5 mg/kg body weight), and/or mesalamine (4 g daily) for at least 12 weeks. 2. Ileal or ileocolonic CD location. 3. Performance of MRE twice: at baseline-up to 3 weeks before starting anti-TNF therapy, and after induction doses of biological agents: at week 12-14 in the case of ADA and at week 9-12 in the case of IFX.
Only MRE investigations performed with the same protocol were taken into account. When possible, also MRE studies performed after 1 year of anti-TNF therapy were analysed.
Exclusion criteria: 1. Isolated colonic CD. 2. Significant changes in therapeutic regimens of anti-TNF therapy defined above or in the concomitant treatment, like for example, the introduction of a new drug, a change in the dose of immunosuppressant.
Only tapering of steroids after finishing the induction phase of anti-TNF therapy or discontinuation of antibiotics were allowed in patients responding to the biological treatment. 3. Previous anti-TNF therapy <12 weeks before enrolment to the current analysis.
Clinical activity was assessed by CDAI. The severity of endoscopic lesions was estimated by calculating Simple Endoscopic Score for Crohn's Disease (SES-CD) [7]. Both SES-CD and CDAI were calculated prospectively. Biochemical activity was assessed in parallel.
MRE studies were performed with the same protocol as described previously [8]. Patients fasted for 6 h before MRE. 30-40 min before scanning, patients were administered 1500 ml of oral polyethylene glycol. Fifteen minutes before the procedure, 40 mg of buscolysin was injected intravenously in order to reduce bowel motility. The study protocol consisted of the following sequences: 1. true fast imaging with steady-state free precession sequences in the coronal and transverse planes, 2. single-shot turbo spin-echo sequences with fat suppression in the coronal plane, 3. cine loop coronal images for the visualization of the bowel movements and stenosis, 4. fat-suppressed 3D T1-weighted Volumetric Interpolated Breath-Hold Examination and T1-weighted 3D Fast Low Angle Shot technique before and three times-30, 90 s, and 5 min-after intravenous injection of gadolinium contrast (dose of 0.1 mmol/kg body weight followed by 20 ml of saline).
All MRE investigations were assessed by a radiologist with more than 12 years of experience in this cross-sectional imaging technique. All studies were blinded for the radiologist and then randomly analysed and quantified using the grading score called the Simple Enterographic Activity Score for Crohn's Disease (SEAS-CD), whose diagnostic utility was previously proved in an independent cohort of CD patients by Eder et al. [8]. This score consists of the most important features reflecting CD activity, which were quantified depending on their intensity and they formed the final SEAS-CD result ( Table 1).
As the first step, we determined the influence of induction doses of anti-TNF agents on CD activity assessed in MRE. We compared changes in the most important radiological features of CD activity in patients who responded to the therapy (defined as a decrease in CDAI by 100 points-the responders group) with patients who were primary non-responders. Then the influence of long-term anti-TNF therapy was also analysed by comparing MRE results after induction doses of biological agents with MRE CD activity scores after 1 year (at week 52-54) of treatment.
Statistical analysis
Data are presented as means with standard deviations (SD). Correlation analyses between selected parameters were performed with the use of Spearman's rank correlation coefficient. Statistical differences were calculated using Student t test (parametric) when conditions of normality and equal variance were met. When the normality test failed, the Wilcoxon test or Mann-Whitney test was used for paired or unpaired groups, respectively. A P value < 0.05 was considered significant. All data were analysed using the GraphPad Prism Version 6.0 (GraphPad Software Inc., USA).
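As an illustration of the kind of tests listed above, a minimal Python sketch is given below; the numerical values and variable layout are invented placeholders, and the actual analysis in the study was performed in GraphPad Prism, not with this code.

```python
# Sketch of the correlation and group-comparison tests described in the text (hypothetical data).
import numpy as np
from scipy import stats

# Hypothetical paired changes: clinical activity (CDAI) vs. MRE activity (SEAS-CD).
cdai_change = np.array([-120, -95, -140, -60, -180, -30])
seas_cd_change = np.array([-6, -4, -7, -2, -9, -1])

# Spearman's rank correlation between changes in clinical and MRE activity.
rho, p_corr = stats.spearmanr(cdai_change, seas_cd_change)

# Paired before/after comparison: t test if normality holds, otherwise Wilcoxon.
before = np.array([15, 14, 16, 13, 18, 12])
after = before + seas_cd_change
if stats.shapiro(before - after).pvalue > 0.05:
    _, p_paired = stats.ttest_rel(before, after)
else:
    _, p_paired = stats.wilcoxon(before, after)

# Unpaired comparison of responders vs. non-responders: Mann-Whitney test.
responders = np.array([8, 7, 9, 6])
non_responders = np.array([14, 13, 15])
_, p_unpaired = stats.mannwhitneyu(responders, non_responders, alternative="two-sided")
print(rho, p_corr, p_paired, p_unpaired)
```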
Results
71 CD patients, in whom MRE imaging was performed before and after induction anti-TNF therapy, were enrolled into the study. Baseline characteristics of the whole study group are presented in Table 2.
Anti-TNF induction therapy
53 patients (75%) were primary responders, whereas 18 (25%) did not respond to the induction doses of anti-TNF antibodies. The change in CDAI scores in the whole study group (n = 71) correlated significantly with fluctuations in SEAS-CD scores during induction anti-TNF therapy (Fig. 1).
Almost all parameters of MRE CD activity decreased significantly after induction anti-TNF therapy in the responders group (Fig. 3A). [Table 1 footnote: Scoring of each parameter is presented in bold. a Fat wrapping and proliferation of mesenteric vasculature scores were summed up, as those phenomena are strictly interrelated and they are assessed together.] Figure 4 shows examples of the influence of induction anti-TNF therapy on selected features of CD inflammatory activity seen in MRE among primary responders.
In the non-responders group CDAI did not change significantly after anti-TNF induction therapy: 275 ± 71 vs. 212 ± 77 points. Mean SEAS-CD values only slightly decreased in the course of biological therapy-15 ± 5 vs. 14 ± 5 points (Fig. 2B); however, taking into account the different distribution of variables before and after treatment, it reached the statistical significance (P = 0.02).
There was a significant decrease only in fat wrapping and vascular proliferation after anti-TNF induction therapy in patients who did not respond to the treatment. Other parameters did not change or decreased without achieving statistical significance (Fig. 3B).
Comparison of SEAS-CD between responders and non-responders group
Baseline SEAS-CD scores were not statistically different when compared the responders and non-responders group (14 ± 5 vs. 15 ± 5 points, respectively; P = 0.09). All assessed parameters of MRE CD activity were comparable between the two aforementioned groups.
After finishing the induction phase of anti-TNF therapy, SEAS-CD among non-responders was significantly higher when compared with the responders group (14 ± 5 vs. 8 ± 4 points, respectively; P < 0.0001). Almost all parameters of MRE CD activity were significantly higher among non-responders after induction doses of anti-TNF antibodies (Fig. 5).
One-year anti-TNF therapy
In 17 patients, MRE investigation was performed three times: at week 0, after anti-TNF induction therapy, and after 1 year of treatment. When CDAI and SEAS-CD scores after biological induction therapy and after 1 year of treatment were compared, changes in those parameters correlated significantly (P = 0.004; r = 0.7). In this subgroup of patients, 13 were in clinical remission (CDAI < 150) at week 52-54 of biological therapy, and 4 patients were secondary non-responders (CDAI ≥ 150).
In patients with a long-term remission, there was a further, statistically significant improvement in CD MRE activity observed: SEAS-CD decreased from 9 ± 4 points after induction therapy to 5 ± 3 points at week 52-54 (P = 0.02). The decrease in SEAS-CD score was mainly due to further reduction in bowel wall thickening (P = 0.01), mesenteric lymphadenopathy (P = 0.03), and fat wrapping with vascular proliferation (P = 0.05, borderline significance). Other assessed MRE features did not change or improved without statistical significance. In the subgroup of secondary non-responders, SEAS-CD increased from 6 ± 3 points to 10 ± 3 points; however, without statistical significance. All MRE features of CD activity assessed by SEAS-CD score increased after 1-year anti-TNF therapy without achieving statistical significance.
Discussion
Objective assessment of the influence of anti-TNF therapy on the inflammatory activity of CD seems to be crucial for proper monitoring of treatment outcomes [9]. There are only a few studies assessing this ability [10]. Van Assche and colleagues performed a multicentre prospective trial in which magnetic resonance enteroclysis was performed at baseline of anti-TNF treatment, and then at week 2 and 6 months after starting infliximab [11]. Finally, 15 patients were assessed at weeks 2 and 26. The analysis revealed that a decrease in the activity of inflammatory components occurred from 2 weeks after the beginning of the therapy. At week 26, the magnetic resonance enteroclysis index improved in 80% of patients; however, a complete absence of CD activity was noted in only 13% of them.
In the most recent study, performed by Ordas et al., the accuracy of MRE in determining the clinical response and mucosal healing in CD patients treated with anti-TNF agents was assessed for the first time [12]. This was a prospective multicentre study in which 48 patients were included; they underwent ileocolonoscopy (as a reference standard) and MRE at baseline and 12 weeks after starting corticosteroids or anti-TNF therapy. The analysis revealed that the change in MRE activity, estimated with the use of the Magnetic Resonance Index of Activity (MaRIA), determined the mucosal healing effect with 90% accuracy and endoscopic remission with 83% accuracy [13].
To the best of our knowledge, our study is the second one in which the utility of MRE in reflecting the influence of anti-TNF therapy on CD activity was assessed, and it includes the largest cohort of CD patients so far. Our analysis revealed that MRE CD activity changes in parallel with fluctuations in CD clinical activity during both the induction (short-term) and 1-year (long-term) anti-TNF therapy. Almost all parameters assessed routinely in MRE improve in patients clinically responding to the therapy, in parallel with the majority of the laboratory results. Only intraabdominal fistulas were still present after treatment, which is to be expected, as this structural complication of CD is rather an indication for surgery [14]. However, in rare cases other therapeutic options are chosen, as not all patients accept surgery. The second and more frequent scenario is that patients underwent intestinal resection in the past and surgery has to be avoided in order to prevent irreversible complications after multiple intestinal resections, such as the short bowel syndrome. That is why a minor proportion of CD patients with intraabdominal fistulas is treated non-surgically, and anti-TNF therapy seems to be the most effective choice in these cases [14].
Although statistical analysis showed that SEAS-CD decreased also in the non-responders group, this change was significantly less pronounced when compared with patients with a CDAI-100 response. The decrease in SEAS-CD in patients not responding to the therapy was a consequence of a slight improvement in a minority of SEAS-CD variables; only fat wrapping with vascular proliferation decreased significantly. However, comparison of SEAS-CD scores after the induction anti-TNF therapy shows a significant and very pronounced difference between the responders and non-responders groups. Moreover, the majority of MRE variables differ significantly in these two groups of patients after the induction therapeutic period (Fig. 5), which shows that SEAS-CD reliably reflects the course of CD during biological therapy.
Nevertheless, even when considering only the non-responders group, anti-TNF treatment leads to transmural and extraintestinal healing to some extent, and there is also some improvement in single laboratory tests. However, these phenomena are insufficient to be reflected by a regression of CD clinical activity. This is a very interesting phenomenon which was confirmed even at the molecular level. In the study by Leal et al. it was shown that also in patients not responding to anti-TNF therapy, a large number of inflammatory parameters involved in CD pathogenesis significantly decreased, although this was not reflected by an improvement in the clinical status of the patients [15]. These data and our results suggest that CD symptomatology assessed by CDAI is influenced by multiple pathogenic factors, and only a complex, multifactorial improvement provides a clinical response.
Analysis of the long-term influence of anti-TNF therapy also shows that changes in MRE CD activity parameters correlate with the clinical efficacy of biological agents. Comparative analysis shows that in the case of patients achieving a long-term response to the therapy, the degree of SEAS-CD reduction is significantly more pronounced during the first 3 months of treatment (induction therapy) than during maintenance anti-TNF treatment (mean change in SEAS-CD = -6 points vs -2 points, respectively; P = 0.0008). These data show that the strongest potential of anti-TNF agents to induce transmural and extraintestinal healing concerns the first, early period of treatment. Thus, it can be hypothesized, that the degree of the healing effect achieved during the induction anti-TNF therapy is a predictor of further treatment's efficacy. However, it should be mentioned that even after 1 year of biological therapy, there was only one case of a complete recovery from inflammatory lesions seen in MRE (1/13-8% of patients in the long-term remission; data not shown).
In contrast to the induction anti-TNF therapy, which improved almost all MRE parameters of CD activity in the responders group, further clinically successful treatment results only in a significant decrease of bowel wall thickening, mesenteric lymphadenopathy, and fat wrapping with vascular proliferation; however, these data should be interpreted with caution due to the low number of patients experiencing the loss of response to anti-TNF agents. Nevertheless, it can be hypothesized that the aforementioned MRE parameters are mostly influenced by anti-TNF antibodies, and that they are responsible to the greatest extent for the transmural and extraintestinal healing phenomenon. The question that needs to be answered is whether these observations could be indirectly translated into hypotheses on the mechanisms and primary sites of action of anti-TNF agents. This could be very problematic; however, according to the latest reports on the pathogenesis of inflammatory lesions in CD, it is suggested that one of the crucial roles is played by the mesenteric adipose tissue, in which significant vascular proliferation is seen [16]. It was hypothesized that so-called creeping fat is a potent source of pro-inflammatory cytokines and other paracrine mediators, and that it plays a significant role in the immune response to commensal bacteria [17]. In this hypothesis, further transmural inflammatory lesions are a consequence of an unusual transformation of mesenteric fatty tissue, as it was suggested that creeping fat in CD seen in MRE has several features different from the tissue seen in other gastrointestinal diseases, showing a more inflammatory and fibrous pattern [18,19].
According to these data, we observed that a significant influence of anti-TNF antibodies, seen in all study groups (short-term and long-term responders, as well as among primary non-responders), concerned only the mesenteric fatty tissue. Thus, one could hypothesize that the primary site of action of anti-TNF antibodies is the mesenteric creeping fat. Hypothetically, the decrease in the CD activity seen in MRE in terms of the mesenteric fatty tissue in the primary non-responders group could be an evidence, that anti-TNF antibodies started to act, but some poorly defined factors did not allow them to lead to further transmural and extraintestinal healing.
Our study has several limitations. In our opinion, the most important one is the retrospective nature of the analysis. However, as all MRE studies were blinded for the radiologist and then randomly analysed, we believe that this methodological fact reduces the risk of statistical bias. Another possible limitation is the use of concomitant medicines. Although the doses of immunosuppressive drugs were stable during the whole observation, tapering of steroids after finishing the induction phase of anti-TNF therapy or discontinuation of antibiotics were allowed in our study. It could be hypothesized that these facts can bias the interpretation of the direct influence of anti-TNF agents on CD inflammatory activity in MRE. However, in our real-life study group, we could not exclude those patients, as tapering of steroids or discontinuation of antibiotics is a consequence of successful application of anti-TNF antibodies. Moreover, steroids and antibiotics should not be used for a longer period of time. There are also no data regarding the intestinal healing effect of steroids or antibiotics in CD. This phenomenon has been shown only for anti-TNF agents and, to a lesser extent, for immunosuppressive drugs [9]. Thus, we hypothesize that the fact of tapering the steroids or discontinuing the antibiotics in our study group had limited or even had no influence on intestinal healing seen in MRE, and the observed phenomena in imaging studies are only related to application of anti-TNF antibodies.
Another possible limitation is the used SEAS-CD scoring system, as it assesses MRE features more on a global level and not on a bowel segment level, as it is done with the MaRIA system [12,13]. This could be hypothetically problematic in patients in whom one intestinal segment responds to the therapy and other does not; however, such a situation is not likely to occur. Nevertheless, SEAS-CD takes into account the length of inflamed bowel loop, which is quantified and scored separately. Moreover, SEAS-CD scoring system assesses the CD activity (together with quantification of its extent) separately in the jejunum and ileum. That is why, although SEAS-CD does not allow for segmental assessment similar to MaRIA, it takes into account the different CD locations throughout the gastrointestinal tract and its extent in the bowel loops. Thus, it can be even reliable in the hypothetical aforementioned situation of simultaneous response and non-response to anti-TNF therapy in different bowel segments in the same patient.
To conclude, in our study we showed that MRE is a useful tool in the assessment of CD course in patients treated with anti-TNF agents. We confirmed that transmural and extraintestinal healing occurs only in individuals responding to the therapy, although the reduction of the inflammatory lesions seen in MRE takes place to some extent also in the non-responders. The most important components of the transmural and extraintestinal healing phenomenon seem to be the bowel wall thickening, mesenteric lymphadenopathy and wrapping of the mesenteric fatty tissue with vascular proliferation. Data from our study cannot indicate the possible primary site of action of anti-TNF antibodies leading to intestinal healing, although the importance of mesenteric fatty tissue is discussed. There is too little evidence of a crucial role of creeping fat in the efficacy of biological therapy. We hypothesize, however, that our data could be an interesting background for further analyses of the role of mesenteric fatty tissue in anti-TNF therapy of CD. | 2016-05-12T22:15:10.714Z | 2015-06-07T00:00:00.000 | {
"year": 2015,
"sha1": "b8cb6bc97a088f225cdacc9f935f7c7e4161620f",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00261-015-0466-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b8cb6bc97a088f225cdacc9f935f7c7e4161620f",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
67842621 | pes2o/s2orc | v3-fos-license | Mathematical Analysis of a Reactive Viscous Flow through a Channel Filled with a Porous Medium
An investigation has been carried out to study entropy generation in a viscous, incompressible, and reactive fluid flowing steadily through a channel with porous materials. Approximate solutions for both the velocity and temperature fields are obtained by using a rapidly convergent Adomian decomposition method (ADM). These solutions are then used to determine the heat irreversibility and Bejan number of the problem. Variations of other important fluid parameters are examined, presented graphically, and discussed.
Introduction
Studies on heat irreversibility in moving fluids find their relevance in several geological, petrochemical, and industrial applications. In most flows at extremely high temperature, heat irreversibility is unavoidable. This usually leads to material waste due to reduced efficiency of the thermofluid machine. To conserve energy, Bejan [1] introduced an approach based on the second law of thermodynamics to predict the performance of thermal systems so as to maximize the scarce available energy for work and minimize wastage. Following his analysis, Al-Zaharnah and Yilbas [2] considered the irreversibility analysis in a viscous pipe flow. Haddad et al. [3] examined the heat irreversibility in forced convective flow in a concentric cylindrical annulus under diverse flow conditions. Kahraman and Yürüsoy [4] applied the same approach to study the heat irreversibility in non-Newtonian fluid flow through pipes. Aksoy [5] considered the influence of couple stresses on the development of heat irreversibility in a channel with an adiabatic surface and constant heat flux. Ting et al. [6] considered the irreversibility associated with nanofluids in a microchannel with porous materials using water-alumina. Moreover, Khan and Gorla [7] addressed the convective problem in non-Newtonian fluid flow through a channel with a porous medium and heat flux. Revellin et al. [8] addressed the thermal performance of adiabatic two-phase flow using two different methods. Hedayati et al. [9] utilized thermodynamic analysis to optimize flow on a nonstationary wedge. Butt and Ali [10] reported the irreversibility analysis of fluid slippage with convective boundaries. Other works that focused on the minimization of energy losses in a fluid flow can be found in references [11][12][13][14][15][16][17][18][19] and many more too numerous to be listed.
From the applications' point of view, studies on the transport of reactive fluids in porous media are very important since such flows occur in many important areas like water treatment using fixed beds, agriculture, oil recovery, groundwater flows, geothermal engineering, exhaust systems in combustion, material processing, and reservoir engineering. Recently, Rundora and his associates [20][21][22] documented several investigations on unsteady reactive fluid flow in a porous medium and how the flow evolves to the steady state. Bég et al. [23] examined the flow of viscoelastic fluid through a non-Darcian porous medium. Makinde [24] studied the inherent heat irreversibility in reactive fluid through a channel filled with porous material.
In the studies above, entropy production in the flow of a viscous incompressible fluid through a porous medium has not been investigated. Therefore, the work done in [24] can be further extended to give more interesting results on the thermodynamics and heat transfer properties of the fluid flow. This is because a huge amount of money and effort could be wasted if the inherent irreversibility in the fluid flow is not well addressed. The specific objective of this article is therefore to examine the rate at which entropy is produced in a viscous fluid flowing through a porous medium. The problem under consideration is nonlinear due to the exponential nature of the rate law in Arrhenius kinetics for combustible fluids. In view of this, an exact solution for the temperature field may not be obtainable. To solve the problem, we seek an Adomian series solution in order to avoid linearization of the exponential term. The Adomian decomposition method is a straightforward way of solving many kinds of differential equations arising from physical scenarios. It has been used extensively in the last few decades, as reported in the bibliography by Rach [25], and, more recently, the method has been applied in [26][27][28][29][30][31]. The plan of the article is as follows: the problem is formulated and the mathematical analysis is presented in Section 2; Section 3 gives the Adomian method of solution; graphical results are presented and interpreted in Section 4; and concluding remarks are given in Section 5.
Mathematical Analysis
The steady flow of a viscous, incompressible, reactive fluid through parallel plates immersed in a porous medium is studied. The flow is assumed to be fully developed and driven by an applied pressure gradient. The channel wall temperatures are kept constant. Under these assumptions, the balanced governing equations and boundary conditions follow [24], and the corresponding entropy generation equation is obtained. To nondimensionalize (1)-(3), suitable dimensionless parameters and variables are introduced, which yields the dimensionless problems. Separating the contributions of heat transfer and of fluid friction to the entropy generation, the irreversibility ratio is obtained, and from (9) the relative importance of the two contributions can be read off directly.
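Because the dimensionless equations themselves are not reproduced above, it may help to recall the standard definitions used in this type of second-law analysis; the notation below is the conventional one and is only assumed to correspond to the paper's irreversibility ratio and Bejan number.

```latex
% Standard split of the dimensionless entropy generation rate (assumed notation):
% N_1 = heat-transfer irreversibility, N_2 = fluid-friction (and porous-drag) irreversibility.
\[
  N_s = N_1 + N_2, \qquad
  \Phi = \frac{N_2}{N_1} \;\text{(irreversibility ratio)}, \qquad
  Be = \frac{N_1}{N_1 + N_2} = \frac{1}{1 + \Phi},
\]
\[
  0 \le Be \le 1, \qquad
  Be > \tfrac{1}{2} \;\Rightarrow\; \text{heat-transfer irreversibility dominates}, \qquad
  Be < \tfrac{1}{2} \;\Rightarrow\; \text{fluid-friction irreversibility dominates}.
\]
```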
Adomian Method of Solution
A direct integration of (5)-(6) leads to the integral equations (11)-(12). Due to the exponential nonlinearity in (12), we now define a series of functions as in (13). Substituting (13) into the integral equations (11)-(12), we obtain (14)-(15). The nonlinear term in (15) is expanded by Taylor's series to obtain the corresponding Adomian polynomials. The zeroth-order components of the series solutions (14) and (15) are then written down. Since the integral of a continuous function is continuous, each term of the series can be uniquely determined, with the two unknown derivatives at the lower wall serving as the parameters to be determined. Then, (17)-(19) are evaluated using MATHEMATICA and the solutions are obtained as finite series (20). The series solutions are shown to be convergent and twice differentiable (see Tables 1 and 2). Next, we establish the uniqueness of the solution (20). It is well known that a Lipschitz condition is sufficient for uniqueness of the solution; therefore, we first seek a Lipschitz constant such that the Lipschitz inequality is satisfied. To do this, the boundary-value problems (6) are converted to a system of first-order differential equations by introducing the transformations (22). With (22), (6) can now be written as a first-order system, where the guess values are chosen so that the boundary conditions are satisfied. Taking any two solutions of the integral equation (12), the problem has a unique solution whenever the Lipschitz constant is positive and much smaller than one. See Tables 1 and 2 for the numerical results.
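To make the decomposition procedure concrete, a minimal Python/SymPy sketch of the Adomian scheme for a model reactive-diffusion problem theta'' + lam*exp(theta) = 0 with theta(0) = theta(1) = 0 is given below; the model equation, the number of retained terms, and the parameter value are illustrative assumptions, and this is not the authors' MATHEMATICA code.

```python
# Adomian decomposition sketch for theta'' + lam*exp(theta) = 0, theta(0) = theta(1) = 0.
# Illustrative single-equation model, not the full coupled momentum/energy system of the paper.
import sympy as sp

y, s, r, lam, a = sp.symbols("y s r lam a", real=True)
n_terms = 4

# Zeroth component satisfies theta(0) = 0 and carries the unknown slope a = theta'(0).
theta = [a * y]
for k in range(1, n_terms):
    # Adomian polynomial A_{k-1} for the nonlinearity exp(theta), via the standard parameterization.
    t = sp.symbols("t")
    partial_sum = sum(theta[i] * t**i for i in range(k))
    A = sp.diff(sp.exp(partial_sum), t, k - 1).subs(t, 0) / sp.factorial(k - 1)
    # Invert the linear operator d^2/dy^2 by integrating twice from 0 with zero initial data.
    inner = sp.integrate((-lam * A).subs(y, r), (r, 0, s))
    theta.append(sp.integrate(inner, (s, 0, y)))

approx = sum(theta)
# Enforce the remaining boundary condition theta(1) = 0 to fix the slope a (sample value of lam).
slope = sp.nsolve(approx.subs({y: 1, lam: sp.Rational(1, 2)}), a, 0.3)
print(float(slope))
```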
Results and Discussion
In this section, the effects of pertinent fluid parameters on the velocity and temperature profiles are shown graphically.
Figure 1 shows the effect of medium porosity on the entropy generation rate. Since an increase in the porous permeability parameter implies a decrease in the medium porosity, this causes a decreased flow and a heat-trapping effect. The net effect is seen here; that is, entropy production increases only at the centerline of the channel while it decreases at the walls. An increase in the activation energy parameter decreases the fluid temperature; it is therefore expected to decrease the entropy generated in the flow region (Figure 2). In contrast, an increase in the Frank-Kamenetskii parameter is known to enhance the fluid temperature; therefore, for increasing values of this parameter, the entropy generated is expected to increase, as shown in Figure 3. In Figure 4, the effect of the viscous heating parameter is shown. The result shows that the entropy generation rate increases with increasing values of the viscous heating parameter due to frictional interaction in the fluid layers. Figure 5 represents the Bejan number's variation with the activation energy. From the plot, as the activation energy parameter increases, irreversibility due to heat transfer decreases. As a result, fluid friction irreversibility dominates over heat transfer irreversibility within the porous medium. Moreover, as the porous permeability parameter increases in Figure 6, there is reduced flow and the fluid temperature rises. The net effect shows that heat transfer irreversibility dominates over irreversibility due to fluid friction. Similar behaviour is observed in Figure 7 as the exothermic Frank-Kamenetskii parameter increases. Finally, as the viscous heating parameter increases, the viscous heat dissipation increases in the flow channel, and fluid friction irreversibility dominates over heat transfer irreversibility, as observed in Figure 8.
Conclusion
The entropy generation rate in the flow of a reactive fluid through a porous medium has been investigated. The Adomian decomposition method is used to obtain an approximate solution to the strongly nonlinear boundary-value problem for the dimensionless energy equation. The main contributions to knowledge from the present analysis are as follows: first, for a reactive fluid flowing through a porous medium, entropy generation is at its minimum at the centerline of the channel; as a result, fluid friction irreversibility dominates over heat transfer irreversibility at the centerline. Secondly, a porous medium with low permeability is one major factor that depletes the useful available energy in a thermofluid.
Figure 5: Bejan number at different values of the activation energy parameter. | 2018-12-30T06:57:02.303Z | 2016-12-27T00:00:00.000 | {
"year": 2016,
"sha1": "6d4d7eeba5a8533b290bf606ebb1f0d872c3a973",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/jmath/2016/1350578.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6d4d7eeba5a8533b290bf606ebb1f0d872c3a973",
"s2fieldsofstudy": [
"Mathematics",
"Engineering",
"Physics",
"Environmental Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
119182499 | pes2o/s2orc | v3-fos-license | Dynamics of nonequilibrium thermal entanglement
The dynamics of a simple spin chain (2 spins) coupled to bosonic baths at different temperatures is studied. The analytical solution for the reduced density matrix of the system is found. The dynamics and temperature dependence of spin-spin entanglement is analyzed. It is shown that the system converges to a steady-state. If the energy levels of the two spins are different, the steady-state concurrence assumes its maximum at unequal bath temperatures. It is found that a difference in local energy levels can make the steady-state entanglement more stable against high temperatures.
I. INTRODUCTION
In describing real physical systems one should always take into account the influence of the surroundings. The study of open systems is particularly important for understanding processes in quantum physics [1]. Whereas in most cases the interaction with an environment destroys quantum correlations within the system, it is well known that in some situations it can also build up entanglement [2] and in principle even prepare complex entangled states [3]. The dynamics of entanglement in open systems provides many interesting insights into relaxation and transport situations, in particular if the system dynamics involves many-body interactions (such as spin chains, see [4] for a review). In order to understand the role of the various parameters that compete in this setup, it is useful to find exactly solvable models. Here we study the dynamics of a model that was recently introduced by L. Quiroga [5]. It consists of a simple spin chain in contact with two reservoirs at different temperatures. In such a non-equilibrium case most studies are restricted to the steady-state to which the system converges in the limit of long times [5,6,7,8]. The dynamics for the model in the zero-temperature limit was studied in [9]. In the following, we study the dynamics of this model for generic temperatures. This paper is organized as follows. In Sec. II we describe the model of a spin chain coupled to bosonic baths at different temperatures as introduced in Ref. [5]. For completeness we follow [5] in deriving the master equation for the reduced density matrix in the Born-Markov approximation. In Sec. III we present the analytical solution for the system dynamics and show the convergence of the obtained solution to the density matrix of the nonequilibrium steady state solution. One should note that in [5] this steady-state was found only in the case when the energy levels of the spins are equal. Finally, in Sec. IV we discuss the results and conclude.
II. MODEL
We consider the simplest spin chain consisting of two spins, with each spin coupled to a separate bosonic bath. In the derivation of the master equation we follow the formalism suggested in Ref. [5]. The total Hamiltonian is the sum of the spin-chain part, the bath parts, and the spin-bath interaction, where Ĥ_S is the Hamiltonian describing spin-to-spin interactions and σ̂^z_i, σ̂^±_i are the Pauli matrices. In this paper units are chosen such that k_B = ħ = 1. The constants ε_1 and ε_2 denote the energies of spins 1 and 2, respectively, whereas K denotes the strength of the spin-spin interaction. We will see later that the energy difference Δε = ε_1 − ε_2 has a crucial role in determining the entanglement of the thermal state. We refer to the case Δε = 0 studied in [5] as the symmetric case. Our study focuses on the non-symmetric case Δε ≠ 0.
The interaction between the spin subsystem and the bosonic baths is described bŷ The operatorsV j,µ are chosen to satisfy [Ĥ S ,V j,µ ] = ω j,µVj,µ , and thef j,µ act on bath degrees of freedom (this is always possible; their explicit form will be given later on). Physically, the index µ corresponds to transitions between eigenstates of the system induced by the bath. The whole system (spin chain with reservoirs) is described by the Liouville equation We assume that the evolution of the dynamical subsystem (coupled spins) does not influence the state of the environment (bosonic reservoirs) so that the density operator of the whole systemα(t) can be written as: (irreversibility hypothesis), where each bosonic bath is described by a canonical density matrixB j = e −βjĤBj /tr[e −βjĤBj ] andρ(t) denotes the reduced density matrix of the spin chain.
In Born-Markov approximation the equation for the evolution of the reduced density matrix [10] is: and where the spectral density is given by To find a solution we go to the basis of the eigenvectors |λ i with eigenvalues λ i of the HamiltonianĤ S , |λ 3 = cos(θ/2)|1, 0 + sin(θ/2)|0, 1 , λ 3 = κ, and tanθ ≡ 2K/(∆ǫ). In this representation the dissipative operator L i (ρ) becomes with transition frequencies and transition operatorŝ In this paper we consider the bosonic bath as an infinite set of harmonic oscillators, so the spectral density has the form J (j) (ω µ ) = γ j (ω µ )n j (ω µ ), where n j (ω µ ) = (e βj ωµ − 1) −1 and J (j) (−ω µ ) = e βjωµ J (j) (ω µ ). For simplicity we choose the coupling constant to be frequency independent γ 1 (ω) = γ 1 and γ 2 (ω) = γ 2 . In the basis |λ i the equation for the diagonal elements of the reduced density matrix is given by where B is a 4 × 4 matrix with constant coefficients. The time-dependence for the non-diagonal elements has the following form where s i,j is a complex number. For the initial state of system in the computational basis {|00 , |01 , |10 , |11 } we choosê
III. EXACT SOLUTION
The analytical solution in the basis of eigenvectors |λ_i⟩ is given, for the diagonal part, by a 4 × 4 matrix of time-dependent coefficients a_ij (i, j = 1, ..., 4). Taking into account the initial conditions, the non-vanishing non-diagonal elements decay exponentially in time; several auxiliary constants are introduced in the explicit expressions. One can easily see that this solution converges with increasing time to a diagonal density matrix which does not depend on the initial conditions. In the symmetric case (Δε = 0) the above limit reproduces the result obtained by Quiroga in [5]. In order to quantify the entanglement between the spins we consider the concurrence [11] and evaluate it in the steady state (t → ∞).
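Since the explicit steady-state expressions are not reproduced here, the short NumPy sketch below shows how the Wootters concurrence used in the text can be evaluated for any two-qubit density matrix; the example matrix at the end is an arbitrary illustrative state, not the steady state of the model.

```python
# Wootters concurrence of a two-qubit density matrix (computational basis |00>, |01>, |10>, |11>).
import numpy as np

def concurrence(rho: np.ndarray) -> float:
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    syy = np.kron(sy, sy)
    rho_tilde = syy @ rho.conj() @ syy                    # "spin-flipped" state
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real))
    lam = np.sort(lam)[::-1]
    return float(max(0.0, lam[0] - lam[1] - lam[2] - lam[3]))

# Illustrative density matrix with a single coherence (hypothetical numbers).
rho_example = np.array([
    [0.100, 0.000, 0.000, 0.000],
    [0.000, 0.425, 0.300, 0.000],
    [0.000, 0.300, 0.425, 0.000],
    [0.000, 0.000, 0.000, 0.050],
], dtype=complex)
print(concurrence(rho_example))
```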
IV. RESULTS AND DISCUSSION
The dynamics of entanglement is analyzed in Figures 1 and 2. In Figure 1 the dynamics of the concurrence between the two qubits is shown. For the model considered here the spin chain Hamiltonian Ĥ_S can entangle the qubits for specific times, which gives rise to the oscillations of concurrence one observes for short times (note that the initial state is chosen to be separable, ρ̂_0 = |1,0⟩⟨1,0|). For large times, the system converges to its steady-state. One can see the disappearance of entanglement with increasing temperatures of the bosonic baths, which was shown for the steady-state in [5]. In Figure 2 the dynamics of the concurrence for different initial states of the qubits is shown. For all cases the system converges to one and the same value of entanglement. The plots in Figures 1 and 2 show clearly the competition between unitary and dissipative dynamics. If the qubits start from a "symmetric state", i.e. |1,1⟩, no oscillations in the concurrence dynamics are observed, and if the qubits start from a "non-symmetric state", i.e. |1,0⟩⟨1,0| or (|1,0⟩ − |0,1⟩)/√2, one can see oscillations of the concurrence which correspond to the energy exchange between the qubits in the unitary evolution. Both figures reveal that after a time of order t ∼ 2/γ the concurrence "forgets" about the initial conditions and converges to the same value, given by C_∞ from the end of Sec. III. The steady-state concurrence C_∞(ε_1, ε_2, K, T_1, T_2) is analyzed in Figures 3, 4, 5 and 6. In Figures 3 and 4 we plot the steady-state concurrence for the symmetric and non-symmetric cases as a function of the mean temperature (T_M = (T_1 + T_2)/2) and the temperature difference (ΔT = T_1 − T_2) of the baths. In the symmetric case one can easily see that the maximal value of the entanglement is reached for equal bath temperatures (ΔT = 0): C^eq_sym = [sinh(1/T) − 1] / [2 cosh(ω_1/2T) cosh(ω_2/2T)].
Figure 5: Maximally possible steady-state concurrence C^Max_∞(ε_1, ε_2) in the strong coupling case ε_1, ε_2 < K, with K = 1.
The critical temperature in units of K above which the steady-state becomes separable is given by T_C = [arcsinh(1)]^(−1) (T_C ≈ 1.136) [5]. It is interesting to note that in the non-symmetric case (Figure 4) the maximal entanglement is reached in the non-equilibrium case (ΔT ≠ 0). In particular, the maximal entanglement is larger than the corresponding non-symmetric equilibrium concurrence. The temperature at which entanglement disappears is a function of the energy difference Δε between the qubits; it is easy to see that this function reaches its minimum value in the symmetric case (Δε = 0). In Figures 5 and 6 we show the maximally reachable value of entanglement as a function of the qubit energies in the strong and weak coupling cases. For every pair of energies (ε_1, ε_2) we maximize the value of the concurrence over the different temperatures of the baths (T_1, T_2). One can see that in the strong coupling case (ε_1, ε_2 < K; Figure 5) the maximal value of the entanglement corresponds to the symmetric case. In Figure 6 one can see that in the weak coupling case (ε_1, ε_2 > K) the maximal value of the entanglement is reached in the non-symmetric case.
In conclusion, we have found an analytical solution for a simple spin system coupled to bosonic baths at different temperatures. We studied the dynamics of the system and showed that in the long-time limit the system converges to the steady-state solution. Resolving the entanglement dynamics allowed us to distinguish between entanglement created by the system and by the bath. For the symmetric case (ε_1 = ε_2) we reproduced the steady-state found in [5]. We focused on the non-symmetric case (ε_1 ≠ ε_2), where we found that the steady-state concurrence assumes its maximal value for unequal bath temperatures. This corresponds to a dynamical equilibrium, where the spin chain transfers heat between the baths. We also found that a difference in local energy levels can make the steady-state entanglement more stable against high temperatures. These analytical results motivate further numerical studies on longer spin chains. | 2009-05-05T08:28:44.000Z | 2008-07-02T00:00:00.000 | {
"year": 2008,
"sha1": "844af385307a37d58087a2a638757466951b450b",
"oa_license": null,
"oa_url": "http://pure.aber.ac.uk/ws/files/171385/PhysRevA.78.062301.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "844af385307a37d58087a2a638757466951b450b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
265586834 | pes2o/s2orc | v3-fos-license | Lie as the Anthropological Destruction
The theory of post-truth and the conviction that absolute objective truth does not exist have many supporters in the modern world. A lie seen from this point of view appears to be one of the many ways people contact each other. The conducted research shows that this reasoning is incorrect. It proves that a lie is a reality that destroys those who have been lied to and those who are lying, as well as other people who are found in their environment. A lie drives people away from the truth, replacing it with untruth. In this way, it strikes a person and disturbs the order in one's social environment. Thus, it becomes an anthropological destruction. This creates an "alternative world" that "competes" with God's created world. Also, by rejecting God's call to live according to the truth, man moves away from God and draws closer to Satan. The world "created" by a liar is not appropriate to human nature; it cannot become a space for his development, but is only regression and, consequently, destructive. The matter of lying and its consequences appears to be very topical today, when the "power of disinformation" is being revealed more and more clearly.
Introduction: Context and Methodological Information
Without a doubt, our times present a peculiar moment in dealing with information. 1 On the one hand, the development of a social and cultural civilization has led to the fact that information is recognized as one of the main values of modern society, and of every person within it. That is why we mainly talk about the information society, in which information is the basic commodity and currency. On the other hand, many claims and theories draw attention to the subjectivity and relativity of information, as shown in the concept of post-truth, which is used by many people, especially journalists and politicians, to characterize our current understanding of information. In addition, the current world situation related to the invasion of Ukraine and the so-called hybrid war makes many people aware of the importance of understanding what disinformation is and what effects it causes. All this points to the importance of information in a person's personal and social life (Wyrostkiewicz 2022, 4).
The second-mentioned point of view is very popular today.However, there is no lack of people convinced that a lie is bad for society and every person who functions in an environment affected by a lie.This last statement is the main research hypothesis of this study.Demonstrating its validity will take place in several steps.The first step will be to show the relationship between a lie and the truth and to place our reflection on lies in the classical moral-theological context of human deeds.This will be an introduction to the topic showing new original proposals for looking at a lie as an activity aiming to build an alternative reality in which both a liar and the person lied to are to function.The culmination of the analysis will be a theological reflection on the subject of lying concerning God and Satan.
The subject of this article is lying. It deals with the problem of its consequences in social and personal life. The purpose of the text, as the title of the article states, is to give a synthetic presentation of a lie as a factor effectively disrupting the development of man and society, and therefore creating anthropological destruction. In particular, it is about supplementing the current moral-theological concept of a lie with new original theses developed on the basis of biblical premises and the reflections of anthropologists who are not theologians. Our assumption, therefore, is to shed new light on the current knowledge about a lie and its consequences, both in the social and personal lives of the liar and the person lied to. In the era of the above-mentioned post-truth and the rejection of classic values, it appears to be an important task for researchers to answer the most important questions posed by people, in order to lead them to integral development (their absolute fulfilment, that is, achieving fullness of life, which from the theological point of view means salvation) (Nęcek and Nagórny 2005, 474-475).
As the title of the article states, the basic and primary source materials for this study are the works of theologians, philosophers, and sociologists of the Lublin scientific community (this is evidenced by the source literature). They are mainly associated with the Catholic University of Lublin, Poland. This group also includes Karol Wojtyla (John Paul II), who, before his pontificate, was a professor at this university.
Searching for the Essence of the Lie
In its most basic understanding, a lie is giving someone information that does not agree with the objective state of affairs. It is a statement that diverges from the truth, an act of deliberately misleading someone (Wyrostkiewicz 2005, 258-259). The basis of a lie is the perpetrator's freely undertaken decision to express information incompatible with his knowledge and belief in a given matter (Communicating this information does not have to be through words or verbally. It can be a gesture or other means that the recipient will see as a clear representation of the information.) Thus, we are dealing with giving someone information that the informer knows (or at least thinks) to be untrue. The premise of lying is to be aware of the truth. Therefore, a lie is a persuasive type of activity (Chudy 1992, 88-89).
The natural consequence of this condition is leading the deceived person into a lie, meaning that they will make the wrong decisions based on the (false) information obtained.As we can see, misleading someone confuses them, leading to wrong thinking and wrong decisions.In this way, it can be shown that the ultimate goal of a lie is not to present what is untrue, but to make sure that the person lied to will undertake decisions and perform actions that would not have happened if they knew the truth.This thesis is the original understanding of a liar's purpose.
According to the point of view presented above, a liar's great cunning is revealed.Attempting to make changes in the world that are not in accord with the truth, a liar tries to use other people for this purpose from whom he hides the truth.He wants to build an "alternative world" (which will be discussed later) through the actions of other people.
It is worth adding that a lie is not just "simply" presenting what is untrue as true. It is also the transmission of doubtful information (unverified and uncertain) without appropriate commentary that would raise doubts and thus indicate its uncertainty, meaning the potential falsehood of the information presented (Wyrostkiewicz 2007, 49-50).
A lie is not a passive blocking of someone's access to the truth; it is not simply a static obstacle that the person lied to has a chance to recognize and bypass. A lie always requires human activity. Thus, it is assumed that the liar will try to give the person lied to the least chance to suppose they are being lied to. Therefore, a lie is an action that not only stops the person being lied to from accessing the truth but actively directs him or her into an untrue area. This is because a liar gives false information by presenting it as true information, and so getting to the truth is difficult (Chudy 1992, 89). This does not depend on the cleverness, ability or knowledge of the person lied to (as in the case of manipulation, for example) (Wyrostkiewicz 2014), because the truth is completely hidden, "secured" and "blocked" from the recipient. These "safeguards" exploit the human person's natural traits, those arising from their deepest needs and therefore recognized as obvious: the pursuit of truth and goodness and living according to them, as well as solidarity, justice, and the need and willingness to affirm the human person (Nagórny 1997, 174-201; Pokrywka 2000, 141-73). In other words, an honest man will hardly question what the other person presents to them as being the obvious truth. Inscribed in human mentality are trusting people, faith in their natural goodness and living in solidarity with one's neighbour; these are factors that favour liars.
A Lie is an Actus Humanus
Our reflections up to now have shown that lying is not a simple or accidental act. It is a human activity with all its attributes, and therefore it carries both temporal and eschatological consequences (Nowosad and Wyrostkiewicz 2005a, 129-32).
Lying is an action that not only directs people into the area of untruth but also puts untruth instead of truth into the consciousness of the deceived and into the world in which they function (Chudy 1992, 89-90).The key in the moral-theological assessment of a lie, perceived as an actus humanus, is to draw attention to the intention of the informer (finis operantis) and the necessary, inalienable effect (finis operis) that false (ambiguous, incomplete) information will produce.
According to moral theology's theory of human action, finis operis and finis operantis are the two basic sources of morality (fontes moralitatis) of a person's actions. This means that, based on them, a decision determines whether the act is morally good or evil. The latter situation occurs when at least one of these goals (finis operis or finis operantis) is evil, meaning that it is incompatible with the sense and purpose of man's and the world's existence, i.e. when it harms the nature of one or the other (Nowosad and Wyrostkiewicz 2005a, 131).
To deepen our analysis of lies as a human activity (actus humanus), it is worth emphasizing the fact that a lying person's goal (finis operantis) is to mislead someone by allegedly providing them with true information and causing the recipient to recognize as true something that is not true.Treating what is untrue as true by the deceived is the basic effect of a lie (finis operis); the fact is that truth in his consciousness is replaced by untruth.
The decision (willingness, intention, and disposition) for someone to obtain false information is a key issue for a liar. The real truth of the message being conveyed is secondary. What the person giving the information thinks about it is fundamental. If he or she wants to mislead someone, then even when they accidentally transmit true information (for example, due to ignorance), we are still dealing with a lie (Catechism of the Catholic Church 1993, can. 2482-2484; Wyrostkiewicz 2005; Chudy 1992; Rubinkiewicz 2002). In this context, it is worth stating that in the Catholic tradition adopted by moral theology, a lie is included in the category of acts of the highest level of evil. We are talking about actions referred to as intrinsically evil (intrinsece malum), meaning those that will not reduce their evil status regardless of the circumstances. Due to their subject matter, there is also no way for good to come from them. Such activity is always "greatly evil" and causes harm to a man who finds himself in its sphere of influence (John Paul II 1993, 80; 1984, 17).
Comparing earlier reflections on a liar's goal with the above theory of a human action allows us to state that a lie is clearly an evil deed.This is due to both the intention of the lying person (finis operantis) as well as the purpose and effect of the action (finis operis).When an act is bad, it means that its necessary consequence is a specific evil that takes on many forms.To some extent, it "stays in the villain" and "works" there.This "work" consists of making changes in the area of his reason and will, which are reflected in his or her transformed thinking, evaluation and limited freedom.In an obvious and necessary way, this results in a weakening of the fundamental option (John Paul II 1993, 65-68) focused on the good and the creation of vices that systematically reduce one's sensitivity to evil, thus facilitating doing evil, and sometimes even imposing evil (Wyrostkiewicz 2013, 52-56).This means that the evil "working" in a man destroys his good nature and pushes him to perform further evil deeds.Another area of evil arises as a result of a morally wrong act "entering into the world" and "working" there, thus contributing to the emergence of evil structures that become the context of other actions (Mariański 1998, 19-32), which will be discussed later.
Further exploring the issue of the essence and impact of lies, it is worth reaching as deeply as possible in search of the essence of both of its goals (fines = finis operis + finis operantis) and of their common basis. We should not only look at a specific single act, but also see it as a link in a whole sequence and discover the goals and effects (fines) of a complex action. This means seeing the entire spectrum of the effects and their contexts; only this will lead to an understanding of the lie in question. The proposition to pay sufficient attention to the circumstances (circumstantiae) is more than a mere suggestion: the circumstances, as already noted, constitute, alongside the two objectives (fines) mentioned, the last of the three sources of the morality of an act (fontes moralitatis) (Nowosad and Wyrostkiewicz 2005a, 131).
It is also something more than the implementation of the just postulate, developed within the framework of the concept of the moral situation, that a variety of unique conditions must be included in the moral assessment, conditions which create the subjective (though not relative) space of action for a specific subject as a rational and free being. Here we are stressing the truth that every act, as mentioned, is part of a larger whole (an activity) constituting a collection of related individual deeds; thus, the effects of one action are the conditions for other actions. Relating all this to the evil act of lying, and keeping in mind the truth about the social dimension of such an activity, it is impossible not to notice that it generates evil structures, as we have already pointed out. These, in turn, create a new environment in relation to the original environment of man's life and functioning, and this becomes the source of subsequent activities (John Paul II 1984, 16).
To conclude this part about the essence of a lie and the moral-theological assessment revealed in this context, it is worth emphasizing the thesis noted earlier: a person who is lying does not say (or otherwise transmit, even non-verbally) something untrue because he does not know the truth. He does it because he wants someone to accept as true news that he, the liar, is convinced is untrue. A liar is trying to build an "alternative world" whose leaven, foundation and bond is falsehood, which he presents as the truth.
Lies Result in an "Alternative World"
Looking at the information presented above from a different perspective, that is, searching for the basic goal and effect (finis) integrating both of the above-mentioned goals and effects (finis operantis and finis operis), one can see that a person who is lying sets out to construct an "alternative world". It is a "world" based not on the truth but on what the liar wants others (the deceived) to recognize as true; this "world" is created by the false information placed in the consciousness of the recipients. These people will function within this "world" convinced that it is real (that it is based on the truth, on the objective state of affairs). At the same time, the liar knows what the truth really is.
A liar is also aware of what knowledge the person lied to possesses and the "world" created as a result of this knowledge. Paradoxically, the lying person counts on the honesty of the recipients, and this happens even when he is not aware of it. A liar's realization of this fact is a confirmation of his premeditated and even perfidious action. His action is based on the belief that the person lied to (a person who is honestly seeking the good resulting from basing his actions on the truth) will make decisions based on the information presented to him (Nowosad and Wyrostkiewicz 2005b).
Obviously, these decisions will be different from those this person would have made had they known the truth. In that case, some other, different action would have been considered the right thing to do. Convinced that the untruth presented by the liar is true, the recipient will do what he would not normally have done if he had known the whole truth. That is what the liar assumes and counts on. This is his goal (finis operantis).
One cannot fail to notice that, through the premeditated actions of the liar and through the other person acting on false knowledge based on lies, this "alternative world" becomes a reality. It is no longer a "world" existing only in the consciousness of a deceitful person; it is now a fact made real by other people's actions performed according to false information. This "alternative world" is "activated" by the deeds of the person who was lied to. Next, it is systematically built up by subsequent actions which are in some way the results of the initial lie, or whose starting or reference point were the actions of those who were lied to. This expresses the previously mentioned generation of evil structures.
It is not difficult to see at this point that a spiral of evil is winding up. All subsequent activities which, according to the intentions of the acting person, were to result in good do not bring about such effects. They do not reach even the minimum assumed by the acting person who was lied to, the one whose actions started from knowledge based on lies. The reality in which the deeds are carried out does not correspond to the one it should, because the perpetrator "adapted" his actions to the situation on the basis of wrong knowledge. These actions refer to the "false world" which, as a result of the false information received, was created in the consciousness of the person who was lied to. They are carried out in the real world, which turns out to be different from the assumed one.
Due to the inherent social dimension of every human activity (and therefore also of lies), it should be assumed that lies affect other people living in the vicinity of the liar (if we call the liar the "first" person and the person lied to the "second", then anyone else who surrounds the second person may be called a "third" person). There is no doubt that this act also becomes a condition of their actions. It is not a question of directly misleading the third person. The point is, however, that the third person must confront his thinking and actions with the activity of the person lied to, the "changed" person, who has been misled. This "transformation" of the second person is evident in the fact that the deceived person now behaves in a manner inadequate to the real situation. The third person must face the lie of the first person.
Based on the above thesis and referring to previous analyses, it is not difficult to conclude that a "changed" liar transforms the world according to his new vision, sensitivity and altered valuation. One person's transformation always leads to a transformation of the environment in which other people live, function and make decisions. Every person comprises part of the environment along with other people. This means that the environment affects people's actions, lives and even their identities. Liars live in an environment of people, but they see it from a different perspective than the others who belong to it. Liars want to exist in this "changed" environment, which includes people feeling the effects of the mentioned change in their minds and wills. They naturally adapt to it, considering it to be the right solution.
Therefore, they agree to live according to these "transformed" points of view that are not in accord with the truth (referring to the title of this part of the text and to our previous analyses, we can speak of alternative points of view). People who do not agree with this and yet are forced into it experience restrictions on their freedom. In this way, every single lie becomes the cause of a structure of evil (more or less built up and more or less intensively affecting others). It makes living according to the truth and in freedom difficult (i.e. a truly human life characteristic of personal beings), and in extreme cases makes such a life almost impossible. Lying becomes a way to achieve the appearance of fulfilment and gives the delusion of development; it appears natural and harmless, and maybe even a helpful element of the world's functioning.
The analyses carried out so far clearly show that the term "alternative world" given in the title of this section, which is the result of lies, has a dual meaning. First, it means a "world" that arose in a deceived person's consciousness. This happened against his or her will and without their awareness. It is an "alternative world" to the one in which a person lives. Second, it is all about an "alternative world" that is created in the mind of a liar. In effect, the liar tries to live in both worlds simultaneously. Reconciling these two roles is very difficult and requires great effort.
It also requires further lies that would confirm the "truthfulness" of previous ones (since no confirmation can be found in the "real" or "original" world, more lies have to be generated). A lot of commitment is needed to hide from other people the real world and the fact that the liar is the creator of an "alternative world". Someone who has started living in this "alternative world" must also lie to himself. It is where the actions of the person lied to take place. By lying to others, the liar builds a realm of untruth around himself. This realm is not only moral but also mental and temporal (Chudy 1992, 91-93).
A Liar's Relationship with God and Satan
For a moral theologian, a lie is not only a bad deed but above all a sin. It violates the eighth commandment of the Decalogue, which forbids bearing false witness (Ex 20:16). Lying also appears to be an abuse of the gift of speech received from God (or, more broadly, of the ability to communicate), which is an evident misappropriation. Acting against God also means acting against the man whom the liar wants to treat instrumentally as part of his plan to acquire a good he does not deserve. In this way, a lie also turns out to be an act of injustice, which bears the marks of sin (Derdziuk 2002).
Exploring the problem of lies in the above analyses presents them as actions contrary to the will of God and to the love revealed in Christ's saving work. It allows us to see a lie as a misappropriation against God the Creator, the Savior and Sanctifier, and even as an offence against Him, an injustice towards Him. Lying is undoubtedly a turning away from God and a disordered turning towards creatures. This means it is a sin (Greniuk 2006, 238-242).
Given the object of a lie, as emphasized above, these deeds are classified as intrinsece malum, and Catholicism includes them in the category of mortal sins (Derdziuk 2002, 182). Classifying lies as such also results from the fact that a liar acts in a conscious, voluntary and intentional way, thus violating the order established by God in very important spheres of human existence, that is, in relation to God, one's neighbour, the world and oneself. We can see in it a desire to break contact with God, who is the guarantor of order in the world. All this makes a lie a mortal sin (Greniuk 2006, 345).
When sinning, that is, rejecting God, man does not remain alone. This is due to man's nature as a person who does not lock himself up within his natural state. He replaces God's authority and plans with a pseudo-authority, Satan's devious plan. As a consequence, a sinful man acts like Satan and executes his deeds. This strong and even dramatic statement results not only from the anthropological knowledge pointed out earlier but also from research on sin. According to it, whoever sins becomes a child of Satan (1 Jn 3:6). The above thesis, which applies to every mortal sin, is, in cases where the sin is a lie, reinforced by the words of Jesus himself. He stated that everyone who lies is a child of Satan, the "liar and father of lies" (Jn 8:44).
An important original argument which not only confirms but also strengthens the thesis discussed here, namely that a liar enters into a relationship with Satan and performs acts characteristic of the devil, is the concept outlined above of the liar as the creator of an "alternative world". Here we can find a commonality with the thinking of Satan, whose destructive activity comes from the desire to be "like God" by rejecting the truth about his identity as a creature. A liar creating an "alternative world" succumbs to the devil's suggestions and tries to be "like God" (Gen 3:5); he deludes himself into thinking he is a creator. He wants to become the creator; he wants others to function according to the rules he proposes. A liar treats other people like tools that he needs to implement his plan. This gives his activity a satanic character.
Referring to the reflection on the "alternative world" as the effect of a lie, it can be stated here that the world "created" by a liar is a devilish world. It is marked by evil structures and based on them. Although they are the results of the accumulation of many lies, they always begin with one specific lie. Each subsequent lie strengthens and expands the already existing sinful structures. In this process, every lie is significant. The "alternative" character of the "world" created by a liar consists in its being different from the world planned by God the Creator. This "alternative" was not born out of love, but out of selfishness and injustice. These are its characteristic traits and also its basic principles. They leave no place for solidarity or even empathy. What counts is self-interest and achieving one's goals.
Basing something on untruth is an obstacle to reaching the truth and, consequently, also the good that is integrally connected with it (Nowosad and Wyrostkiewicz 2005b). Such a world does not correspond to the nature of man created in the image of God (Gen 1:27), who is Love (1 Jn 4:16), Truth (Jn 14:6) and the only Good (Mt 19:17). Thus, it is a world in which it is difficult to live and to fulfil oneself; it is difficult to achieve one's goal and find meaning in one's existence. It is a world in which man constantly grows tired (although he is often not aware of it).
Conclusions: A Moral Theological Synthesis
The analyses carried out and presented above clearly show the correctness of the hypothesis that a lie is harmful to every person within its reach. Research conducted according to the methodology of moral theology (i.e. referring to the Bible and to man's experience, which in this case means using the achievements of philosophy and the social sciences, including theological anthropology) (Mroczkowski 2011, 42-60) leads to the following conclusions: a lie is evil. It destroys man. This applies both to those who have been lied to and to those who lie.
Due to the social dimension of this act, a lie adversely affects the entire society that the liar creates together with other people. This means that the effects of lies are also felt by those who are not directly involved in them. A lie strikes every person it touches on various levels of their existence: the personal and the social, including their relation to God. Lies hinder man's integral development and fulfilment. It is no exaggeration to say that a lie causes anthropological destruction.
A lie is a planned, purposefully executed human activity. It is characterized not only by deliberateness but also by sophistication and premeditation. It is an action that not only directs man into the area of untruth but also imposes untruth instead of truth onto the consciousness of the deceived and onto the world in which these people function (Chudy 1992, 88-89). A liar tries to replace the truth with what is false. In this way, a liar becomes a kind of creator of an "alternative world". By creating this "alternative world", a liar imposes "new rules" on the deceived, according to which both of them try to live. However, since these rules are incompatible with the nature of man and the world and come from "another creator", they do not give man the opportunity to find fulfilment. Man was created in God's image and comes from God; in order to achieve his natural goal and find meaning in his existence, he must rely on God's rules. This means that people must know the truth that comes from the Eternal Truth. Man must associate with the truth and live according to it. This is because only the truth can set us free (Jn 8:32), meaning that it opens up the way for man to be fulfilled (Nowosad and Wyrostkiewicz 2005c).
Trying to live happily in an "alternative world" means condemning oneself to constantly generating lies. No lie and no evil gives man real peace; instead, it forces him to undertake activities through which he creates alternative worlds for himself and others. These are actions contrary to God's actions and thus also contrary to man.
Man, created in the image of the God of Truth (Gen 1:26-27), is unable to realize himself in life except by imitating God. This imitation, however, does not mean the need to create a "new world". It is a call to live according to the truth and good. Here we are talking about a life where there is no room for egoism; it is one in which man "rejects lies" (deponentes mendacium) (Eph 4:25) and moves towards a real community of persons (communio personarum) by "speaking the truth in love" (veritatem facientes in caritate) (Eph 4:15). Research in moral theology leaves no doubt that this is the only way that leads to man's natural existential fulfilment in society (Nagórny 1997, 149-154; 273-338).
The above analyses lead to the conviction that, from the point of view of moral theology practiced in the spirit of personalism, one cannot agree with statements and concepts that recognize any positive sides of a lie and consider it one of the tools of communication for maintaining and regulating social relations. A lie is an act of intrinsece malum that cannot be considered beneficial in any case, regardless of the circumstances and the goals of the actor. In other words: there is no situation in which a lie can be considered a source of good (Derdziuk 2002, 182).
The above statements, however, do not imply a naive belief that people will always say "the whole truth and nothing but the truth". Their goal is not to formulate an imperative that would oblige people to do so, nor an absolute ban on deforming the truth. There are situations in which saying what is not true cannot be treated as evil. Here we mean, among others, telling jokes, promising confidentiality, so-called defensive speech, or a white lie. However, one must be able to see the difference between these forms of expression and a lie.
Separate research should be devoted to these issues (which is the author's intention). Let it suffice to state that there are situations in which telling untruths does not mean lying. This concerns such a deformation of the truth that it is clearly recognizable as a deformation, which distinguishes it from a lie. The latter is always bad, something worth emphasizing at the end of the reflections undertaken here. Therefore, a lie cannot be justified by any circumstances or by obstacles in communicating the truth.
The thesis that lying is an element of communication is inconsistent with the knowledge presented above; it can therefore be seen as ideological. First, this point of view is not rational. Second, it is used to justify the behaviour of people who use lying for their own benefit, especially for exerting influence. It can therefore be said that the theory that lying is a normal element of communication and of building relationships between people resembles an ideology (Chlewiński 1993; Janeczek 1993), because interdisciplinary scientific research shows that lying is anthropological destruction: it destroys man and society. Here we can see the "spiral of destruction": destroying society destroys man, and destroying man destroys society.
. Michał Wyrostkiewicz et al. -Lie as the Anthropological Destruction | 2023-12-04T17:16:24.557Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "e613f3c12d02a8326d3d20f20e6a0a4e12a78a30",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.34291/bv2023/02/wyrostkiewicz",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c6f0b33f97b2660a17ecb26b95491cb4ee3fe095",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": []
} |
55149621 | pes2o/s2orc | v3-fos-license | Herbicide Phytotoxicity and Resistance to Herbicides in Legume Plants
Active substances in herbicides, just like in other pesticides, are chemical compounds synthesized in order to kill organisms which are harmful to cultivated plants. Therefore, they are toxins introduced into the environment by man on purpose. From the perspective of environmental protection, it is very significant that herbicides are most often applied directly into the soil to manage weeds. Since DDT and chloro-organic herbicides such as 2,4,5-T were withdrawn (in the 1970s) and since the EU regulations were unified for all its member countries, plant protection techniques have advanced considerably. Yet pesticides, and thus herbicides as well, continue to be a large group of xenobiotics periodically occurring at high levels in agroecosystems. These compounds infiltrate related biocenoses from air, soil, water and food (Allinson & Morita, 1995; Kolpin et al., 1998; Adomas et al., 2008). Soil may become a reservoir of various pollutants, including herbicides. Herbicides remain active in soil for different periods. Paraquat has a relatively long half-life in soil (estimated at about 1000 days). The half-life of glyphosate in soil is only 10 to 100 days, and according to Monsanto the average half-life of this herbicide is 32 days (Hornsby et al., 1996; Monsanto, 2005). Residues of persistent herbicides (e.g. atrazine, metribuzin and trifluralin) can stay in soil and destroy subsequent plantations a year or more after the herbicides had been used. Herbicides leach from soil into surface water and ground water. An assessment of herbicide content in the aquifers of Iowa showed that 75% of herbicides, despite degradation, were still detected (Kolpin et al., 1998). From soil, water or air, herbicides get into crops (Adomas et al., 2008). When pesticides are applied, residues of active substances within the acceptable maximum residue limits (MRL) can often be detected in cultivated plants. Depending on the physicochemical properties of the active substances of pesticides and the ways of their detoxification, some of these pollutants tend to increase in concentration while passing through organisms of higher trophic levels. This can lead to a significant bioaccumulation of toxins in food chains (Allinson & Morita, 1995; Dinis-Oliveira et al., 2006). There is no doubt, therefore, that monitoring of herbicide (including desiccant) residues in cultivated plants is needed, so that people and the environment can be safe. Moreover, application of herbicide desiccants modifies the physiological properties of seeds and may thus lead to delayed problems, becoming evident long after the treatment.
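A single half-life figure of the kind quoted above implies simple first-order (exponential) decay, so the fraction of an active substance remaining after a given time can be estimated directly from it. The short sketch below (Python) only illustrates this arithmetic with the half-life values quoted above; the function name and the one-year horizon are our own illustrative choices, and real degradation in soil may deviate from first-order kinetics.

```python
def residual_fraction(days_elapsed: float, half_life_days: float) -> float:
    """Fraction of the applied dose remaining, assuming first-order decay."""
    return 0.5 ** (days_elapsed / half_life_days)

# Half-lives quoted in the text (days), treated here purely as illustrative inputs.
half_lives = {
    "glyphosate (Monsanto average)": 32,
    "glyphosate (upper estimate)": 100,
    "paraquat": 1000,
}

for name, t_half in half_lives.items():
    # Residue left one year (365 days) after application.
    left = residual_fraction(365, t_half) * 100
    print(f"{name}: {left:.2f}% of the applied dose left after 1 year")
```

Run on these inputs, the sketch shows why a compound like paraquat (roughly three quarters of the dose still present after a year) raises carry-over concerns that a 32-day half-life compound does not.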
Fig. 1. Basic stages of seed ontogenesis and dry mass accumulation in yellow lupin seeds. Chemical desiccation is carried out during natural dehydration of seeds.
Orthodox seeds are resistant to water loss down to below 5% of the initial water content. During desiccation, orthodox seeds can lose over 90% of their water and, as a result, slow down their metabolism and become dormant (Adams & Rinne, 1980). Seeds of this type are produced by most monocots and dicots of the temperate climate zone (Walters & Towill, 2004). Seed desiccation is a prerequisite for completion of their life cycle and it can be chemically forced. Recalcitrant seeds perish as a result of drying below a critical level of water, which is usually quite high and amounts to 40% (Roberts, 1973). Desiccation cannot be applied in this group of plants, which includes tropical trees, e.g. avocado, mango, litchi and cocoa (Farrant et al., 1993), and some grasses (Probert & Longley, 1989). The third group, i.e. intermediate (suborthodox) seeds, is tolerant to dehydration provided that drying stops at a seed water content of 6-10% (Ellis et al., 1991). It seems that chemical desiccation can be successfully applied in this group as well, although only in a mild form (with small desiccant doses). Moreover, intermediate seeds do not tolerate cold while being dehydrated (Hong & Ellis, 1996). Chemical desiccation of plants is performed during the period when orthodox seeds start their natural drying. Yet, precise determination of this period depends on the species, the pace of water loss and its expected final content. Many processes contribute to preservation of seed vigour and viability during both natural and accelerated (chemical) drying.
Biochemical changes in seeds resulting from desiccant application
The rate of water loss during seed desiccation has a profound effect on seed condition at the cellular and subcellular levels. Organelles in the cytoplasm become compacted together, cytoplasm viscosity increases and a number of degenerative reactions intensify. In order for the cell to protect itself against dehydration, it has to accumulate protective compounds. A wide group of such substances has been detected in seeds, for example proline, glutamine, fructans, polyols, trehalose, carnitine and others. A lot of plants respond to desiccation-caused dehydration by accumulating so-called 'compatible solutes' (i.e. molecules not interfering with the structure and functions of the cell), including one or more of the above substances. Unfortunately, the concentration of these hydrophilic compounds is often not sufficient to bind water properly (Hare et al., 1998). Yet it is probably one of the most important mechanisms protecting macromolecules under conditions of limited water loss. Molecules of compatible compounds form a protective layer around proteins, thus preventing protein deformation (Crowe et al., 1990). During desiccation, changes occur at all levels of plant cell functions. Maintaining the integrity of the genetic material and keeping the DNA repair mechanisms functional during dehydration are the most important for seed survival. As a result of water loss during seed desiccation, the genetic material undergoes conformational changes, depending on its nucleotide sequence and on interactions with specific DNA-binding proteins (Osborne & Boubriak, 1994). A higher stability of the seed DNA structure probably indicates increased seed tolerance to desiccation. At the early stages of seed ontogenesis, desiccation leads to frequent chemical modifications of DNA bases, resulting in a modified DNA methylation level. The later chemical desiccation is applied in the ontogenetic development of seeds, the fewer epigenetic changes it causes.
Soluble carbohydrates and their derivatives content in seeds
Yellow lupin plants (Lupinus luteus L. cv. Taper and Mister) were grown in 10-L pots in a greenhouse (Fig. 2) with a 12-h photoperiod at 20°C day/18°C night and 140 µmol photons m-2 s-1 irradiance. A mixture of peat, garden soil and sand (1:1:1, v:v:v) was used as the substrate for plant growth. On the day of flowering the plants were labelled and divided into three groups: a control and two treatments. Basta 200 SL (producer Bayer Poland, active substance (a.i.) glufosinate ammonium) was applied in the amount of 4.1 µg a.i. per pot (the preparation was diluted in 15 ml of distilled water). Roundup Ultra 360 SL (producer Monsanto Poland, a.i. glyphosate) was applied in the amount of 4.1 µg a.i. per pot (here too the preparation was diluted in 15 ml of distilled water). Herbicide levels applied in this experiment corresponded to the field doses of 2.5 and 3 L/ha, respectively. Seeds were collected at five-day intervals, starting from 15 days after flowering (DAF) until full physiological seed maturity. Soluble carbohydrate content in seeds was analysed by gas chromatography (GC) according to Piotrowicz-Cieślak (2005). Seeds (30-60 mg fresh mass) were homogenised in ethanol:water, 1:1 (v/v), containing 300 μg of phenyl α-D-glucose as the internal standard. The homogenate and the wash were combined in a 1.5 ml microfuge tube, heated at 75°C for 30 min to inactivate endogenous enzymes and centrifuged at 15 000 g for 20 min. The supernatant was passed through a 10 000 MW cut-off filter (Lida, Kenosha, WI, USA). Aliquots of 0.3 ml of the filtrate were transferred to silylation vials and evaporated to dryness (Eppendorf Concentrator 5301). Dry residues were derivatized with 300 μl of silylation mixture (trimethylsilylimidazole:pyridine, 1:1, v/v) in silylation vials (Thermo Scientific) at 70°C for 30 min, and then cooled at room temperature. One μl of carbohydrate extract was injected into the split-mode injector of a Thermo Scientific gas chromatograph equipped with a flame ionisation detector. Soluble carbohydrates were analysed on a DB-1 capillary column (15 m length, 0.25 mm ID, 0.25 μm film thickness, J&W Scientific). Soluble carbohydrates were identified with internal standards as available, and concentrations were calculated from the ratios of the peak area of each analysed carbohydrate to the peak area of the respective internal standard. Quantities of soluble carbohydrates were expressed as mean ± SD for 3-5 replications of each treatment.
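The quantification step described above reduces to a simple internal-standard ratio: the amount of each carbohydrate is proportional to the ratio of its peak area to the internal-standard peak area, scaled by the amount of standard added (300 μg per sample). The sketch below is a minimal illustration of that calculation; the peak areas, the sample mass and the assumption of a response factor of 1.0 are hypothetical and are not taken from the chromatograms discussed here.

```python
def sugar_amount_ug(peak_area: float, istd_area: float,
                    istd_amount_ug: float = 300.0, response_factor: float = 1.0) -> float:
    """Analyte amount (µg per sample) from the ratio of its peak area to the
    internal-standard peak area, scaled by the amount of standard added."""
    return (peak_area / istd_area) * istd_amount_ug / response_factor

def content_mg_per_g(amount_ug: float, sample_mass_mg: float) -> float:
    """Convert µg of analyte per sample to mg per g of seed fresh mass."""
    return (amount_ug / 1000.0) / (sample_mass_mg / 1000.0)

# Hypothetical peak areas for a single 45 mg (fresh mass) seed sample.
istd_area = 12_500.0
peaks = {"sucrose": 18_300.0, "stachyose": 65_400.0, "verbascose": 21_700.0}

for sugar, area in peaks.items():
    ug = sugar_amount_ug(area, istd_area)
    print(f"{sugar}: {content_mg_per_g(ug, 45.0):.1f} mg/g fresh mass")
```

In practice a separately determined response factor per sugar would replace the 1.0 assumed here, since flame ionisation response differs between carbohydrates.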
The role and content of soluble carbohydrates in seed drying
The composition of carbohydrates, with particular regard to the content of raffinose family oligosaccharides, has been measured in seeds of many species. Raffinose content exceeds 1% d.m. in seeds of bean, soybean and pea. A high content of stachyose (over 4% d.m.) is found in vetch and soybean seeds, whereas a verbascose content of over 2% d.m. was measured in bean and pea (Horbowicz & Obendorf, 1994). Soluble carbohydrate content in lupin seeds varies from 6 to 15.6% d.m. (Piotrowicz-Cieślak et al., 1999; Piotrowicz-Cieślak, 2005). The share of raffinose family oligosaccharides (RFO) in the overall content of soluble carbohydrates in lupin seeds amounts to 46 to 76%, with the prevalence of stachyose. A common characteristic of lupin seeds is that the RFO content in embryonic axes is two times higher than in cotyledons.
This feature, connected with the higher metabolic activity of embryonic axes compared to cotyledons, was also observed in all other legume seeds studied (Horbowicz & Obendorf, 1994; Górecki et al., 1997). Oligosaccharide contents in seeds depend on the rate of seed maturation (Piotrowicz-Cieślak et al., 2003). In maturing seeds of lupin, cv. Taper and Mister, accumulation of fresh and dry mass was at first slow and then rapidly increased between the 25th and 35th day after flowering. The first symptoms of seed viability were observed on the 20th day after flowering. In the period of natural seed desiccation, the seed mass increased rapidly. In both lupin varieties, dry mass rapidly increased after applying desiccants, which resulted in a decrease in fresh mass. Quick desiccation also increased the electrical conductivity of seed exudates. Yet, such accelerated maturation did not have any significant impact on seed germination (Fig. 3). Properly conducted desiccation is of key importance for maturing seeds to acquire vigour (Sanhewe & Ellis, 1996). Both these phenomena depend on the biosynthesis of specific stress-related proteins induced by ABA (late embryogenesis abundant proteins; Blackman et al., 1992) and on the accumulation of a considerable level of soluble carbohydrates in seeds. The assessment of soluble carbohydrate accumulation was carried out in maturing seeds of L. luteus cv. Taper and Mister. The length of the maturation period, from flowering to full seed maturity, was 45 days. Seeds accumulated monosugars (glucose, fructose, galactose), sucrose, cyclitols, raffinose family oligosaccharides and galactosyl cyclitols.
In the initial phase of seed development (15-20 days after flowering) the monosugar content (fructose, glucose and galactose) was high (Fig. 4) and gradually increased, reaching a maximum on the 30th day after flowering (DAF). At full seed maturity, monosugars were present in negligible amounts. Upon chemical drying the content of monosugars decreased rapidly, particularly in the first five days after desiccant application. The rate of the decrease in monosugar content was similar when Basta and Roundup were applied (Fig. 4).
The chemical drying of seeds promoted an increase in cyclitol content, particularly of myo-inositol. The content of sucrose in the initial phase of seed development was high. At the beginning of desiccation the sucrose level decreased, reaching a minimum at full physiological seed maturity (45 DAF).
Chemical drying induced sucrose synthesis in seeds (Fig. 5). Among galactosyl cyclitols, galactinol, ciceritol, trigalactopinitol A, galactopinitol B and digalacto-myo-inositol were present in the highest amounts. In seeds which were not chemically dried, the content of galactinol increased to reach a maximum at 30 DAF. In seeds desiccated with Basta and Roundup, its maximum concentration dropped five days after treatment. In the course of seed drying, the content of galactinol decreased, reaching the minimum level in naturally growing, fully mature seeds. In the chemically dried seeds the content of galactinol was higher; more intense synthesis of ciceritol and trigalactopinitol A was also found after chemical drying of seeds (Fig. 5).
The dominant reserve carbohydrates in lupin seeds were raffinose family oligosaccharides (Fig. 6). The level of these metabolites gradually increased in the course of seed desiccation, particularly intensively in chemically dried seeds. Stachyose was the member of the raffinose family oligosaccharides that occurred in the highest concentration. In mature, non-desiccated seeds its level was 43.17 mg/g d.m. In the chemically dried seeds, after the application of both Basta and Roundup, the level of stachyose was higher and persisted until the end of the physiological maturity of seeds (Fig. 6). The authors of the papers published so far on the accumulation of RFO in seeds come to the unequivocal conclusion that drying induces the accumulation of RFO in embryonic tissues. The increases in RFO contents are paralleled by increasing seed resistance to desiccation (Obendorf, 1997). The protective action of RFOs results from their role in inducing cytoplasm vitrification and in stabilizing membranes and macromolecules in dehydrated cells (Bernal-Lugo & Leopold, 1985). In the course of seed drying (chemical or natural), accumulation of osmoprotective substances takes place, such as proline, betaine, oligosaccharides including RFO, cyclitols and their derivatives (Carpenter & Crowe, 1989; Ramanjulu & Bartels, 2002; Piotrowicz-Cieślak et al., 2007). Our research demonstrates that the dominant soluble carbohydrates in maturing embryos of L. luteus were raffinose family oligosaccharides and galactosyl cyclitols.
Fig. 5. Soluble carbohydrates content [mg/g fresh mass] in lupin seeds during maturation and after application of herbicides: Basta 200 SL and Roundup Ultra 360 SL. Data points represented the mean ± SD for ten replicate samples: control (o), Basta treatments (Δ), Roundup treatments (■).
Seeds accumulated significant amounts of stachyose, verbascose, trigalactopinitol A and B. Soluble carbohydrates characterised by high molecular weight (stachyose, verbascose) belong to the main osmoprotectors in lupin seeds. They also contain several hydroxyl groups (seven in the case of verbascose). Polyhydroxy compounds can substitute for water in stabilizing membrane structure in the dry state.
Trehalose is indicated as the optimal osmoprotector; it contains 8 hydroxyl groups and has a molecular weight 2.5 times lower than verbascose (Crowe & Crowe, 1984). Lupin seeds do not accumulate trehalose, but similar functions can be performed in them by other soluble carbohydrates. Bianchi et al. (1993) point out that tissues should contain from 10 to 15% of sucrose, or 5 g of sucrose for every gram of lipid, to tolerate drying well. Lupin embryos contain less than 3% of sucrose, but they tolerate chemical drying relatively well. Therefore, it is likely the total content of all soluble carbohydrates, including sucrose, that becomes significant. Chemical drying of seeds enhances the increase of soluble carbohydrate content in lupin embryos: raffinose family oligosaccharides and, to a smaller extent, galactosyl cyclitols. Thus, galactosyl cyclitols are probably important osmoprotective agents in lupin seeds. This hypothesis is based on the biochemical and physiological characteristics of galactosyl cyclitols, similar to those of RFO (Obendorf, 1997). It was shown that in buckwheat seeds galactosyl cyclitols indeed contribute to the development of seed resistance to desiccation (Horbowicz et al., 1998). These compounds are formed by attaching one or more galactosyl radicals to a cyclitol (most commonly inositol). Galactose is attached to the cyclitol by α-(1→2) or α-(1→3) bonds. Galactosyl cyclitol series are distinguished based on the form of cyclitol which is the acceptor of the galactosyl radical. So far the following structures of galactosyl cyclitols have been discovered and defined: galactinol, galactopinitol, fagopyritol and galactoononitol. The accumulation of galactosyl cyclitols has been observed in seeds of different plants (Piotrowicz-Cieślak, 2004). Particularly rich in galactosyl cyclitols are the seeds of buckwheat, castor oil plant and lentil (Horbowicz & Obendorf, 1994). Lupin seeds accumulate up to 2% d.m. of galactosyl cyclitols (Piotrowicz-Cieślak et al., 2003). Under desiccation, seeds of legume plants (soybean, yellow lupin) accumulate mainly RFO, despite the fact that during natural drying these seeds also form galacto-D-pinitol and D-chiro-inositol (Górecki et al., 1997). Water stress, resulting from soil drought or low or high temperature, may have a significant influence on prompting the biosynthesis of galactinol and RFO in vegetative tissues and maturing seeds. In leguminous plants exposed to water stress an intense accumulation of α-D-galactosides is found (Streeter et al., 2001). Water stress resulting from soil drought and chill also induces the accumulation of galactinol and raffinose in vegetative tissues of alfalfa and Arabidopsis (Taji et al., 2002; Cunningham et al., 2003; Zuther et al., 2004) and in germinating tomato seeds (Downie et al., 2003). The RFO level depends on the activity of galactinol synthase. The activity of this enzyme is related to temperature (Panikulangara et al., 2004) and to the total RFO amount in leaves and seeds (Castillo et al., 1990). In the genome of Arabidopsis, seven genes have been identified as responsible for the activity of galactinol synthase. Two of them are activated by drought and salinity, whereas one is activated by low temperature (Taji et al., 2002). The level of stachyose and verbascose depends on the level of initial substrates, including myo-inositol (Hitz et al., 2002) and sucrose. Herbicides modify the content of soluble carbohydrates in seeds and remain in soil after having been applied.
Toxicological tests are a simple, inexpensive, and quick method to assess their impact on subsequent plants.
Toxicological tests in the environmental assessment
A wide range of analytical approaches are used to assess the effect of contaminants on the environment at all levels of its complexity, from studying biochemical changes in single cells to changes measured at the ecosystem level. The holistic, system approach so characteristic of modern science, combining the efforts of experts in many fields, is increasingly applied also in environmental toxicology studies. Yet, it is not possible to precisely characterize all possible pollutants due to their large number, different concentrations and molecular weights, or the high reactivity of widely spread low-molecular-weight substances (Kahru et al., 2008). The key task of toxicology and ecotoxicology is a direct evaluation of the risks resulting from environmental contamination and refers to, among other issues, creating classification systems based on increasing levels of toxicity (Persoone et al., 2003). Until recently, physical-chemical methods were considered the basic way to diagnose the condition of the environment and its specific elements. Although these analyses greatly facilitate elimination of some toxic substances, they do not fully characterize the biological activity of a substance on affected organisms. They only inform us about the level of contamination, but do not predict its biological consequences. Precise analytical methodologies are mostly worked out only for those compounds which are subject to strict legal regulations. Moreover, such chemical analyses sometimes blur the real environmental threats (Manusadžianas et al., 2003; Persoone et al., 2003; Wolska et al., 2007). A valuable alternative supplementing this purely chemical approach has been worked out, based on the following principle: measure the exposure and analyze the accumulation and metabolism of the contaminant in living organisms. Relatively inexpensive and biologically founded biotests are being developed for this purpose.
an ecosystem (a mesosystem, a microsystem), at the same time defining the make-up of substances and their possible interactions (Kratasyuk et al., 2001; Wang et al., 2003; Persoone et al., 2003). Biotests represent an interdisciplinary approach to organisms' responses to the occurrence of particular chemical substances. In order to determine the impact of these chemical substances, biotests employ the methods of physiology and biochemistry. The obtained results unambiguously determine the toxicity of a given sample, or the lack of it, as compared to the response of an organism not exposed to the chemical substances present in the analysed sample (Simeonov et al., 2007). Methods used in toxicology and ecotoxicology, based on morphological and/or physiological disturbances at the cell or organism level, and sometimes in consequence on the death of a given organism, aim at protecting the environment, which indirectly results in protecting the life and health of the human population (Celik et al., 1996; Dinis-Oliveira et al., 2006). The current form and shape of these tests and the recommendations for their proper usage are a result of more than twenty years of research. In consequence, the conditions of conducting such tests have become standardized, the costs related to their application have decreased, the test cultures have become widely available (Kaza et al., 2007), and measurements have become unified and comparable.
Biotests and their classification
Interdisciplinary analytical methods based on living material illustrate potential threats posed by contaminants or their mixtures in diverse environmental matrices. The multitude of methods causes many problems with classifying them (Persoone et al., 2003). Primary classifications referred to information about the level of contamination in a given element of the environment, considering the place where the analysis is performed. Laboratory tests are based on model samples prepared under controlled conditions, whose results are used to determine the toxicity of real samples. Moreover, samples retrieved from specific components of the environment (e.g. water, soil, wastewater) are analysed and compared with standard samples. In comparative studies of the water environment, it is best to apply various methods and to test the same samples with various organisms in order to determine the level of test sensitivity and eliminate errors resulting from the application of only one test (Kratasyuk et al., 2001). Another way to best account for specific environmental conditions is to perform an in situ analysis, which utilises the responses of organisms living in the natural environment (Anderson et al., 2004; McWilliam & Baird, 2002). Such an analysis enables continuous replacement of the medium, mainly water, when fish or plants are used as indicator organisms, and it is classified as a dynamic biotest (Sundt et al., 2009). If replacement takes place at set time intervals, such a biotest is defined as a half-static one (Blanck et al., 2003; De Liguoro et al., 2009). In a static test the medium is not replaced until the end of the analysis. While analysing toxicity, the tested pollutant at a certain concentration is placed in the medium, i.e. water, sediment or soil, only once.
Studies based on the assessment of various substances' impact on morphological and physiological changes in the indicator organism pose many difficulties in classifying them into a specific group of toxicological tests due to the wide spectrum of methods they employ. The active element of the test, i.e. the organism used in the test, may be considered a basic criterion in the classification of toxicological and ecotoxicological biotests. Various plants, bacteria and animals are used as active elements (Nałęcz-Jawecki & Persoone, 2006; Adomas), and ready-made 'toxkits' are available. Sets of microbiotests (TOXKIT), used to determine acute and short-term chronic toxicity, have been developed with the use of many test organisms which belong to many phylogenetic groups and trophic levels. They are used in order to assess land and freshwater as well as coastal and sea environments. Tests of this kind are used to assess fresh water, among other media. The Phytotoxkit (Fig. 7) is a microbiotest of the phytotoxicity of soil, sludge, compost, sewage used for watering, chemical substances and biocides.
Biotests based on the analysis of changes within a plant organism, called phytotests, provide information concerning organisms of key importance to a given ecosystem, thanks to which it is possible to determine its condition and disturbances in matter flow or substance circulation. Phytotoxic substances disturb the absorption and transport of essential micro- and macro-elements in plants, which results in delayed seed germination and plant sprouting as well as deformations and underdevelopment of certain plant parts (Jin et al., 2009). Thanks to phytotests, solid knowledge of the impact of an environmental factor is acquired. Phytotests are used to assess soil (Piotrowicz-Cieślak et al., 2010a) and water (Fernández-Alba et al., 2002; Drzewicz et al., 2004) contaminated with, among others, pesticides. Biotests used in biomonitoring (Holgado et al., 2004) are applied in order to assess the condition of ecosystems, to establish the capacity of ecosystems to absorb pesticides, including herbicides, and to assess interactions among pesticides and between pesticides and the environment. In these studies many kinds of plants are used; algae from the class Chlorophyta (Selenastrum capricornutum, Scenedesmus quadricauda, S. subspicatus) are used most often to evaluate fresh waters (Küster et al., 2003; Simeonov et al., 2007; Wadhia & Thompson, 2007), while the class Bacillariophyta (Phaeodactylum tricornutum, Skeletonema costatum) is used to evaluate brackish and saline waters (Nendza, 2002; Wadhia & Thompson, 2007). Duckweed is widely used in laboratory studies of substances and their accumulation (OECD, 221); Lemna minor and L. gibba are the organisms most frequently applied as indicators in biotests (Lemna test). Phytobioindicators are also chosen from among tracheophytes, among which Sorghum saccharatum, Lepidium sativum or Sinapis alba can be distinguished. They have very small seeds, and the test is performed for three days (OECD, 208). To evaluate soil contaminated with glyphosate, yellow lupin was applied, with seeds 20 times bigger than the size recommended by the PHYTOTOXKIT™ producer (MicroBio Test Inc., Belgium). In soil environment assessment, apart from the germination of grasses, crucifers, leguminous plants and grains, macrophytic plants are also taken into consideration. However, the complicated process of growing them, the long time of their growth and their space requirements preclude their common use in biotests.
The plant organisms chosen should be characteristic of a given ecosystem. Plant species are chosen considering their availability, methods of growing, simplicity of conducting studies and biological sensitivity to a compound or a group of compounds, confirmed in a number of repeated tests (Eberius et al., 2002). In this kind of research, the opportunity to assess toxicity over a multigenerational cycle is extremely important. Thus the majority of applied phytotests are of chronic character, i.e. they last longer than 1/3 of the pre-reproductive period. Nowadays, as a result of long-lasting studies in the field of environmental assessment, scientists tend to use readily available plant cultures in their experiments, which can be used in laboratories. They grow their own algae and tracheophytes or use commercially available selected cultures in cryptobiotic or lyophilized form, supplied with microbiotests, e.g. Algaltoxkit F™, Phytotoxkit™ (Persoone et al., 2003; Wolska et al., 2007). These are kits equipped with the accessories essential for determining the level of toxicity in samples. Every microbiotest contains an active element, i.e. a living organism, ready to be activated at any moment. Toxicological assessment with the use of such tests makes it possible to compare results between continents. Moreover, thanks to the cultures attached to the test, there is no need to grow any organisms. Assessments performed with these tests are characterized by lower research costs than conventional ones; the time of organism response is shortened, and the study can be performed on samples of lower volume. Additionally, a researcher can work on a few samples at the same time (Persoone et al., 2003; Kaza et al., 2007; Wolska et al., 2007), which enables planning the experiment as a battery of tests. A battery of tests is a toxicological study which encompasses more than just one species of indicator organism. It is used in order to widen the scope of research conducted on a given chemical substance onto a bigger portion of the ecosystem, considering also other organisms which live there at different trophic levels, in the order producer-consumer-decomposer (Kaza et al., 2007). Toxicological analyses, as well as the obtained results, are highly diversified. They often aim at determining lethality or inhibition effects with reference to different bioindicators after particular exposure times. The Lethal Concentration (LC) or Lethal Dose (LD) is often determined, taking into account the Lowest Observed Effect Concentration (LOEC) or the No Observed Effect Concentration (NOEC). Typically, the Effect Concentration (EC) is established for rotifers and crustaceans and denotes inhibition of a particular physiological or biochemical activity, expressed as a percentage in a given test group. In growth tests, inhibition of the growth of plants and fungi as well as of algal biomass is considered. Apart from the above parameters, due to the development of biotechnology, enzymatic tests (Budantsev, 2005) and tests of genotoxicity (Yamamoto et al., 2001; Küster et al., 2003; Jha, 2008) have also become important in toxicological studies. The first group determines inhibition of the activity of an enzyme or a group of enzymes which catalyze a given biochemical reaction, while the second group determines genetic changes brought about by a toxicant. Sometimes toxicologists analyze parameters relating to the absorption and storage of certain substances in tissues and organs depending on the time of exposure.
These tests are referred to as bioaccumulation tests (Kahle & Zauke, 2002). Sensitivity of leguminous plants to herbicides has been successfully used in our research.
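As a concrete illustration of the endpoint definitions above (LOEC, NOEC, EC), the sketch below reads NOEC and LOEC off a simple concentration series by comparing each treatment mean with the control. A fixed 20% inhibition threshold stands in for the statistical significance test that would normally be used; both the data and the threshold are invented for illustration and do not come from any study cited here.

```python
from statistics import mean

# Hypothetical root lengths (mm) of seedlings per herbicide concentration (µM).
data = {
    0:    [42, 45, 40, 44],   # control
    10:   [41, 43, 39, 42],
    40:   [33, 31, 35, 30],
    200:  [18, 20, 17, 19],
    1000: [6, 7, 5, 8],
}

def noec_loec(data, effect_threshold=0.20):
    """Return (NOEC, LOEC): the highest concentration with less than the threshold
    inhibition versus control and the lowest concentration at or above it."""
    control = mean(data[0])
    noec, loec = 0, None
    for conc in sorted(c for c in data if c > 0):
        inhibition = 1.0 - mean(data[conc]) / control
        if inhibition < effect_threshold:
            noec = conc
        elif loec is None:
            loec = conc
    return noec, loec

print(noec_loec(data))  # (10, 40) for this illustrative data set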
Phytotests used in herbicide studies
Yellow lupin, soybean, adzuki bean, pea, lentil and alfalfa seedlings responded in a similar manner to herbicides (glyphosate, glufosinate ammonium and diquat). All seeds, irrespective of the concentration of herbicides, germinated with a frequency between 65% and 95% (Fig. 8). Seeds were considered germinated when the radicle penetrated the seed coat. With increasing concentration of herbicides, an inhibition of root elongation was observed in all plants (Fig. 9). Active substances of herbicides, depending on their chemical structure, remain in the soil solution and are absorbed by the roots of germinating seeds. Once the weeds are removed, these substances should decompose within one vegetation season to compounds which are natural in the environment (e.g. nitrogen and carbon dioxide). Yet, among herbicides there are chemical compounds which have varying half-lives. For instance, the half-life of glyphosate in soil is only 10 to 100 days (47 days on average) according to Hornsby et al. (1996), while according to Monsanto (2005) the average half-life is 32 days. Paraquat has a relatively long half-life in soil (estimated at about 1000 days). The residues of persistent herbicides (e.g. atrazine, metribuzin, trifluralin) may stay in soil and negatively affect subsequent crops even more than a year after application. This pertains especially to active substances used year after year on the same field (e.g. atrazine) (Sheets & Shaw, 1963). Thus, for both agriculture and environmental protection it is important to check what happens to the active substances of herbicides in soil (their translocation between different levels of soil and water, and their degradation) and how they are absorbed by plants (Beckie & McKercher, 1990). In such research, phytotests in which different plants serve as phytobioindicators are successfully applied. Plant sensitivity to environmental contamination is often used to estimate the degree of environmental degradation. Plants respond in different manners to many kinds of toxic substances. The symptoms include morphological changes (e.g. in seed germination or in the length of roots and shoots) and changes in plant biochemistry (osmoprotector content) (Pardo, 2010; Piotrowicz-Cieślak et al., 2010a). The phytotoxic effect is a result of an interaction between the compound and the plant in given environmental conditions. Environmental toxicity is usually determined with the use of phytotests according to OECD norms (2006) (OECD, 221), mainly with reference to pesticide (Stork & Hannah, 1996) and veterinary medicine contamination (Piotrowicz-Cieślak et al., 2010b). Biotests, in contrast to instrumental (chemical) methods, enable simple and inexpensive estimation of very low levels of active substances in soil which can be phytotoxic to crop plants.
The Phytotoxkit has been successfully applied to estimate the phytotoxicity of glyphosate, used all over the world to fight weeds and to desiccate crop plants (leguminous plants and rapeseed). Soil contaminated with increasing concentrations of glyphosate, from 0 to 2000 µM, was assessed with the use of leguminous plants (Lupinus luteus), crucifers (Brassica napus, Sinapis alba, Lepidium sativum), grains (Avena sativa) and a plant from the Poaceae family (Sorghum saccharatum) (Piotrowicz-Cieślak et al., 2010a). Glyphosate concentrations higher than 40 µM inhibited root growth.
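Root-length data of this kind are commonly summarised by fitting a log-logistic dose-response curve and reporting the concentration causing 50% inhibition (EC50). The sketch below fits such a curve with SciPy's curve_fit; the relative root lengths are invented for illustration and do not reproduce the measurements cited above.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(conc, ec50, slope):
    """Two-parameter log-logistic model: root length relative to control (1.0 = no inhibition)."""
    return 1.0 / (1.0 + (conc / ec50) ** slope)

# Hypothetical mean root lengths relative to control at glyphosate concentrations (µM).
conc = np.array([10.0, 40.0, 100.0, 400.0, 1000.0, 2000.0])
relative_length = np.array([0.95, 0.80, 0.55, 0.30, 0.15, 0.08])

(ec50, slope), _ = curve_fit(log_logistic, conc, relative_length,
                             p0=[100.0, 1.0], bounds=([1.0, 0.1], [1e4, 10.0]))
print(f"Estimated EC50 ≈ {ec50:.0f} µM (slope {slope:.2f})")
```

Reporting a fitted EC50 rather than a single threshold concentration makes results from different plant species and soils easier to compare.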
In order to see a complete picture of herbicide phytotoxicity, one also has to assess key cell metabolites, among others myo-inositol, in seedlings growing in contaminated soil. myo-Inositol is a cyclic polyol derived from glucose-6-phosphate; it has six OH groups on its ring. It is the most widespread cyclitol in the environment and a precursor of optical and methyl derivatives. myo-Inositol is localized in the cytosol and plastids in small quantities (Paul & Cockbourn 1989); it is easily incorporated into cell metabolism, and thus its (galactosyl) derivatives are said to have storage functions. myo-Inositol constitutes a key component of cell membranes (polyphosphatidylinositols take part in receiving and transducing signals). This compound occurs in considerable quantities in the form of phytic acid (its phosphate ester), an easily accessible store of phosphate ions (Loewus et al., 1990). Under dehydration stress, myo-inositol plays the role of an osmoprotectant (Nelson et al., 1998; Piotrowicz-Cieślak et al., 2007), first of all limiting the destructive changes in biological membranes induced by the stress factor.
The level of myo-inositol in plants treated with glyphosate, glufosinate ammonium and diquat indeed increased when the herbicides were used at very high concentrations (Fig. 10).
The Phytotoxkit was also used to assess the residues of herbicides applied to control weeds in winter wheat: chlorsulfuron, nicosulfuron, 2,4-DP and dicamba, within the range of 0.025-1.2 mg/kg of soil. Seeds of Sinapis alba, Fagopyrum esculentum and Cucumis sativus were the phytobioindicators here. On the basis of root elongation in 5-day-old seedlings, it was shown that Sinapis alba was the most effective in detecting chlorsulfuron and nicosulfuron residues in soil. Cucumis sativus seedlings were the most sensitive plants to 2,4-DP residues in soil, as the highest of the analysed concentrations reduced root growth by 99%. The roots of Fagopyrum esculentum were the most suitable for the detection of dicamba residues. The research demonstrated that very sensitive plants are able to detect herbicide residues in soil at a level of 0.0015 mg/kg (Sekutowski & Sadowski, 2009).
While applying herbicides, e.g. in agrotechnical activities and to destroy plants in water reservoirs (e.g. glyphosate), xenobiotics are introduced into natural aquatic systems. The mechanism of toxic activity of herbicides that inhibit photosynthesis in photosystems I and II was used to assess the phytotoxicity of paraquat, atrazine, metribuzin and diuron towards the green alga Scenedesmus obliquus. The F684/F735 chlorophyll fluorescence ratio in S. obliquus can be a quick and sensitive measure of contamination levels in water reservoirs contaminated with the active substances of the herbicides chosen for the research (Eullaffroy & Vernet, 2003). The unicellular alga Chlamydomonas reinhardtii was used to detect 16 herbicides, belonging to 11 different chemical groups, in water. Different reactions (sensitivity or no effect) of this single algal species were shown for acifluorfen, chlorpropham, diclofop-methyl (DFM), glyphosate, isoxaben, pinnacle, trifluralin, dichlorobenzonitrile (DCB), 2,4-dichlorophenoxyacetic acid (2,4-D), metobromuron, 2-methyl-4-chlorophenoxyacetic acid (MCPA), metribuzin, atrazine, hexazinone, norflurazon and terbacil 3 days after their application. The reaction of the algae depended on the quantity of the substances and on the mechanism of toxic activity of the particular chemical groups of herbicides. Such unicellular organisms can be used as phytobioindicators for quick and easy detection of different active substances of herbicides in water (Li et al., 2008).
The results of toxicity assessments of active substances in herbicides may vary depending on the sensitivity of the species used for the biotests. These relations were demonstrated through the analysis of pesticides (42 insecticides and 45 herbicides) transported in 1985-2004 into the rivers of the Mississippi River basin, which then entered Chesapeake Bay. The U.S. EPA estimates that approximately 75% of all pesticide usage in the United States is agricultural; the other 25% is for home and garden use and for the industrial, commercial and government sectors. It was found that, of the 45 tested herbicides, the ones most often used in 1985-2004 were atrazine, glyphosate, and metolachlor. Atrazine and metolachlor have been identified as more toxic to non-target plant species than glyphosate. On the other hand, the insecticides were more toxic to the animals and plants used in the study (trout, bluegill, daphnia, selenastrum, skeletonema, lemna) than the herbicides. Between 1985 and 2004, 12 herbicides inhibiting weed roots were examined in Maryland. The lowest amounts of these herbicides (0.0054, 0.0145 and 0.0345 mg/L) were phytotoxic for selenastrum, lemna, and skeletonema, which had been used as bioindicators (Hartwell, 2011).
Pea (Pisum sativum L.) and lupin (Lupinus angustifolius L.) roots were used seven days after sowing for the assessment of soil contaminated with herbicides from the sulfonylurea class (chlorsulfuron, triasulfuron and metsulfuron-methyl) and from the sulfonanilide class (flumetsulam and metosulam). The sensitivities of the species were similar in the chlorsulfuron and flumetsulam trials, and their response range varied with soil type and herbicide, e.g. between 0.75 and 6.0 ng triasulfuron g−1 in the Wimmera grey clay and between 0.125 and 8.0 ng chlorsulfuron g−1 soil in the Mallee sand (Stork & Hannah, 1996). The sensitivity of leguminous plants to herbicides has been successfully used in our research.
Conclusion
Research into pesticide toxicity to animal and plant organisms has been conducted for many years and has developed as the production and registration of active substances of these xenobiotics have increased. Most toxicological studies concern animal organisms, whereas embryophytes, and especially crop plants, have been used rather rarely as bioindicators for environments contaminated with pesticides. Over recent years ecotoxicological research has proved that active substances of herbicides present in water and soil are phytotoxic even in very low quantities, e.g. above 10 µM of glyphosate in soil (Piotrowicz-Cieślak et al., 2010a) and even lower in water, with 0.08 mg/L of atrazine affecting Lemna (Hartwell, 2011).
On the other hand, before it is registered, every active substance is analysed in order to determine the dosage that is phytotoxic to standardised mono- and dicot assay plants. Our studies have shown that glyphosate, glufosinate ammonium and diquat applied in the doses recommended by producers are not phytotoxic to leguminous plants. Only when a dose of 50 µM is exceeded do morphological and biochemical changes become apparent in these plants. Applying desiccants prior to harvest (to immature seeds) induces changes in their carbohydrate metabolism. We also found that, in order to determine phytotoxicity, a root elongation test is better than a seed germination test in soil contaminated with herbicides. The need to assess the natural environment with up-to-date toxicity assessment methods has led to various legal modernisations in the field of environmental protection in many countries (European Commission, 76/464/EEC; European Commission, 2455/2001/EC). Thanks to this, many pollutant concentrations so far believed to be safe have been found unacceptable, and appropriate regulations require their elimination or a lowering of their content in the environment. Biotests, as compared with conventional methods, constitute a relatively quick and inexpensive analysis method. Combining the two approaches (biological and chemical) makes it possible to provide an exhaustive and wide-ranging toxicological description. As a consequence, the protection of the natural environment, including the human population, is becoming more and more effective, and preventive steps are taken much earlier than they used to be. | 2017-09-15T00:37:16.278Z | 2012-01-20T00:00:00.000 | {
"year": 2012,
"sha1": "de3619f6ad259b74b9c60cd4c9c259c10b95740e",
"oa_license": "CCBYNC",
"oa_url": "https://www.intechopen.com/citation-pdf-url/25990",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "023aafd7aee3e1fd1236fc704e8891136ada07b3",
"s2fieldsofstudy": [
"Environmental Science",
"Biology",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
236734110 | pes2o/s2orc | v3-fos-license | Coordinated Planning of Electricity/gas/storage Distribution Network Based on LSTM and Demand Response
This paper presents a collaborative planning method for an electricity-gas-storage regional integrated energy system based on an LSTM neural network and demand response. First, the LSTM neural network is used for load forecasting, and the energy hub structure of the electricity-gas-storage system is established. Then, the mathematical models of power storage, gas storage, electric network topology, gas network topology, and P2G are established to minimize the expansion cost of the electricity-gas-storage system, and the collaborative planning of energy storage, power lines, and natural gas pipelines is proposed based on the existing electricity-gas coupled integrated energy system. The original model, which is difficult to solve, is transformed into a mixed-integer linear programming model by introducing auxiliary variables, and the CPLEX solver is called to solve it. Finally, the economic advantages of collaborative planning of the electricity-gas-storage system are verified by an example, showing that connecting power storage and gas storage can reduce system pressure and optimize equipment selection.
Introduction
At present, the power demand on the user side is rising, while the capacity expansion problem on the supply side cannot be solved effectively in a short time. Therefore, demand-side resources with relatively low cost are receiving more and more attention [1]. Many scholars have carried out multi-directional research on demand-side load forecasting and proposed a variety of forecasting methods, such as the linear extrapolation method, regression analysis method, time series method, wavelet analysis method, artificial neural network method, support vector machine method, and combination forecasting method [2][3][4][5].
With the gradual deepening of the coupling of electricity and gas resources, comprehensive joint planning of electricity-gas-storage increases the interoperability between networks to a certain extent and reduces the total cost. Therefore, it is necessary to study planning methods for regional integrated energy systems [6][7][8][9]. Reference [10] considers the investment cost, operation cost, and reliability cost, and optimizes a regional integrated energy system with an electricity-to-gas device from the perspective of optimal economy; reference [11] considers the collaborative expansion of wind farms and P2G equipment, and studies its impact on the system expansion scheme and wind power consumption.
However, there are few studies on the optimal allocation model of the various devices in an electricity-gas multi-energy storage system, such as energy storage batteries, gas pipelines, gas storage equipment, and P2G equipment, and no unified planning has been combined with power load forecasting technology.
Based on the LSTM model and demand response, this paper proposes a collaborative planning method for the electricity-gas-storage regional integrated energy system. Firstly, load forecasting is carried out based on the LSTM neural network. Then, the energy hub structure of the electricity-gas-storage regional integrated energy system is constructed, and the collaborative planning method after adding energy storage equipment to the existing electricity-gas coupling system is analyzed. Then, the mathematical model of the gas network topology is transformed from the original nonlinear programming model into a mixed-integer linear programming problem, which is easy to solve, by introducing auxiliary variables. Finally, the result is obtained by calling CPLEX. In the LSTM network, the cell state runs through the whole repeating structure. In the figure, σ and tanh represent the sigmoid and tanh activation functions respectively, x_t is the input quantity, h_t and h_{t-1} are the hidden states of the current and previous time steps, and i_t, f_t and o_t represent the input gate, forget gate, and output gate at the current time, respectively. The load forecasting procedure is as follows: (1) Firstly, the input load is decomposed into different frequency components.
(2) The load component data obtained after decomposition is preprocessed, and the data is normalized.
(3) The parameters of the network are set to train the LSTM model. The predicted load curve is shown in Figure 2.
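A minimal sketch of steps (1)-(3) is given below. It is illustrative only: the network size, the min-max normalisation, and the use of PyTorch are our assumptions rather than details reported by the authors, and the decomposition step is represented simply by feeding a single pre-processed load component.

```python
import torch
import torch.nn as nn

class LoadForecaster(nn.Module):
    """One-layer LSTM that maps a window of past load values to the next value."""
    def __init__(self, n_features=1, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # prediction from the last time step

# Step (2): normalise one (decomposed) load component to [0, 1].
load = torch.rand(1000, 1)                          # placeholder hourly load series
load = (load - load.min()) / (load.max() - load.min())

window = 24                                         # use the previous 24 h to predict the next hour
x = torch.stack([load[i:i + window] for i in range(len(load) - window)])
y = torch.stack([load[i + window] for i in range(len(load) - window)])

# Step (3): set the network parameters and train the LSTM model.
model = LoadForecaster()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(50):
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimiser.step()
```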
Overall description of planning problems
To better describe the characteristics of the electricity-gas-storage integrated energy system, the energy hub structure is established, as shown in Figure 3. The load characteristics of a regional integrated energy network will affect the optimal configuration results of the energy storage system. At the same time, access to energy storage will, to a certain extent, affect the line construction and equipment type selection in the electric network topology and gas network topology. Therefore, it is necessary to carry out collaborative planning for the electricity-gas-storage regional integrated energy system.
4.1 The Type of Demand Response
The demand-side response is usually used in a time-of-use (TOU) pricing system. In this paper, two types of demand response, load reduction and load transfer, are considered. In the process of load transfer, the total load should remain unchanged, that is: ∑_h P_mov,i,ys,h = 0, ∀ i ∈ Ω_mov, ∀ y, ∀ s. Where i is the node number of the distribution network; y, s, h are the year, quarter, and hour respectively; Ω_mov is the set of transferable load nodes participating in the demand response; P_mov,i,ys,h is the transferred load of node i in hour h of quarter s of year y.
4.2 User compensation cost
If demand response is brought into the planning model, the user compensation cost C_5, caused by the decrease of user comfort and flexibility, should be considered.
Where C_cut is the unit compensation cost of load reduction, C_mov is the unit compensation cost of load transfer, P_cut,i,ys,h is the total load reduced at node i in the yth year, sth quarter, and hth hour, P_mov,i,ys,h is the total load transferred, Ω_cut is the collection of reducible load nodes participating in demand response, and Ω_mov is the collection of transferable load nodes.
Demand-response constraints considering user satisfaction
In the implementation process of demand response, the less the load is reduced or transferred, and the shorter the average transfer time is (that is, the faster the transfer speed is), the higher the user satisfaction is.
4.3.1 Transferable load constraint
Where D_i,ys,h represents the original load when node i does not implement demand-side response, and σ_1 represents the maximum proportion of the transferred load to the original load.
4.3.2 Reducible load constraint
Where σ_2 represents the maximum proportion of the reduced load to the original load.
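To make the demand-response model concrete, the following sketch encodes the transfer balance, the transferable and reducible load caps of Sections 4.3.1-4.3.2, and the compensation cost of Section 4.2 as linear constraints with the PuLP modelling library. All numerical values (baseline load, σ1, σ2, unit compensation costs) are placeholders of our own, and splitting the transferred load into shifted-out/shifted-in parts is our modelling choice, not necessarily the authors' exact formulation.

```python
import pulp

hours = range(24)
D = {h: 100.0 for h in hours}      # placeholder baseline load of one node (kW)
sigma1, sigma2 = 0.15, 0.10        # max transferable / reducible share of the baseline
c_cut, c_mov = 0.8, 0.3            # placeholder unit compensation costs (yuan/kWh)

m = pulp.LpProblem("demand_response_node", pulp.LpMinimize)
p_cut = pulp.LpVariable.dicts("p_cut", hours, lowBound=0)   # load reduced in hour h
p_out = pulp.LpVariable.dicts("p_out", hours, lowBound=0)   # load shifted away from hour h
p_in  = pulp.LpVariable.dicts("p_in",  hours, lowBound=0)   # load shifted into hour h

# Transfer balance: shifted load re-appears elsewhere, so the daily total is unchanged.
m += pulp.lpSum(p_out[h] for h in hours) == pulp.lpSum(p_in[h] for h in hours)
for h in hours:
    m += p_out[h] <= sigma1 * D[h]     # transferable load cap (Section 4.3.1)
    m += p_cut[h] <= sigma2 * D[h]     # reducible load cap (Section 4.3.2)

# User compensation cost C5 (Section 4.2), to be added to the planning objective.
C5 = pulp.lpSum(c_cut * p_cut[h] + c_mov * p_out[h] for h in hours)
m += C5    # stand-alone objective here, purely for illustration
m.solve(pulp.PULP_CBC_CMD(msg=False))
```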
5.1 Objective Function
The investment subject of the collaborative planning model of electricity, gas, and storage proposed in this paper is the integrated energy service provider, and the objective function is to minimize the total cost of the coupled electricity-gas-storage network system under the condition of meeting the energy demand of users. Where C_1 is the investment cost, C_2 is the maintenance cost, C_3 is the transaction cost of purchasing electricity/gas from the superior network, and C_4 is the investment cost of energy storage.
In this paper, the whole planning process is divided into two stages. In the first stage of planning, the first-stage cost is expressed in terms of the following quantities. The logical variables X_{y,f,θ} and x_{y,f,θ} are 0-1 variables indicating, respectively, whether equipment selection θ exists in year y and whether it is put into construction at the beginning of that year; C_inv,f,θ and C_mnt,f,θ are the investment cost and maintenance cost of selection θ of the equipment f to be built, respectively. C_BUY,e and C_BUY,g are the unit purchase costs of power and natural gas, respectively. a and i are topology nodes of the regional power network, and b and m are topology nodes of the regional gas network. P_ai,ys,h and F_bm,ys,h are, respectively, the power flow of power line ai and the gas flow of natural gas pipeline bm in year y, quarter s, hour h. Ω_SUB, Ω_GATE, Ω_EL and Ω_GL are, respectively, the collections of substation nodes, natural gas station nodes, power lines, and natural gas pipelines.
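The first-stage cost structure can be sketched as a small mixed-integer program. The snippet below, again using PuLP, shows the binary build decision per candidate asset and selection θ, plus the purchase cost of energy from the superior grid; the candidate set, cost figures and demand are invented for illustration, and the model is reduced to a single representative hour rather than the full year/quarter/hour indexing of the paper.

```python
import pulp

candidates = {            # candidate asset -> {selection θ: (capacity kW, investment cost yuan)}
    "line_2_3":  {1: (500, 40000), 2: (800, 60000)},
    "pipe_1_5":  {1: (600, 35000), 2: (900, 52000)},
}
c_buy_e = 0.6             # placeholder unit purchase price of electricity (yuan/kWh)
demand = 700.0            # placeholder demand of one representative hour (kW)

m = pulp.LpProblem("first_stage_expansion", pulp.LpMinimize)
build = {(f, t): pulp.LpVariable(f"build_{f}_{t}", cat="Binary")
         for f, opts in candidates.items() for t in opts}
p_buy = pulp.LpVariable("p_buy", lowBound=0)          # power bought from the superior grid

# At most one selection θ may be built per candidate asset.
for f, opts in candidates.items():
    m += pulp.lpSum(build[f, t] for t in opts) <= 1

# Simplified supply adequacy: built capacity plus purchased power must cover the demand.
m += pulp.lpSum(candidates[f][t][0] * build[f, t] for f, t in build) + p_buy >= demand

# Objective: investment cost (C1) plus a one-hour purchase cost (C3); C2 and C4 are omitted here.
m += (pulp.lpSum(candidates[f][t][1] * build[f, t] for f, t in build)
      + c_buy_e * p_buy)
m.solve(pulp.PULP_CBC_CMD(msg=False))
print({k: v.value() for k, v in build.items()}, p_buy.value())
```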
In the second stage of planning, the investment planning of multi-energy storage in an integrated energy network mainly includes two parts: energy storage battery and gas storage device.
Where P_ES^max and C_ES^max respectively represent the maximum power and capacity of the storage battery, P_P2G^max is the maximum power of the P2G equipment, C_GS^max is the maximum capacity of the gas storage device, and P_gs^max is the maximum power of gas flowing into and out of the gas storage device; it is approximately assumed that P_gs^max = P_P2G^max. k_1 and k_2 represent the unit power and unit capacity investment coefficients of the storage battery, respectively; k_3 represents the unit power investment coefficient of the P2G equipment; k_4 represents the unit capacity investment coefficient of the gas storage device; and k_5 represents the unit power investment coefficient of the gas transmission pipeline.
5.2.1 Energy storage constraints
(1) Energy storage battery constraints. These constraints limit the charging and discharging power in period t; Δt denotes the time interval and is taken as one hour.
(2) Constraints of the gas storage system. S_GS,t-1 represents the state of charge of the gas storage system at the end of period t-1; P_gs,t,in and P_gs,t,out respectively represent the gas storage and gas consumption power in period t; P_P2G is the electrical power input to the P2G equipment. Where i, j, k are nodes of the electric network topology, P_ij,ys,h and P_jk,ys,h represent the power flows of the corresponding power lines in the yth year, sth quarter, and hth hour respectively, P_P2G,i,ys,h is the output power of the P2G equipment at node i in the yth year, sth quarter, and hth hour, and Ω_EBUS is the collection of electric network nodes.
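The state-of-charge bookkeeping described above for the battery and the gas storage can be written as linear constraints, for example as below. The efficiencies, power limits and the one-hour interval are placeholder values; the PuLP formulation is our own sketch of the constraints, not the authors' exact equations (a cyclic end-of-day condition is added as a common assumption).

```python
import pulp

T = 24
dt = 1.0                              # time interval Δt = 1 h
eta_c, eta_d = 0.95, 0.95             # placeholder charge/discharge efficiencies
p_es_max, c_es_max = 200.0, 800.0     # battery power (kW) and capacity (kWh) limits
p_gs_max, c_gs_max = 150.0, 600.0     # gas storage in/out power and capacity limits

m = pulp.LpProblem("storage_constraints", pulp.LpMinimize)
p_ch  = pulp.LpVariable.dicts("p_ch",  range(T), lowBound=0, upBound=p_es_max)
p_dis = pulp.LpVariable.dicts("p_dis", range(T), lowBound=0, upBound=p_es_max)
soc   = pulp.LpVariable.dicts("soc",   range(T), lowBound=0, upBound=c_es_max)
g_in  = pulp.LpVariable.dicts("g_in",  range(T), lowBound=0, upBound=p_gs_max)
g_out = pulp.LpVariable.dicts("g_out", range(T), lowBound=0, upBound=p_gs_max)
sog   = pulp.LpVariable.dicts("sog",   range(T), lowBound=0, upBound=c_gs_max)

for t in range(1, T):
    # Battery: state of charge follows charging/discharging over the interval Δt.
    m += soc[t] == soc[t - 1] + eta_c * p_ch[t] * dt - p_dis[t] * dt / eta_d
    # Gas storage: analogous bookkeeping for stored and consumed gas power.
    m += sog[t] == sog[t - 1] + g_in[t] * dt - g_out[t] * dt

# Cyclic condition so the storages end the day where they started (our added assumption).
m += soc[T - 1] == soc[0]
m += sog[T - 1] == sog[0]
```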
5.2.2 Topological constraints of the power network
(2) For each power line, there are power flow constraints: 0 ≤ P_ij,ys,h ≤ z_ij,ys,h · P_el,max,ij, ∀y, ∀s, ∀h (22). Where P_el,max,ij is the upper limit of the power flow of power line ij; the 0-1 decision variable z_ij,ys,h is the direction auxiliary variable of power line ij; specifically, when there is power flow from node i to node j on the line, its value is 1, and when there is no power flow from node i to node j on the line, its value is 0. It should be noted here that z_ij,ys,h and z_ji,ys,h jointly represent the power flow direction of the line from its two directions.
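Constraint (22) couples the continuous line flow with the 0-1 direction variable. A compact way to express it for a single line and hour is shown below; requiring that at most one direction is active at a time is our added assumption, and the flow limit is a placeholder value.

```python
import pulp

p_el_max = 400.0                                   # placeholder upper flow limit of line ij (kW)
m = pulp.LpProblem("line_flow_direction", pulp.LpMinimize)

p_ij = pulp.LpVariable("p_ij", lowBound=0)         # flow from i to j
p_ji = pulp.LpVariable("p_ji", lowBound=0)         # flow from j to i
z_ij = pulp.LpVariable("z_ij", cat="Binary")       # 1 if power flows from i to j
z_ji = pulp.LpVariable("z_ji", cat="Binary")       # 1 if power flows from j to i

m += p_ij <= z_ij * p_el_max                       # constraint (22): flow allowed only if direction active
m += p_ji <= z_ji * p_el_max
m += z_ij + z_ji <= 1                              # our added assumption: one direction at a time
```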
Case analysis
In this section, three planning scenarios are set, and the proposed method is applied to an actual park in Zhejiang Province. By comparing the planning results, the superiority of the proposed method is verified, and the impact of energy storage access on the joint planning of the electricity-gas distribution network is analyzed.
6.1 Example data
The topology of the 12-node electric network and 10-node natural gas network and the user load in a certain area of Zhejiang Province are shown in Figure 4. The electricity and gas networks are coupled with each other through the P2G equipment to be built. The example uses the peak-valley TOU prices of Zhejiang Province, as shown in Table 1. The natural gas network uses a unified price of 3 yuan/m³, and the heating value coefficient is 9.7 kWh/m³. Besides, the energy storage equipment available in the Zhejiang energy storage market is selected, and the main component parameters of the lithium battery and gas storage tank are given. To verify the applicability of the proposed planning method to the collaborative planning of the electricity-gas-storage integrated energy system and the impact of energy storage access on the planning results, the example scenarios are designed as follows: Scenario 1: independent planning of the electric network and the natural gas network; Scenario 2: collaborative planning of the electricity and gas systems coupled by P2G; Scenario 3: collaborative planning of the electricity-gas-storage regional integrated energy system coupled by P2G after power storage and gas storage are connected.
Results and analysis of an example
The CPLEX solver is called to obtain the optimal planning scheme under the three scenarios, as shown in Table 3. The planning results of power lines and natural gas pipelines are given in the form "initial node - terminal node (investment type)", and the planning results of P2G are given in the form "local power network node (investment type)", with 1-3 representing three types of equipment with capacity from small to large. The investment power configuration, capacity configuration, and the cost of each part of the electricity storage and gas storage system obtained by the solution are shown in Table 4. Compared with scenario 1, scenario 2 verifies that electricity-gas collaborative planning through P2G interconnection has economic advantages. Compared with scenario 1, the power supply of electric network node 3 is changed from line 2-3 to node 6 with P2G; in terms of equipment capacity, the connection of P2G makes it more flexible to convert electricity into gas, so that power in the low-price period can be converted into natural gas, the line capacity of power lines 7-10 and 10-12 is reduced, and the capacity of pipelines 1-5 and 2-7 in the natural gas network is reduced; in terms of cost, the introduction of the more flexible P2G equipment adds the investment cost of P2G, but the capacity of lines and pipelines is reduced. The overall effect is a reduction of the annual investment cost by 26900 yuan and the maintenance cost by 9100 yuan; the purchase cost from the superior power grid and gas network is reduced by 56400 yuan, and the total cost is reduced by 92400 yuan.
Compared with scenario 2, scenario 3 verifies that the coordinated planning of electricity, gas and storage can further realize economic advantages and make the system more flexible. Compared with scenario 2, the line connections between electric network nodes and natural gas network nodes remain unchanged. In terms of equipment capacity, due to the access of power storage and gas storage equipment, the time of energy supply can be shifted, and the electric energy in the low-price period can be stored and used to supply the load in the peak-price period, so that the capacity of power line 4-5 connecting the P2G equipment is reduced in the calculation results, and the capacity of pipelines 1-5, 2-7 and 7-8 in the natural gas network is also reduced. In terms of cost, the introduction of power storage and gas storage equipment adds energy storage investment cost, which increases the total investment cost by 40100 yuan. However, because the access of energy storage allows the time of energy supply to be shifted and low-cost electricity to be used more flexibly, the purchase cost from the superior power grid and gas network is reduced by 102500 yuan, and the total cost decreases by 58900 yuan.
Conclusion
Based on the analysis of the characteristics and topological structure of the components in the regional integrated energy system, a planning method for the electricity-gas-storage regional integrated energy system is proposed. The main research results are as follows: 1) The influence of energy storage on the electricity-gas coupling system is analyzed, and the electricity-gas energy storage hub structure is established.
2) The constraints of each component in the regional integrated energy system are established, and the coordinated planning model of electricity, gas, and storage is proposed with the lowest cost as the goal.
3) It is verified that the collaborative planning of electricity gas network coupling has more economic advantages than the single planning and that the access of energy storage can further reduce the cost and improve the flexibility of the system, which realizes the collaborative planning of electricity gas storage system.
The power gas storage planning method proposed in this paper lays a foundation for the follow-up research, and more complex network topology and energy forms will be considered in future research. | 2021-08-03T00:05:41.223Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "bd41e164647e9c8dddd61f06891d5c85385ec7b2",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/32/e3sconf_posei2021_02026.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "aa8ad2b8e1fa1d9b7548fc372f7fe32a34a9c206",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
90735583 | pes2o/s2orc | v3-fos-license | Giant Cell Tumour of Tendon Sheath
Giant cell tumour of tendon sheath is a benign but locally aggressive and recurrent neoplasm. This tumour commonly originates from the membrane of the tendon sheath, bursae, and joints. Malignant behaviour is uncommon. The patient presented with a progressive and irregular increase in the size of the left middle finger over a period of time, which essentially did not interfere with the functioning of the hand.
Introduction
Giant cell tumour of soft tissue is uncommon. Giant cell tumour of the tendon sheath (GCTTS) is known by various synonyms such as fibrous histiocytoma of synovium, pigmented nodular synovitis, tenosynovial giant cell tumour, localized nodular synovitis, benign synovioma, and fibrous xanthoma of synovium 2. However, each entity has subtle differences, as the names indicate. The most commonly involved age group is 30 to 50 years 3, females are more prone than males, and the tumour is rare in children. The tumour is a gradually developing, painless soft tissue mass. It is the second most common tumour of the hand. The tumour can occur in various sites such as the spine, ankle, knee, and feet 2. Clinically it is of two types: one is the localized form and the other is the diffuse form. The localized type is the common one; it is encapsulated, extra-articular and commonly seen in the tendon sheath of the fingers. The diffuse type is the rare one; it is non-encapsulated, intra-articular and involves the joints 2. The pathological nature of this disease is still controversial as to whether the lesion is neoplastic or non-neoplastic, because of the recurrence rate, which is reported at 45%. Metabolic disease is considered a causative factor, and lesions have been associated with trauma and inflammation. The tumour is composed of oval, plump histiocytes and macrophages with hemosiderin pigment, along with multinucleated giant cells and collagen strands; synovial hyperplasia is also observed. The giant cells present in these lesions resemble those of the osteoclastic type. Giant cells are nearly 100 micrometres in size with around 50 nuclei and are present throughout the lesions. The present study comprised clinical and imaging findings which gave the probable clinical diagnosis of a soft tissue tumour with a differential diagnosis of local gigantism (the history suggested that the patient, who presented at 21 years of age, had had the lesion for several years, probably a decade). Fine needle aspiration cytology gave the diagnosis, which was confirmed by histological analysis of the specimen received in the Pathology Department.
Clinical History
A 21-year-old man, a skilled manual worker, presented with a complaint of swelling over the middle phalanx of the left middle finger for the past 8 years, allegedly following a trivial trauma. The swelling was non-tender, sudden in onset and slowly progressing. On examination, a bosselated solitary swelling was present over the left middle finger, measuring 4 x 2.5 cm, of varying consistency, namely soft to yielding in nature. Signs of inflammation were not present and there was no restriction in the mobility of the joint. FNAC was done and a provisional diagnosis of giant cell tumour of the tendon sheath was made. Subsequently, after proper planning, the lesion was excised in toto.
Fine Needle Aspiration
To achieve a tentative diagnosis, fine needle aspiration cytology was attempted, and under conventional asepsis, blood-stained fluid was aspirated. Multiple smears studied revealed material composed of a few small groups of round to polygonal to elongated to spindle-shaped cells along with scattered multinucleated giant cells of the osteoclastic type 1. The background showed haemorrhage.
Macroscopy
A large, single, grey-brown soft tissue mass measuring 5 x 3 x 2 cm was received. The external surface showed bosselation. On the cut section of the mass, grey-yellow and grey-white solid and gelatinous areas were identified.
Discussion
Histologically, GCTTS is composed of multinucleated giant cells, polyhedral histiocytes and hemosiderin pigment. There are also spindle-shaped cells which are elongated in nature and arranged in bundles 1. Cellularity and mitosis have no prognostic effect 3. Growth factors such as macrophage Colony Stimulating Factor-1 (CSF1) play an important role; these factors are involved in the proliferation, differentiation and survival of monocytes and macrophages 2. The 1p13 breakpoint plays the major oncogenic role in GCTTS. Most of the cells are non-neoplastic. The neoplastic cells produce CSF1, which influences inflammatory cell recruitment and activation, an effect called landscaping. CSF1R is a group II receptor tyrosine kinase that shows structural homology with KIT. The most frequent location of the tumour is the long finger (23.5%), followed by the thumb (20.3%), index finger (20.3%), ring finger (7.8%), and little finger (7.8%), as observed in the study by Di Grazia et al. Fotiadis et al. and Briët et al., in their studies, found the most common location of the tumour to be the index finger (29.7% and 30%, respectively) 2. The treatment approach to the tumour is established by radiographic techniques. The first method used to diagnose GCTTS is ultrasound, which provides information regarding tumour vascularity, tumour size and its relationship with the surrounding tissue. Preoperatively, fine-needle aspiration is helpful to make the tissue diagnosis and to plan subsequent management. Fine-needle aspiration cytology (FNAC) is very useful in the preoperative diagnosis and helps in preoperative planning to prevent recurrence. In the present study the lesion was diagnosed as GCTTS by fine needle aspiration. A reason for the high recurrence rate of GCTTS is the subcutaneous location of the tumour arising from the tendon sheath and its deeper extension to the neurovascular bundle, which makes proper excision of the lesions difficult. The high recurrence rate can also depend on proximity to an arthritic joint, proximity to the distal interphalangeal joint of the thumb, radiological osseous erosion, the types of cells, mitotic activity, capsular invasion, and incomplete excision.
Conclusion
Whether giant cell tumour is a neoplastic or a reactive process is still controversial and needs further study. The patient presented with local gigantism and was not concerned about the swelling as it was symptomless. A preoperative diagnosis by FNAC was contributory and helped in proper planning and surgery. The histology was confirmatory. Note: The patient was symptom free and there was no recurrence during the past 12 months of follow-up; he was advised to report if there is any discomfort or swelling at the operated site. | 2019-04-02T13:12:57.997Z | 2018-01-07T00:00:00.000 | {
"year": 2018,
"sha1": "a0de859cd966a7bb581aafab8103e813866199e6",
"oa_license": null,
"oa_url": "https://doi.org/10.18535/jmscr/v6i1.31",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1b47d39defaba57d6e8b2488adc566f5f345a23b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
18086092 | pes2o/s2orc | v3-fos-license | The forgotten risk? A systematic review of the effect of reminder systems for postpartum screening for type 2 diabetes in women with previous gestational diabetes
Background Screening for type 2 diabetes is recommended for women with previous gestational diabetes (GDM). However, screening rates remain low. We aimed to evaluate reminders and reminder systems directed at women with previous GDM and at health professionals in primary and secondary health care, with the screening rate among postpartum women as the primary outcome. Methods Observational and intervention studies were included and the PRISMA guidelines were followed for the literature extraction. Results Six studies were included: two long-term follow-up studies and four early-term studies. Five studies focused on secondary care settings and one on primary care. Three studies focused on reminders to the postpartum women only, two studies on reminders to both the women and the health care professionals, and one study on the health care provider only. The types of reminders varied from letters, emails, and personal telephone calls to the women, to register-based reminders or letters to the health care professionals. Reminders were effective, but effectiveness varied between studies. Two studies found that direct telephone calls strengthened the reminding of the women. When both the women and the health professionals were reminded, screening rates decreased compared to reminding either the health professionals or the women separately. Conclusions Reminders have a potential for early detection and prevention of type 2 diabetes in this high-risk group of women; however, the kind of reminder and the frequency of reminders should be carefully considered according to the target group.
Background
Gestational diabetes mellitus (GDM) is an increasing health concern in many middle- and high-income countries and is mainly associated with the increasing prevalence of overweight and obesity. Worldwide, it is estimated that 16 % of live births in 2013 were complicated by hyperglycemia during pregnancy [1]. It is crucial for women with GDM to maintain tight control of the blood glucose level throughout the pregnancy to avoid complications such as macrosomia, preeclampsia or perinatal mortality [1]. However, the blood glucose level also has to be monitored after giving birth. A meta-analysis found that women with prior gestational diabetes had a seven-fold risk of developing type 2 diabetes within 5 years after giving birth [2]. The progression from being at risk for type 2 diabetes (T2D) to overt diabetes can be prevented or postponed by lifestyle changes that include a healthy diet and regular physical activity, accompanied by weight loss if the women are overweight or obese [3]. To prevent type 2 diabetes and to ensure early detection of a potentially developing glucose metabolism dysfunction, recurrent screening of the women should accompany the lifestyle changes initiated to prevent hyperglycemia during the GDM-complicated pregnancy. The American Diabetes Association and other health authorities recommend the first postpartum screening for type 2 diabetes between 6 and 12 weeks after delivery, followed by screening every 3 years [4,5]. A questionnaire-based survey in Canada has shown that both health professionals and the postpartum women were aware of the importance of postpartum screening for type 2 diabetes [6]; nevertheless, other studies report low rates of postpartum screening in this high-risk group of women [7,8]. In screening trials and follow-up studies of postpartum women it is well recognized that barriers and constraints for participation in screening are the main factors to eliminate if screening rates are to be increased [6,9]. In a review by Nielsen et al., emotional stress, time pressure, difficulties with adjustment to motherhood, and loss of the requisition for screening were reported as barriers among the women for postpartum screening [9]. These barriers are patient perspectives; nevertheless, health professionals' perspectives of barriers have not yet been investigated, though they are relevant to target if screening rates should be increased: reminder systems have been shown to be a potential strategy also for improving health professionals' behaviour and the health care process [10]. Pierce et al. reported in their study among British primary and secondary care health professionals that 75 % had an installed reminder system to recall women to ensure postpartum screening [11]. Reminder systems, both electronic alerts for the health professional and letter or telephone reminders for the patient, are a known strategy for improving patient compliance and have been evaluated for outcomes other than diabetes [10,12,13].
In this systematic review, we aimed to study the effect of implemented reminder systems for postpartum type 2 diabetes screening in the health care system, where the reminders targeted the health care professional, the postpartum women, or both. The effect was measured as either the percentage of women who underwent an oral glucose tolerance test (OGTT) or other screening tests for diabetes, or as various secondary outcomes, such as response rates after the reminder or attendance at any kind of follow-up visit.
Search strategy
We searched the PubMed, Cinahl, Cochrane Review Database, and Embase databases for relevant studies. The search was performed from February to March 2014 and was limited to English-language literature published from 2004 to 2014. Before the literature search we settled on three criteria for which literature was included. Our main inclusion criterion related to the study design: we preferred intervention studies; however, we also allowed studies of observational design to be included if an effect of a reminder system could be measured or evaluated. Our second criterion related to the measured outcome: we were mainly interested in the rate of women undergoing a postpartum OGTT; however, we included other secondary outcomes: other postpartum glucose tests, or the rate of any kind of response from the postpartum women after the reminder was sent. Our third criterion regarded the definition of a reminder: reminders were defined as postal reminders, email reminders, or telephone calls/text messages for the patients. For health professionals we defined reminders as for the patients, with the additional option of electronically implemented pop-up reminders/alerts or simple reminders either in paper form posted on medical reports or implemented electronically in the patient registry system. We had no inclusion criteria regarding the content of the reminders, since we were interested in the effect seen after reminders were sent and not necessarily received.
Clinical practice and guidelines for postpartum screening from the American Diabetes Association recommend the first early screening should be around the first 6-12 weeks postpartum, and every 1-3 years thereafter [4]. Thus, we were interested in both long term and early follow-up and compliance to screening. We defined the early term follow-up as the first postpartum follow-up visit, which is usually within the first 6 months after giving birth. Long term follow-up was defined as the following visits, usually 1 year after birth. According to this definition, we distinguish between the reminders related to the early follow-up visit, and the following reminders related to long-term follow-up no matter if the reminders sent after 12 months are the first reminders. In this way, women could miss the early follow-up but be reminded of the importance of follow-up and therefore attend a long-term follow-up visit.
According to the PRISMA guidelines the literature extraction was divided into four stages [14]: In stage one, the search strategy was initiated by the combination of three search words: "gestational diabetes*" AND "reminder*" AND "postpartum follow-up" OR "intervention". From that search we found 61 articles. From titles we excluded 29 articles, which were not relevant for the study question. After the database search the reference lists were further investigated in order to identify additional literature of interest. If only abstracts were available, the reference was disregarded. In the final stage, the articles were chosen after a careful reading of abstract and method sections. Figure 1 shows the flow chart for the literature search for reviewing reminder system effect. Literature search was conducted by CJ and verified by an authorized librarian trained in scientific literature search.
Data extraction
The included studies shown in Table 1 targeted healthcare professionals, patients, or both. The studies could be either observational or experimental in design: observational studies in the sense of studies not directly intervening but rather studying the effect of introducing a reminder system, and intervention studies characterised as interventions targeting a specifically defined target group. We found three study protocols, which we discarded, though the studies would have been relevant if results had been presented. Two further articles were found relevant; however, only the abstracts, published for poster presentations, were available. We did not contact the authors. We ended up with the final six references that met the inclusion criteria and were included in the review.
Search results
The included studies were published from 2009 to 2012 and were conducted in Finland (1), Australia (1), United States (1), and Canada (3). Two studies focused on the long term postpartum screening: One study investigated the effect of reminders sent 15 months after delivery and every 12 months thereafter [15] where women according to recommendation should have attended the first screening consultation and Vesco et al. investigated the effect of reminders sent before and after 3 months after delivery [16]. However, all studies were conducted aiming at reminding of the first early postpartum glucose intolerance screening test. Ethical approval for this literature study was not required.
Population characteristics
Study populations differed between the studies. The effect of reminder systems were studied on patients only (N = 3), and on both patients and health professionals (N = 3). Health professionals included general practitioners, endocrinologists, and medical residents [17,18]. Only one study reported age of the health professionals, and found 83.8 % were above 30 years of age [18]. In the three studies including patient reminders only, the patients were randomized according to a specific setting (clinical site being the intervention group) [19], patients belonging to a specific municipality [20], or patients were already included in a registry allocated to a clinical site (hospitals) or joint treatment organization [15,16]. The studies reported the women's age as mean or percentage under and over 30 or 35 years as cut-off points. Mean age (SD) ranged from 30.1 (±5.7) [20] to 34.9 (±5.2) [17]. In the papers reporting age ranges 72.8 % were over 30 years of age in Clark et al. [18] and in Vesco et al. [16] 30 and 36.3 % in the pre-implementation of reminder system group and post-implementation of the reminder system group, respectively, were above 35 years of age. Only one study did not report the women's age [15].
Reminder-systems
In the six studies included, the reminder-systems were either the main intervention [15,[18][19][20] or an integrated part of a larger intervention program [16]. In the study of Vesco et al. the postal reminders for patients was a part of a larger process improvement program that also included revised nursing protocols for the GDM women, enhancing the electronic medical record system, and education of the clinical staff in addition to the postal patient reminders [16]. The results of Vesco et al. were not divided into differentiated effects of the various initiatives and therefore the results are included in the patient reminder group. The health professionals' reminders were based on system-based pop-up messages reminding the professional to recall women for postpartum checkup. The patient reminders varied from telephone calls from nurses to the patient [16,20] or letters or mailed reminders sent to the patient [18,19].
Physician interventions
The primary sector and the secondary sector are joined in the health care system through interaction around the patient, however with different responsibilities and duties. In our definition, the primary sector also includes health care staff at elderly centres, pharmacies, and infant and neonatal health care providers supporting the mother. General practitioners, being the primary players in the primary sector, have the first contact with citizens. Thus, the primary sector can, in addition to being the first contact with patients, also play a health promoting and disease preventive role towards the public. The secondary sector includes hospitals and specialised health care providers. The secondary sector is responsible for the complex treatment of patients and can normally only be contacted by patients after a first contact with the primary sector. Going through the literature we therefore made a distinction among the health care professionals. Primary health providers are general practitioners, since they act as the first point of consultation for women with former gestational diabetes; however, the secondary system plays the dominating role in treating the women during pregnancy and preventing a worsening of the GDM state. The occurrence of using reminder systems for recalling postpartum women for glucose intolerance screening has been studied before [21]. Nonetheless, diverse results on the use of reminder systems exist: One study found that 8.2 % used an electronic alert or reminder to call women in for postpartum screening, but only 12.8 % of the 306 health professionals actually sent reminders to patients when it was time for postpartum screening [21]. Another study reported that 75 % of the secondary care specialists (obstetricians and gynaecologists) had an integrated system in place to alert when postpartum women failed to attend their follow-up visit. Among these secondary care providers, 73 % sent a notification to the general practitioners (GP), proposing that the GP recall the women for follow-up [11].
Among the primary care GPs 39 % recalled women on an annual basis and 35 % advised the women to attend screening in the future. The perception of responsibility seems to be a factor affecting the recall of patients: 45 % of the GPs thought it was the responsibility of the primary care sector to conduct early follow-up on previous GDM women; however, 26 % of the GPs thought there was no clear responsibility in any sector [11]. With the above results in mind, it is interesting to evaluate the effect reminders have on physicians and health professionals and the effect reminders have on screening rates of postpartum women.
We included two intervention studies with the focus on health professional alerts for recalling postpartum women: Clark et al. made a 2*2 factorial randomized control design on patient and/or physician reminder interventions. The study included 256 eligible women and their 256 physicians. The randomized groups were designed as: physician reminders only; patient reminders only; both physician and patient reminders; and no reminders (controls). The primary outcome of the study, as seen in Table 1, was the proportion of women who underwent an OGTT within 1 year after delivery and the reminder was sent approximately 3 months after delivery in order to conform with screening recommendations. Secondary outcome was measured as the performance of any kind of screening test. When only physicians were reminded, the postpartum women were more likely to undergo an OGTT [OR 8.4 (95 % CI 2.4; 28.5)] than if no reminders were sent (the control group). Interestingly, the association with compliance to screening decreased if both the physician and the patient were reminded OR 5.2 (95 %CI 1.4; 19.6) compared to controls. However, the results should be interpreted carefully since the study populations of both patients and physicians were small: after loss to follow-up 112 physicians remained for analyses. The number of women in the three groups varied from 31 (physician reminders only) to 81 (both physician and patient reminders). Loss to follow-up was mainly due to movement of either patient or physician. This was taken into account in sensitivity analyses where the study assumed that patients lost to follow-up all underwent OGTT screening (analyses 1) or did not undergo OGTT screening (analyses 2). When assuming that none of the women lost to follow-up underwent OGTT the effect of the reminders to both patients and physicians and the interventions on physicians only remained significant, OR 3.9 (95 % CI 1.1; 13.9) and OR 4.8 (95 % CI 1.6; 14.9), respectively.
In Lega et al. a retrospective observational study evaluated a checklist procedure. The study included 314 postpartum women with previous GDM from an internal hospital record. From this study population, 173 of the women were enrolled with a checklist indicating to the health professionals whether the woman had attended her postpartum visit. The study was an evaluation of the effect of implementing this checklist procedure in the register. Therefore no direct intervention took place; however, having a checklist registered was shown to be associated with a completed OGTT between 6 weeks and 6 months postpartum, with OR 2.99 (95 % CI 1.84; 4.85). As in the study of Clark et al., this study also had secondary outcomes, measured as the rate of women attending a postpartum visit, not necessarily completing an OGTT. The registration of a checklist for the patient was positively associated with attending a postpartum visit within the same time frame as for the OGTT: OR 3.71 (95 % CI 2.26; 6.11). Since no direct intervention took place, major strengths of the study were the absence of loss to follow-up and the direct effect, on the patients' registration, of a checklist targeting the attention of the health professional. Nevertheless, the study did not discuss differences in the allocation of a checklist and whether the allocation could confound the results in any way.
Based on these two studies it seems plausible that reminders for health professionals can increase the postpartum OGTT screening among women with previous GDM. However, this conclusion is based on two studies only, where one study had a large loss to follow-up and the large differences in the number of intervention groups. Furthermore, the conclusion can only be drawn on the first early screening after delivery and not on a long term basis. Furthermore, interestingly it seems that reminding both patient and physician does not result in increased screening rates compared to reminding either patients or physicians.
Patient interventions
The most recent study on patients' perception of health professionals' responsibility for postpartum screening revealed that patients considered primary care providers, such as general practitioners, responsible for the recall of women for postpartum OGTT. Out of 136 postpartum patients, 76 % thought it was the GP, whereas 8.8 % thought it was the responsibility of the obstetrician only, and 11.8 % thought it was the health professional treating them for gestational diabetes [6]. This confusion about which health professional to turn to for postpartum diabetes screening could potentially influence screening rates among postpartum patients in some countries. Furthermore, a qualitative study conducted in the Johns Hopkins Bayview Medical Center's obstetric clinical practice revealed that receiving care from multiple providers and lack of continuity of care were barriers for women's participation in postpartum screening [22].
Reminder-systems for patients could provide this continuity of care but also a continuity of the awareness of the risk for development of diabetes. In the study of Shea et al. two clinical sites were allocated to have implemented reminder systems contacting the postpartum patients by telephone or letter [19]. Despite that the patient groups had a large variation in number of patients (control site: 117, site A: 90, site B: 55) 28 % of patients, when adding site A and B together, returned for a postpartum visit and completed an OGTT test compared to 13.75 % (p = 0.01) from the control site. In this intervention site A and B differed: In site A only laboratory requisitions and reminder letters were sent. In site B reminder letters were sent, and an additional telephone call was made. This difference in reminder approach resulted in a stronger association of site B with completion of OGTT: OR 3.10 (95 % CI 1.35; 7.14) compared to the non-reminder control site whereas site A had an OR 1.57 (95 % CI 0.66; 3.70) compared to the control site. When the outcome was set as completion of any kind of screening test including HbA1c, random glucose test, or fasting glucose test the difference in association between site A and B attenuated: OR: 1.09 (95 % CI 0.56; 2.13) and OR 1.33 (95 % CI 0.65; 2.71) for site A vs control site and site B vs. control site, respectively [19]. In Korpi-Hyövälti et al. a similar reminder approach was implemented: High risk-of-GDM women were called by health care nurses encouraging the women to complete a visit for an OGTT test. Of the women who completed an OGTT 83.2 % had received a phone call compared to 49.1 % completing OGTT in the group of women not receiving phone reminders [20].
The potential of personal approach reminders, such as phone call, is significant and a personal approach can have a large effect whereas letters and emails that can be perceived less personal and committing to the patients still have an effect on screening rates; nonetheless, they are less effective than phone calls. This is in consistence with qualitative research findings; a personal approach from medical staff to the patient enhances the commitment of the patient [22].
The number of reminders is another factor influencing screening rates. In Chittleborough et al. 817 postpartum women registered in the Gestational Diabetes Mellitus Recall Register allocated at three local hospitals received up to 6 reminders over a period of 6 years. The main outcome was the return of an update form and second, the information of completion of an OGTT the previous 12 months. The proportion of women who returned the update form declined over time; however, the percentage of women who reported to have an OGTT completed remained high: 56.3 % of 429 women after the first reminder letter and 66.7 % of 26 women after the sixth reminder letter. In Vesco et al. it was reported that an implementation of staff education combined with patient reminders that included telephone call, email or letter, and in-person reminder in 7 clinics handling postpartum follow-up visits increased screening rates significantly. After the implementation, cox regression analysis revealed a HR 1.37 (95 % CI 1.10; 1.70) for completion of screening. Screening was defined as completed OGTT, or measured fasting glucose. The number of reminders influenced the completion rate: after the first reminder 80 % of the women completed a glucose test (either fasting or OGTT). According to the authors 41 % of the remaining completed after second reminder and after the third reminder 28 % of the women completed the ordered glucose test [16].
Staff education and awareness in combination with reminder systems
In the study of Vesco et al. the reminder system was combined with the implementation of an educational intervention among the health staff [16]. The study found that the implementation of staff education clearly influenced awareness of reminding women to undergo screening. The largest effect on the number of reminders sent was seen in the long term, where the proportion of women receiving any of the three mentioned kinds of reminders increased from 27 to 59 % (p < 0.0001). This is not surprising, since other studies have found that integrated study protocols that help the health professional improve the focus on, and prioritization of, high-risk patients: Ko et al. reported that more than half of their study population of obstetricians and gynaecologists (N = 306) reported having clinical guidelines addressing postpartum screening integrated in their practice. Clinical guidelines were more widespread in the clinics of frequent screeners. If the obstetricians and gynaecologists reported screening the women "always" or "most of the time", the authors defined the health professional as a "frequent screener". Likewise, health professionals who reported "sometimes", "rarely" or "never" conducting screening were characterised as "infrequent screeners" [21]. Furthermore, the obstetricians and gynaecologists identified as frequent postpartum screeners were more likely to consider screening a priority in their practice (25.5 % for frequent screeners vs. 13.3 % for non-frequent screeners, p < 0.001) [21]. In line with this, Hunsberger et al. conducted a mail survey among family medicine physicians and physicians specialized in obstetrics or gynaecology. They found that postpartum screening rates depended on whether screening was a priority of the physician or clinic (OR 4.39, 95 % CI 1.69-7.94) and, mostly, on whether the physician thought screening was the norm among this special group of patients (OR 3.66, 95 % CI 1.65-11.69). However, the study had a low response rate of only 42 % in total (N = 285), and 58 % (N = 166) of the respondents were family physicians compared to 42 % (N = 119) obstetricians or gynaecologists [23].
Discussion
In this systematic review we studied the effect of reminders for health care professionals and/or postpartum women with previous GDM, and of reminder systems integrated in secondary or primary care. For the patient reminders it seems that personal reminders have a better effect on the screening rates, and Chittleborough et al. showed that even though the response rate from each reminder sent declined, the proportion of women completing an OGTT over time remained high. Furthermore, patient reminder systems can with benefit be combined with educational interventions among the health professionals to increase awareness of the importance of using the reminder system.
A recent review on barriers for gestational diabetes screening and for the subsequent postpartum screening revealed that time and the new role as a mother were the two main barriers among patients [9]. However, few studies have so far been interested in the barriers that may exist among health professionals conducting screening tests in this patient group at high risk for diabetes. Only two studies had as the main aim of their questionnaire surveys to study barriers among health professionals: Stuebe et al. identified various barriers among the participating physicians: poor communication between primary care physicians and secondary care specialists, and poor documentation of gestational diabetes in the electronic record (filed for only 45.8 % of women), even though this was recommended [24]. In Keeley et al., the main barrier for not screening patients was that the physician did not have consultations with the patient; this was reported by 37 % of the 63 physicians that did not screen their patients. The second most common barrier for not screening was reported by 33 % and was due to non-arrangement of the test, even though there had been a consultation regarding other health problems. Lack of awareness regarding the screening issue between health professionals, and between health professional and patient, could be addressed by integrated reminder systems alerting the professionals either to contact other professionals or to call in women for screening. Integrated reminder systems could automatically send out reminders for patients as well.
The use of reminder systems is not a new intervention. Randomized controlled trials have evaluated the effect of reminder alerts for physicians to increase screening rates for colorectal cancer [25], mammography [26], and breast and cervical cancer [27]. These studies provide evidence of the improvement that implemented reminder systems can achieve for these specific diseases. This does not necessarily mean they will be effective for increasing postpartum screening rates, but it clearly leaves potential for trials replicating such interventions in order to find suitable reminder systems for health professionals handling postpartum testing of women. One study found higher screening rates for deficiencies, hypertension and diabetes among elderly patients in an ordinary primary care setting in Stockholm, Sweden, where a reminder system had been implemented among GPs. Screening rates for systolic blood pressure and cobalamin deficiency were significantly higher, whereas the rates for diabetes and anaemia were non-significantly increased among elderly patients of the GPs in the intervention group. The authors concluded that implementation of this reminder system in a primary care setting [28] could have similar benefits in a secondary care setting. An updated Cochrane Review concluded that mobile phone text messaging reminders increased attendance at healthcare appointments compared with no reminders or postal reminders [12].
One could speculate on how to improve the effectiveness of the reminders. The effect of reminders and of implemented reminder systems, whether for health professionals or for patients, could be enhanced by providing more information, e.g. through patient-specific campaigns, on the importance of postpartum screening and on the risk of developing type 2 diabetes.
Strengths and limitations
Our study has several limitations. The most obvious is the lack of randomised controlled intervention studies. A similar systematic review investigating reminders for women with previous GDM included randomised controlled trials only [29] and ended up with just one study in its analyses. In our opinion, studies that are not necessarily of interventional design can provide valuable information as well; we therefore broadened our search to include non-randomised and observational studies. However, given the lack of randomised controlled trials, we are not able to assess negative results of reminder systems or studies showing no effect. Another major limitation is the lack of long-term follow-up: four out of six studies focused on early follow-up within the first year after giving birth. This leaves us with scarce results and unable to draw solid conclusions about the long-term effect of reminders for postpartum screening. Finally, we found only one study set in primary care, whereas the other five were set in secondary care. There is large variation between countries when it comes to postpartum follow-up visits; in Denmark, women are encouraged to attend their postpartum diabetes screening at their general practitioner.
Nevertheless, a strength of our study is the variation in the types of reminders used. The included studies covered letters, telephone calls or emails for patients, and reminders attached to medical files or register-based reminders for health professionals. Regardless of the type, all reminders had an effect. This is important from a general point of view: the use of reminders and reminder systems has one aim, namely to increase awareness of the importance of attending the recommended postpartum diabetes screening. From the patient's point of view, awareness of the risk of developing diabetes has to be maintained; from the health professional's point of view, reminder systems help to keep track of patients and follow-up rates. We therefore consider it an important finding that all the articles showed an effect irrespective of the type of reminder.
Conclusions
The included studies have shown that reminder systems directed at patients and at health professionals are both successful in increasing postpartum screening rates. Nevertheless, screening rates and effectiveness have to be differentiated into early and long-term effects postpartum. Evidence is scarce for long-term effects beyond the first follow-up visit or for visits placed after the first year postpartum. The number of reminders sent to patients and the kind of reminder also influence effectiveness, in that personal reminders such as telephone calls are more effective than letters. Evidence on the optimal number of reminders is lacking; however, it seems that with repeated reminders more women will undergo a follow-up visit over time, although some women will still not attend or will refuse to attend.
Due to the lack of evidence, future research could focus on longer-term compliance with screening among women with previous GDM to assess the effect of reminder systems, ideally directed at both health professionals and patients. One could speculate whether postpartum health communication could be linked to the health examinations of the child when the woman visits the general practitioner. In this way, pop-up reminders linking the child examination and the postpartum examination would raise awareness among professionals and patients and, by that means, address one of the main barriers identified among health professionals [9]: lack of awareness and of contact with the patient, and inadequate knowledge exchange among professionals.
"year": 2015,
"sha1": "606a24b323b8d028d3f15ed7fe0603382f349870",
"oa_license": "CCBY",
"oa_url": "https://bmcresnotes.biomedcentral.com/track/pdf/10.1186/s13104-015-1334-2",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "606a24b323b8d028d3f15ed7fe0603382f349870",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A high-resolution unified observational data product of mesoscale convective systems and isolated deep convection in the United States for 2004–2017
Deep convection possesses markedly distinct properties at different spatiotemporal scales. We present an original high-resolution (4 km, hourly) unified data product of mesoscale convective systems (MCSs) and isolated deep convection (IDC) in the United States east of the Rocky Mountains and examine their climatological characteristics from 2004 to 2017. The data product is produced by applying an updated Flexible Object Tracker algorithm to hourly satellite brightness temperature, radar reflectivity, and precipitation datasets. Analysis of the data product shows that MCSs are much larger and longer-lasting than IDC, but IDC occurs about 100 times more frequently than MCSs, with a mean convective intensity comparable to that of MCSs. Hence both MCS and IDC are essential contributors to precipitation east of the Rocky Mountains, although their precipitation shows significantly different spatiotemporal characteristics. IDC precipitation concentrates in summer in the Southeast with a peak in the late afternoon, while MCS precipitation is significant in all seasons, especially for spring and summer in the Great Plains. The spatial distribution of MCS precipitation amounts varies by season, while diurnally, MCS precipitation generally peaks during nighttime except in the Southeast. Potential uncertainties and limitations of the data product are also discussed. The data product is useful for investigating the atmospheric environments and physical processes associated with different types of convective systems; quantifying the impacts of convection on hydrology, atmospheric chemistry, and severe weather events; and evaluating and improving the representation of convective processes in weather and climate models. The data product is available at https://doi.org/10.25584/1632005 (Li et al., 2020).
2020; Taszarek et al., 2020). During 1950-1994, deep convective thunderstorms produced 47 % of annual rainfall and up to 72 % of summer rainfall on average east of the Rocky Mountains (Changnon, 2001b). During the same period, both the number of severe thunderstorms and the amount of deep convection precipitation increased in most regions of the contiguous United States (CONUS) (Changnon, 2001a, b; Groisman et al., 2004). Folger and Reed (2013) found that hazards associated with thunderstorms have accounted for 57 % of annual insured catastrophe losses since 1953. Since the 1980s, the inflation-adjusted economic losses due to convective storms have increased from about USD 5 billion to about USD 20 billion in the most recent decade (https://www.iii.org/fact-statistic/facts-statistics-tornadoes-and-thunderstorms, last access: 30 March 2020). With warmer temperatures, environments favorable for hazardous convective weather are projected to become more frequent in the future (Diffenbaugh et al., 2013; Seeley and Romps, 2015), although few robust trends have emerged in recent decades (Tippett et al., 2015).
The crucial roles of deep convection motivate the need for more accurate and comprehensive datasets to improve understanding and modeling of this process and its impacts. To this end, datasets with information on the location and time of occurrence, intensity, and other properties of deep convection are necessary to understand and quantify its impacts on the hydrologic cycle, severe weather hazards, large-scale circulations, etc. While field campaign data can provide detailed information on deep convection properties, they are limited in space-time coverage for statistical analysis. A corresponding reliable long-term dataset is undoubtedly useful for model evaluation and development (Prein et al., 2017;Yang et al., 2017).
Deep convection can exist as isolated convective storms or organized storms with mesoscale structures. A mesoscale convective system (MCS) is an aggregate of convective storms organized into a larger and longer-lived system, which is the largest type of deep convection. Due to their much longer duration and broader spatial coverage, MCSs generally have stronger and longer-lasting influences on large-scale circulations than isolated deep convection (IDC) events (Bigelbach et al., 2014; Stensrud, 1996, 2013). MCSs may also produce higher rain rates, larger echo top heights, and greater water and ice masses than IDC (Rowe et al., 2011, 2012). The enhanced rain rates in MCSs might be caused by larger amounts of ice falling out and melting, higher amounts of liquid water below the melting level, and higher concentrations of smaller drops (Rowe et al., 2011, 2012). Rowe et al. (2012) also suggested that the enhanced rainfall from MCSs might be associated with more favorable environmental conditions, such as higher convective available potential energy (CAPE) and wind shear. CAPE and wind shear can impose different impacts on the initiation and evolution of IDC and MCSs (French and Parker, 2008).
Considering the significant differences between IDC and MCS events, a reliable long-term dataset not only describing the characteristics of deep convection but also separating IDC events from MCSs is useful. With the deployment of operational remote sensing platforms such as geostationary satellites and ground-based radar networks several decades ago, scientists have developed numerical algorithms to automatically detect deep convective systems and track their evolutions over large areas and for long durations on the basis of continuous measurements from remote sensors (Cintineo et al., 2013; Feng et al., 2011; Feng et al., 2012; Futyan and Del Genio, 2007; Geerts, 1998; Hodges and Thorncroft, 1997; Liu et al., 2007; Machado et al., 1998). Objective tracking of deep convection has been applied to geostationary satellite data (Cintineo et al., 2013; Sieglaff et al., 2013; Walker et al., 2012) and Next Generation Weather Radar (NEXRAD) data (Haberlie and Ashley, 2019; Pinto et al., 2015) in the United States (US) over different periods. However, a long-term climatological data product of MCS and IDC events over the CONUS has heretofore not been developed.
Here, building on the work by Feng et al. (2019), which developed an algorithm for MCS tracking and a dataset of MCSs for the eastern CONUS, we produce a unified high-resolution data product of both MCS and IDC events and analyze their characteristics east of the Rocky Mountains for 2004-2017. The data product is developed by applying an updated Flexible Object Tracker (FLEXTRKR) algorithm (Feng et al., 2018) and the Storm Labeling in Three Dimensions (SL3D) algorithm (Starzec et al., 2017) to the NCEP (National Centers for Environmental Prediction)/CPC (Climate Prediction Center) L3 4 km Global Merged IR V1 brightness temperature (T b ) dataset (Janowiak et al., 2017), the 3-D Gridded NEXRAD Radar (GridRad) dataset, the NCEP Stage IV precipitation dataset (Lin and Mitchell, 2005), and melting level heights from ERA5 (ECMWF, 2018). Section 2 describes the updated FLEXTRKR and SL3D algorithms in detail, as well as the source datasets used by the algorithms. In Sect. 3, we first compare the climatological characteristics of MCS and IDC events based on the MCS-IDC data product. Then, as an application of the data product, we examine the spatiotemporal precipitation characteristics of MCS and IDC events. In Sect. 4, we discuss the uncertainties and limitations of the data product. Section 5 provides the availability information for the data product. Finally, we summarize the study in Sect. 6.
Source datasets and algorithms
2.1 Source datasets
Merged 4 km infrared brightness temperature dataset
In this study, we identify cold clouds associated with MCSs and IDC by using the NOAA NCEP/CPC L3 half-hourly 4 km Global Merged IR V1 infrared T b data for 2004-2017 (Janowiak et al., 2017). The dataset is a combination of various geostationary IR satellites with parallax correction and viewing angle correction, therefore providing continuous global coverage from 60° S to 60° N with a horizontal resolution of about 4 km and a temporal resolution of 0.5 h (Janowiak et al., 2001). We only use the hourly T b data in the FLEXTRKR algorithm discussed below, as all other datasets are only available at an hourly interval.
Three-dimensional Gridded NEXRAD Radar (GridRad) dataset
GridRad is an hourly 3-D radar reflectivity (Z H ) mosaic combining individual NEXRAD radar observations to a Cartesian gridded dataset, with a horizontal resolution of 0.02° × 0.02° and a vertical resolution of 1 km. The dataset covers 115 to 69° W in longitude, 25 to 49° N in latitude, and 1 to 24 km in altitude above sea level (a.s.l.). Homeyer and Bowman (2017) produced the dataset by applying a four-dimensional binning procedure to merge level-2 Z H data from 125 National Weather Service (NWS) NEXRAD weather radars to GridRad grid boxes at analysis times. Only the level-2 observations within 300 km of each radar and 3.8 min of the analysis time were used in the binning procedure. The GridRad Z H was the weighted average of the level-2 observations within the GridRad grid boxes to reduce the potential loss of information. The weight calculation of each level-2 observation followed a Gaussian scheme in both space and time. Observation weight was negatively correlated with the distance of the observation from the source radar and the time difference between the observation and analysis time. The GridRad dataset provides the total weight of the level-2 observations within each GridRad grid box, which is useful for quality control. In addition, the number of level-2 radar observations (N obs ) and the number of level-2 radar observations with echoes (N echo ) within each GridRad grid box around analysis times (±3.8 min) are also available in the GridRad dataset.
First, we convert the GridRad logarithmic reflectivity Z H to linear reflectivity (Z, in mm⁶ m⁻³). We then set Z in grid boxes with radar observations but no echoes (N obs > 0 but Z H = NAN; NAN, Not a Number) to 0 (Z = 0). The physical interpretation is that NEXRAD scans those grid boxes, but no detectable hydrometeors return any echo. The primary motivation of this procedure is to avoid a reduction in the number of valid reflectivity values after re-gridding, as the ESMF bilinear method treats a destination point as NAN as long as there is one NAN value among the source points. A common scenario is at the edge between hydrometeor echoes and clear air. Leaving Z of those grid boxes having radar observations but no echoes as NAN would cause all surrounding destination points to become NAN even though all other source points have valid Z values, which would reduce the number of re-gridded valid Z H (Z H ≠ NAN) values by about 20 % for 2004-2017. After the "bilinear" regridding of Z, we convert the linear reflectivity Z back to the logarithmic reflectivity Z H and set Z H equal to NAN for those grid boxes with Z equal to 0. The resulting NAN values are acceptable and will not affect the SL3D and FLEXTRKR algorithms discussed below.
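To make the no-echo handling concrete, the sketch below illustrates the conversion and regridding steps in Python. It is a minimal illustration rather than the production code: the function and variable names are hypothetical, and scipy's RegularGridInterpolator stands in for the ESMF bilinear regridder used in this study.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def regrid_reflectivity(zh_dbz, n_obs, src_lat, src_lon, dst_lat, dst_lon):
    """Bilinearly regrid radar reflectivity while preserving scanned no-echo boxes.

    zh_dbz : 2-D logarithmic reflectivity (dBZ), NaN where no echo was detected
    n_obs  : 2-D count of level-2 observations per grid box
    """
    # dBZ -> linear reflectivity Z (mm^6 m^-3)
    z_lin = 10.0 ** (zh_dbz / 10.0)
    # Grid boxes scanned by radar but without echoes are set to Z = 0,
    # so they do not propagate NaNs into neighbouring destination points.
    z_lin = np.where((n_obs > 0) & np.isnan(zh_dbz), 0.0, z_lin)

    # Stand-in for the ESMF "bilinear" regridding step.
    interp = RegularGridInterpolator((src_lat, src_lon), z_lin,
                                     bounds_error=False, fill_value=np.nan)
    dlon, dlat = np.meshgrid(dst_lon, dst_lat)
    z_dst = interp(np.column_stack([dlat.ravel(), dlon.ravel()])).reshape(dlat.shape)

    # Back to dBZ; re-flag echo-free (Z == 0) boxes as NaN.
    with np.errstate(divide="ignore"):
        zh_dst = 10.0 * np.log10(z_dst)
    zh_dst[z_dst == 0.0] = np.nan
    return zh_dst
```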
NCEP Stage IV precipitation dataset
The NCEP Stage IV precipitation dataset provides hourly rain accumulations over polar stereographic grids across the CONUS with a resolution of 4.76 km at 60° N starting in 2002. The dataset is a mosaic of precipitation estimates from 12 River Forecast Centers (RFCs) over the CONUS (Stage IV data for Alaska and Puerto Rico are archived separately) (Lin and Mitchell, 2005; Nelson et al., 2016). Each RFC produces its precipitation estimates through a combination of radar and rain gauge data based on the multisensor precipitation estimator (MPE) algorithm (for most RFCs), the P3 algorithm (for the Arkansas-Red basin RFC), or the Mountain Mapper algorithm (for the California-Nevada, Northwest, and Colorado-basin RFCs with missing radar-derived estimates) (Nelson et al., 2016). Some manual quality control steps are conducted to remove bad radar and gauge data before radar-gauge merging (Lin and Mitchell, 2005; Nelson et al., 2016). The Stage IV dataset has been widely used as a basis to evaluate model simulations, satellite precipitation estimates, and radar precipitation estimates (Davis et al., 2006; Gourley et al., 2011; Kalinga and Gan, 2010; Lopez, 2011; Yuan et al., 2008). Here, we obtain the hourly Stage IV precipitation for 2004-2017 from the NCAR/UCAR RDA (https://rda.ucar.edu/datasets/ds507.5/, last access: 28 December 2019). We regrid the original Stage IV precipitation from the polar stereographic grids to the 4 km satellite Merged IR grids by using the "neareststod" method from the ESMF "NCL" module (https://www.ncl.ucar.edu/Applications/ESMF.shtml, last access: 30 January 2020). The neareststod method maps each destination point to the closest source point.
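As a rough analogue of the ESMF neareststod step, each destination point can be mapped to its closest source point with a KD-tree. The sketch below is illustrative only; it works directly in latitude-longitude space, which is a simplification rather than a true great-circle nearest-neighbour search, and the function and variable names are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_stod(src_lat, src_lon, src_precip, dst_lat, dst_lon):
    """Map each destination point to the closest source point, an approximate
    analogue of the ESMF 'neareststod' method used for Stage IV precipitation."""
    src_pts = np.column_stack([src_lat.ravel(), src_lon.ravel()])
    dst_pts = np.column_stack([dst_lat.ravel(), dst_lon.ravel()])
    _, idx = cKDTree(src_pts).query(dst_pts)   # index of the nearest source point
    return src_precip.ravel()[idx].reshape(dst_lat.shape)
```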
ERA5 melting level dataset
Melting hydrometeors produce intense radar echoes in a horizontal layer about 0.5 km thick located just below the 0 °C level (melting level), which is known as the "bright band" (Giangrande et al., 2008; Steiner et al., 1995). The bright-band signatures are often pronounced for stratiform precipitation, while convective precipitation produces well-defined vertical cores of maximum reflectivity, diluting the bright-band signals (Giangrande et al., 2008; Steiner et al., 1995). Therefore, the SL3D algorithm described below examines Z H above the melting level to avoid the false identification of stratiform rain as convective (Starzec et al., 2017). In this study, we use the hourly melting level heights from the ERA5 reanalysis dataset.
ERA5, as the successor to ERA-Interim, contains many modeling improvements and more observations based on 4D-Var data assimilation using Cycle 41r2 of the Integrated Forecasting System (IFS) at the European Centre for Medium-Range Weather Forecasts (ECMWF). ERA5 provides hourly estimates of atmospheric variables at a horizontal resolution of 31 km and 137 vertical levels from the surface to 0.01 hPa from 1979 to the present (Hersbach et al., 2019). We obtain the ERA5 "Zero degree level" (melting level height above ground) for 2004-2017 and "Orography" (geopotential at the ground surface) from the Climate Data Store (CDS) disks (ECMWF, 2018). The CDS-archived ERA5 variables have been interpolated to regular latitude-longitude grids with a resolution of 0.25° × 0.25°. We calculate melting level heights above sea level from the Zero degree level and Orography (divided by 9.80665 m s −2 to obtain the ground surface height). Finally, we regrid the hourly 0.25° melting level heights above sea level to the 4 km satellite Merged IR grids by using the ESMF neareststod method.
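The conversion from the two CDS variables to melting level height above sea level is simple arithmetic; a minimal sketch, assuming the fields have already been read into NumPy arrays, is given below.

```python
G0 = 9.80665  # standard gravity (m s-2)

def melting_level_asl(zero_degree_level_agl, orography_geopotential):
    """Melting level height above sea level (m).

    zero_degree_level_agl  : ERA5 'Zero degree level' (m above ground)
    orography_geopotential : ERA5 'Orography' (surface geopotential, m2 s-2)
    """
    surface_height = orography_geopotential / G0   # ground height above sea level (m)
    return zero_degree_level_agl + surface_height
```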
We summarize the basic information of the four types of source datasets in Table A1. We also define our data product domain as 110-70° W in longitude and 25-51° N in latitude (Fig. 1), which covers the US east of the Rocky Mountains and excludes the western US. The domain coverage takes into consideration the availability of the GridRad radar dataset, the relatively scarce radar coverage over the Rocky Mountains, and associated uncertainties in radar-based Stage IV precipitation estimates in complex terrains (Nelson et al., 2016). As shown in Fig. 1a, we further define four regions in the domain: the Northern Great Plains (NGP), the Southern Great Plains (SGP), the Southeast (SE), and the Northeast (NE).
SL3D algorithm
The SL3D algorithm exploits GridRad Z H to classify each grid column with radar echoes into five categories: convective, precipitating stratiform, non-precipitating stratiform, anvil, and convective updraft (Starzec et al., 2017). SL3D identifies these five categories successively following the criteria listed in Table A2. We run the SL3D algorithm for 2004-2017 by using the re-gridded ERA5 melting level heights and GridRad Z H dataset described in Sect. 2.1. Figure 2e shows an example of the SL3D classification results based on GridRad Z H (Fig. 2d) at 2005-07-04T03:00:00Z. A sizable convective system with intense radar echoes and precipitation is observed in Kansas, and many isolated convection events are also observed in the Southeast. The SL3D classification results will be used in the following FLEXTRKR algorithm to identify convective core features (CCFs, continuous updraft or convective areas with precipitation > 0 mm h −1 , which are used to indicate the existence of convective activity in the IDC definition; red regions in Fig. 3) and precipitation features (PFs, continuous updraft, convective, or precipitating stratiform areas with precipitation > 1 mm h −1 ; green areas in Fig. 3, which are used to denote the sizes of convective systems in the MCS and IDC definitions).
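The CCF and PF definitions are essentially connected-component operations on the SL3D classification and precipitation fields. The following sketch shows one way such features could be labeled; the category codes follow the Figure 2 legend, while the choice of 8-connectivity and the function names are assumptions rather than documented details of FLEXTRKR.

```python
import numpy as np
from scipy import ndimage

# SL3D category codes as used in the pixel-level files (per the Fig. 2e legend):
UPDRAFT, CONVECTIVE, PRECIP_STRATIFORM = 1, 2, 3

def label_ccf_pf(sl3d, precip):
    """Label convective core features (CCFs) and precipitation features (PFs).

    sl3d   : 2-D array of SL3D categories
    precip : 2-D array of hourly precipitation (mm h-1)
    """
    ccf_mask = np.isin(sl3d, [UPDRAFT, CONVECTIVE]) & (precip > 0.0)
    pf_mask = np.isin(sl3d, [UPDRAFT, CONVECTIVE, PRECIP_STRATIFORM]) & (precip > 1.0)

    # 8-connectivity so that diagonally touching pixels belong to the same feature
    structure = np.ones((3, 3), dtype=int)
    ccf_labels, n_ccf = ndimage.label(ccf_mask, structure=structure)
    pf_labels, n_pf = ndimage.label(pf_mask, structure=structure)
    return ccf_labels, n_ccf, pf_labels, n_pf
```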
MCS and IDC identification and tracking
The FLEXTRKR algorithm was first developed and used by Feng et al. (2019) to track MCSs. In this study, we further update the algorithm so that it can identify and track MCS and IDC events simultaneously. Figure 3 displays the schematic of FLEXTRKR. The first step is to identify cold cloud systems (CCSs; continuous areas with T b < 241 K) at each hour by applying a multiple-T b -threshold "detect and spread" approach (Futyan and Del Genio, 2007). We search for cold cloud cores with T b < 225 K and spread the cold cloud cores to contiguous areas with T b < 241 K. Cloud systems that do not contain a cold cloud core but have T b < 241 K are also labeled as long as they form continuous areas of at least 64 km 2 (4 pixels). In addition, CCSs that share the same coherent precipitation feature are combined into a single CCS. A coherent precipitation feature is defined as a continuous area with smoothed Z H at 2 km > 28 dBZ (if Z H is not available at 2 km, Z H at 3 km is used instead if available). We use a 5 × 5 pixel moving window to smooth Z H . Figure 2b shows an example of the CCSs identified in the first step based on T b at 2005-07-04T03:00:00Z (4 July 2005, 03:00:00 UTC). "Cloud 1" in Fig. 2b corresponds to a large area of low T b in the central US (Fig. 2a).
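A minimal sketch of this detect-and-spread step is shown below, assuming 4 km pixels (16 km2 each) and 8-connectivity; the real FLEXTRKR implementation also merges CCSs that share a coherent precipitation feature, which is omitted here, and the names are illustrative only.

```python
import numpy as np
from scipy import ndimage

PIXEL_AREA_KM2 = 16.0    # ~4 km x 4 km pixels
MIN_AREA_KM2 = 64.0      # minimum size for core-free CCSs (4 pixels)

def identify_ccs(tb):
    """Detect-and-spread identification of cold cloud systems (CCSs).

    tb : 2-D brightness temperature (K). Returns a labeled CCS array (0 = no CCS).
    """
    structure = np.ones((3, 3), dtype=int)
    warm_labels, n = ndimage.label(tb < 241.0, structure=structure)  # candidate systems

    ccs = np.zeros_like(warm_labels)
    next_id = 1
    for lab in range(1, n + 1):
        component = warm_labels == lab
        area = component.sum() * PIXEL_AREA_KM2
        has_core = np.any(tb[component] < 225.0)
        # Keep systems containing a cold core, or core-free systems of >= 64 km2
        if has_core or area >= MIN_AREA_KM2:
            ccs[component] = next_id
            next_id += 1
    return ccs
```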
In step 2, CCSs between two consecutive hours are linked if their spatial overlaps are > 50 %. "Linked" means the CCSs are considered to be from the same cloud system. FLEXTRKR produces tracks by extending the link between two consecutive time steps to the entire tracking period, as shown in Fig. 3. Each track represents the life cycle of a cloud system. We calculate a series of CCS summary statistics associated with each track, such as CCS-based lifetime of the track (the duration of the track when CCSs are present), CCS area, CCS major axis length, CCS propagation speed, etc. In addition, SL3D classification (Fig. 2e) and Stage IV precipitation (Fig. 2c) within the tracked CCS are associated with the tracks and their merges and splits (described below). Then, we can obtain CCF and PF statistics of each track, such as convective and stratiform area, precipitation intensity and coverage, radar-derived echo-top heights, PF major axis length, CCF major axis length, intense convective cells (convective cells with column maximum reflectivity ≥ 45 dBZ and precipitation > 1 mm h −1 ; pink areas in Fig. 3, which are used to indicate intense convective activity in the following MCS definition), etc.
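The overlap test that links CCSs between consecutive hours can be evaluated from the two label maps with a joint histogram, as in the hypothetical sketch below. Normalizing the overlap by the smaller of the two CCS areas is an assumption here; the exact normalization used by FLEXTRKR may differ.

```python
import numpy as np

def link_ccs(labels_t0, labels_t1, min_overlap=0.5):
    """Return (id_t0, id_t1) pairs whose CCSs overlap by more than min_overlap."""
    links = []
    both = (labels_t0 > 0) & (labels_t1 > 0)
    if not np.any(both):
        return links
    pairs, counts = np.unique(
        np.stack([labels_t0[both], labels_t1[both]]), axis=1, return_counts=True)
    area_t0 = np.bincount(labels_t0.ravel())
    area_t1 = np.bincount(labels_t1.ravel())
    for (i, j), n_overlap in zip(pairs.T, counts):
        # Overlap fraction relative to the smaller of the two CCSs (assumption)
        frac = n_overlap / min(area_t0[i], area_t1[j])
        if frac > min_overlap:
            links.append((int(i), int(j)))
    return links
```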
Merging and splitting refer to situations when two or more CCSs are linked to one CCS between consecutive hours (Figs. A1 and A2). A track associated with the largest CCS is defined as the main track (Fig. A3), and smaller tracks from merges or splits are regarded as parts of the main track when calculating PF and CCF statistics. In the algorithm, we require that a merge or split track associated with an MCS-IDC event must have a CCS-based lifetime of no more than 5 h. Otherwise, we treat it as an independent track.
The identification of MCS and IDC is based on the CCS, PF, and CCF statistics of the tracks. Following the definition of MCSs by Feng et al. (2019) (Fig. 4), we define a track as an MCS if it satisfies the following criteria: (1) there is at least one pixel of cold cloud core during the whole life cycle of the track, (2) CCS areas associated with the track surpass 60 000 km 2 for more than 6 continuous hours, and (3) PF major axis length exceeding 100 km and intense convective cell areas of at least 16 km 2 exist for more than 5 consecutive hours. Considering the lack of a strict and universal MCS definition (Geerts et al., 2017; Haberlie and Ashley, 2019; Pinto et al., 2015; Prein et al., 2017), we evaluate the impact of different MCS definition criteria on the data product in Sect. 4.4. For the non-MCS tracks, we further identify IDC with the following two criteria (Fig. 4): (1) a CCS with at least 64 km 2 (4 pixels) is detected, and (2) there is at least 1 h during the life cycle of the track when PF and CCF are present (PF and CCF major axis lengths ≥ 4 km). In addition, for each IDC event, the CCS-based lifetime of associated merge and split tracks cannot surpass the lifetime of the IDC event. Here, the IDC criteria denote a lower limit on the convective signals that we can identify by using the FLEXTRKR algorithm and the given source datasets. Potential uncertainties associated with this limit are discussed in Sect. 4.3.
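Translated into code, the MCS and IDC criteria reduce to boolean tests on the per-track time series stored in the data product. The sketch below is schematic: the dictionary keys are hypothetical placeholders for the stored CCS, PF, and CCF statistics, and the additional merge/split lifetime conditions are omitted.

```python
import numpy as np

def longest_consecutive(mask):
    """Length of the longest run of consecutive True values in a 1-D boolean array."""
    best = run = 0
    for flag in mask:
        run = run + 1 if flag else 0
        best = max(best, run)
    return best

def classify_track(track):
    """Classify a tracked cloud system as 'MCS', 'IDC', or None (neither).

    `track` is assumed to be a dict of hourly 1-D arrays with hypothetical keys:
    ccs_area (km2), pf_major_axis (km), ccf_major_axis (km),
    intense_cell_area (km2), has_cold_core (bool).
    """
    is_mcs = (np.any(track["has_cold_core"]) and
              longest_consecutive(track["ccs_area"] > 60_000) > 6 and
              longest_consecutive((track["pf_major_axis"] > 100) &
                                  (track["intense_cell_area"] >= 16)) > 5)
    if is_mcs:
        return "MCS"
    if np.any(track["ccs_area"] >= 64) and \
       np.any((track["pf_major_axis"] >= 4) & (track["ccf_major_axis"] >= 4)):
        return "IDC"
    return None
```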
Note that while we designate the term IDC to differentiate smaller convective storms from MCSs, there are subcategories of deep convection within IDC. For example, multicellular convection systems that do not grow large enough or last long enough to meet our MCS definition are defined as IDC in our study, even though they are not necessarily "isolated." Users of the data product can further separate subcategories within IDC using the derived CCF statistics information to address specific science questions or research objectives.
Finally, the FLEXTRKR algorithm maps MCS-IDC track information back to the domain pixels. Figure 2f-i give an example of the pixel-level MCS-IDC information at 2005-07-04T03:00:00Z. Figure 2f displays the spatial coverages of MCS-IDC tracks at that time at pixel scale and the corresponding unique numbers of these tracks. From Fig. 2f, we know whether a pixel belongs to an MCS-IDC track and the number of the track if the pixel belongs to a track. We can further determine whether the track is an MCS or IDC event from Fig. 2g, which shows the types (MCS or IDC) of the tracks in Fig. 2f at pixel scale. Figure 2h and i are similar to Fig. 2f and g, respectively. The difference is that Fig. 2h and i only show pixels with precipitation > 1 mm h −1 in that hour. Together, the track-based CCS, PF, and CCF statistics of MCS and IDC events and the pixel-level dataset constitute the unified high-resolution MCS-IDC data product we develop in this study. Original T b (Fig. 2a), Stage IV precipitation (Fig. 2c), GridRad Z H at 2 km (Fig. 2d), and GridRad-derived echo-top heights are also archived in the data product.
We run the FLEXTRKR algorithm separately for each year from 2004 to 2017. The starting time of each continuous tracking is 00:00 Z on 1 January, and the ending time is 23:00 Z on 31 December. Because winter has the fewest deep convection events, very few MCS-IDC events extend between two different years based on our investigation. Also, the lifetimes of MCS-IDC events are much shorter compared to our tracking period. Therefore, running FLEXTRKR separately for each year rather than continuously for the whole period has little impact on the MCS-IDC statistics.
Climatological characteristics of MCS and IDC events
According to the MCS-IDC data product, we identify on average 454 MCSs and 45 346 IDC events per year east of the Rocky Mountains during 2004-2017. We compare the climatological characteristics of MCS and IDC events in Table 1. MCSs have much longer lifetimes than IDC, averaging 21.1 h (CCS-based) and 18.9 h (PF-based), compared to 2.1 h (CCS-based) and 1.7 h (PF-based) for IDC. Here, PF-based lifetime refers to the lifetime determined by the MCS-IDC PFs. Only those hours with a significant PF present (PF major axis length > 20 km for MCSs; ≥ 4 km for IDC) are counted during the life cycle of an MCS-IDC event, which represent the active convective period of a storm. We find that MCSs have the longest PF lifetime in winter (21.3 h) and the shortest in summer (17.9 h). In comparison, IDC has the longest PF lifetime in winter (1.9 h), but the summer lifetime (1.7 h) is comparable to spring and autumn. We examine the seasonal cumulative distribution functions (CDFs) of PF lifetimes for MCS and IDC events for 2004-2017 in Fig. A4. The results show that winter has the largest fraction of MCS-IDC events with longer lifetimes compared to other seasons.
As expected, MCSs are much larger than IDC events in spatial coverage and precipitation area, as shown in Table 1 by the comparisons of CCS area, PF major axis length, PF convective or stratiform area, CCF area, and CCF major axis length. Generally, on average, winter MCS-IDC events are the largest in overall spatial coverage (both CCS and PF areas), while summer has the smallest. The larger and longer-lived MCSs in winter than in summer were also observed in the Southeast US in 1994-1995 by Geerts (1998). The remarkable seasonal difference in MCS-IDC overall spatial coverage is mainly due to stratiform areas. Convective areas are much smaller than stratiform areas. The PF stratiform area of MCSs in winter is 90 513 km 2 , 2.4 times larger than the area of 26 599 km 2 in summer, but the PF convective area of MCSs in winter is 7293 km 2 , 14 % smaller than 8465 km 2 in summer. Similarly, the IDC PF stratiform area in winter is 3182 km 2 , 2.8 times larger than 828 km 2 in summer, while the IDC PF convective area in winter is 528 km 2 , slightly larger (9 %) than 483 km 2 in summer. Unlike stratiform areas with the largest value in winter, convective activity is the most intense in summer as indicated by the PF mean convective 20 dBZ echo-top height in Table 1. The most intense convective activity reflects the strongest atmospheric thermal instability due to the strongest solar radiation in summer. We further confirm this point by investigating the MCS-IDC initiation time. As shown in Fig. A5, most MCS and IDC events initiate in the afternoon of summer when atmospheric instability is the strongest, consistent with Geerts (1998), who found warm-season MCSs generally initiated at 12:00-14:00 local time in the Southeast US.
Although MCSs are much larger than IDC events in spatial coverage, their mean convective 20 dBZ echo-top heights, which can be used to represent their mean convective intensities, are similar in Table 1. And their PF mean convective and stratiform rain rates are also comparable. PF mean convective and stratiform rain rates show significant seasonal variations for both MCS and IDC events. Summer MCS and IDC events have the largest rain rates, followed by autumn. Winter has the lowest rain rates compared to other seasons.
The high-resolution nature of the MCS-IDC data product enables a detailed examination of the 3-D evolutions of MCS-IDC events to investigate the relationships between atmospheric environments and MCS-IDC characteristics and to examine the impacts of MCSs and IDC on hydrology, atmospheric chemistry, and severe weather hazards. The data product can also be used to evaluate and improve the representation of MCS-IDC processes in weather and climate models. As an example of the application of the MCS-IDC data product, in Sect. 3.2, we investigate the contributions of MCS and IDC events to precipitation east of the Rocky Mountains for 2004-2017.
(Figure 2 caption, continued: panel (e) shows the SL3D classification results, with categories (1) convective updraft, (2) convective, (3) precipitating stratiform, (4) non-precipitating stratiform, and (5) anvil; panel (f) displays the track numbers to which pixels belong, adjusted here for clarity rather than showing the actual values stored in the MCS-IDC data product, similar to "PF track number"; panel (g) indicates whether the pixels belong to MCS (marked as 1) or IDC (marked as 2) tracks, corresponding to the tracks shown in (f); panels (h) and (i) are as (f) and (g), respectively, but only for pixels with precipitation > 1 mm h −1 . All these variables are stored in the FLEXTRKR hourly pixel-level output files.)
Precipitation characteristics from different sources
Here we only consider hourly data with precipitation > 1 mm h −1 . At 4 km resolution, precipitation less than 1 mm h −1 accounts for less than 19 % of the total precipitation, and the uncertainty of radar-derived precipitation at such low rainfall intensity is typically large. Including hourly data with precipitation ≤ 1 mm h −1 in the calculation will change the values shown in this study but will affect neither the comparison among MCS, IDC, and nonconvective (NC) precipitation nor their spatial distribution patterns. Here, NC precipitation refers to precipitation not associated with any MCS or IDC events and is mainly from stratiform rain. Total precipitation is the sum of MCS, IDC, and NC precipitation. It is noteworthy that NC precipitation may contain some convection-associated rain due to the limitation of the source datasets and the algorithms used in this study. More relevant details are discussed in Sects. 3.2.3 and 4.
Annual spatial distributions of different types of precipitation
According to the MCS-IDC data product, the annual average total precipitation east of the Rocky Mountains in the US (US grid cells in Fig. 1) is 691 mm between 2004 and 2017, with a mean precipitation intensity of 3.6 mm h −1 . MCSs contribute the most to the total precipitation with a fraction of 45 %, followed by NC (30 %) and IDC (25 %). The mean precipitation intensities of MCSs (4.4 mm h −1 ) and IDC (3.8 mm h −1 ) are much larger than that of NC (2.7 mm h −1 ). Our MCS precipitation fraction (45 %) is higher than the ∼ 30 % reported by Haberlie and Ashley (2019) over the ECONUS, owing to their different algorithm and stricter criteria to track and define MCSs; their MCS precipitation fractions are therefore generally lower than our results. IDC precipitation is concentrated in the SE and NE coastal areas, with peak values in Florida. NC precipitation is substantial in the eastern and southern regions with ample moisture supply and contributes over 35 % to the total precipitation across most of the NE region. The coastal area near Louisiana, which is significantly affected by all three types of precipitation, has the most total precipitation, with annual amounts of over 1350 mm. The annual total precipitation amounts in most regions of the SE also exceed 1050 mm due to MCS contributions. While the total precipitation amounts in most regions of Florida are also over 1050 mm, they are mainly attributed to IDC.
The spatial patterns of precipitation intensities are somewhat different from those of precipitation amounts (Fig. 5). Generally, the southern regions, especially in the coastal areas, have larger precipitation intensities than the northern areas. The MCS precipitation intensities are the largest in Texas, Louisiana, Oklahoma, and Kansas, significantly shifting west compared to MCS precipitation amounts. Unlike IDC precipitation amounts concentrating in the SE and NE coastal areas, IDC precipitation intensities are the largest over the SGP and SE. IDC precipitation intensities over the NE are much smaller compared to the SGP and SE, similar to NC precipitation intensities. We summarize the annual mean precipitation amounts and intensities of different types of precipitation in the NGP, SGP, SE, and NE in Table A3.
The distributions of MCS-IDC precipitation amounts are mainly determined by the distributions of MCS-IDC hours (Figs. 5 and 7). Here, the MCS-IDC hour of a grid cell during a period is the number of hours when any MCS-IDC events produce > 1 mm hourly accumulated rainfall in the grid cell. The distributions of MCS-IDC precipitation intensities, although not the main factor, can also affect the distributions of MCS-IDC precipitation amounts. For example, the maximum MCS hours are located around Missouri (Fig. 7a), but the maximum MCS precipitation amount is in the coastal area of Louisiana (Fig. 5c). The larger MCS precipitation intensities in the southern regions contribute more to the MCS precipitation amount in the southern US. In addition, a large number of IDC events (IDC hours > 60 h yr −1 ) occur in the NE region along the Appalachian Mountains (Fig. 7b), but IDC in that region only contributes to 20 %-30 % of the total precipitation amount (Fig. 6b) due to the low precipitation intensities (Fig. 5f).
The MCS precipitation center agrees with the study of Haberlie and Ashley (2019), showing different MCS precipitation distributions between warm and cold seasons over the ECONUS. Spring and summer have much larger MCS precipitation amounts (∼ 100 mm) than autumn (∼ 62 mm) and winter (∼ 50 mm). The mean MCS precipitation amount in spring is close to that in summer. However, the total number of identified MCSs in summer (212) is much higher than that in spring (122), as discussed in Sect. 3.1; the mean MCS precipitation intensity in summer (5.2 mm h −1 ) is also larger than that in spring (4.1 mm h −1 ) (Fig. A7). The inconsistency is because MCSs in spring occur in more favorable large-scale environments with strong baroclinic forcing and low-level moisture convergence. As a result, spring MCSs are larger and longer-lasting, and they produce more rainfall per MCS event compared to those in summer (Table 1), compensating for the smaller number of MCS events and lower precipitation intensities in spring. The fractions of MCS precipitation amounts are generally > 35 % over the Northern and Southern Great Plains in spring and summer and can reach up to over 70 % within the MCS precipitation center (Fig. A6a-b). The results are roughly consistent with Fritsch et al. (1986), which showed that MCSs accounted for about 30 %-70 % of the warm-season precipitation in the central United States.
The IDC precipitation amounts reach a maximum in summer, centered in the coastal areas of the SE, where IDC precipitation contributes to more than 40 % of the total precipitation amounts (Figs. 8i-l and A6e-h). Winter has the least IDC precipitation. Areas of high IDC precipitation do not show much seasonal variability, suggesting that IDC is constrained by local conditions such as moisture availability, local solar radiation, and land-atmosphere interactions. The NC precipitation amount also peaks in summer, followed by autumn, particularly in the NE (Fig. 8m-p). However, because both MCS and IDC precipitation amounts are very high in summer, the fraction of the NC precipitation amount in summer (28 %) is smaller than that of winter (32 %) (Fig. A6i-l). The winter NC precipitation center occurs in the SE coastal areas (Fig. 8p).
The precipitation intensities of all three types peak in summer and reach minimums in winter (Fig. A7). In each season, precipitation intensities in the south are larger than those in the north, except for MCS precipitation intensities in summer, which maximize in Oklahoma. We summarize the mean seasonal precipitation amounts and intensities of different types of precipitation over the four climate regions of Fig. 1 in Table A4.
Diurnal cycles of different types of precipitation
Figure 9 shows the monthly mean diurnal cycles of precipitation amounts from MCSs, IDC, and NC in the NGP, SGP, SE, and NE, respectively. Generally, MCS precipitation peaks during nighttime in the NGP, SGP, and NE. The seasonal shift of the peaks from spring in the SGP to summer in the NGP reflects the northward migration of the MCS precipitation center in the Great Plains (Fig. 8e and f).
The SE has significantly different diurnal cycles of MCS precipitation from other regions. In spring, SE MCS precipitation is mainly located in the western areas (Fig. 8e), showing similar diurnal characteristics as the SGP MCS precipitation but with peaks in the early morning and late afternoon (Fig. 9d and g). Besides, the SGP MCS precipitation peaks in May (Fig. 9d), while SE peaks in April (Fig. 9g), suggesting that the MCS precipitation center first appears in the western SE regions (Alabama, Mississippi, and Louisiana) in April and then moves northwards to Arkansas in May. In summer, the SE MCS precipitation diurnal cycles are more like those of IDC (Fig. 9g and h), peaking in the late afternoon and much different from those in the Great Plains. The significantly different precipitation diurnal variations between the Great Plains and SE were also identified by Haberlie and Ashley (2019). We find that most summer MCS precipitation over the SE occurs near the coastal areas (Fig. 8f), far from the MCS precipitation center in northern Missouri and Iowa, suggesting either a different MCS genesis mechanism in the SE from those in the SGP and NGP or long-duration deep convective systems showing MCS characteristics (Geerts, 1998). In autumn, the SE MCS precipitation peaks in the morning (Fig. 9g). The diurnal cycle of MCS precipitation in September shows mixed features of summer and autumn with peaks in both the morning and the afternoon. In winter months, the diurnal cycle of the SE MCS precipitation shifts from the autumn feature to the spring feature, with peaks shifting from the morning to the afternoon. The distinct diurnal cycles of SE MCS precipitation in different seasons in Fig. 9g are roughly consistent with the corresponding seasonal diurnal variations in MCS occurrence frequencies from Geerts (1998), where the occurrence time of an MCS was defined as the central time between the initiation and decay of the MCS.
The diurnal cycles of IDC precipitation are consistent in all regions (Fig. 9b, e, h, and k), peaking in the late afternoon in summer (Tian et al., 2005), again reflecting the impact of local instability driven by the solar forcing on IDC development. NC precipitation (Fig. 9c, f, i, and l) shows some diurnal cycle characteristics similar to IDC precipitation. It may be caused by the limitation of the temporal resolution of the datasets used in the FLEXTRKR algorithm. Weak IDC events that are shorter than 1 h could be missed by GridRad in identifying CCFs, as GridRad Z H only considers reflectivities within ±3.8 min of the analysis time. These weak IDC could be aliased to NC precipitation, therefore showing some similar diurnal cycles as IDC. Another possible reason is that the FLEXTRKR algorithm may miss some parts of IDC clouds with T b ≥ 241 K, which are then classified as NC, so the NC precipitation exhibits some IDC characteristics.
The monthly diurnal cycles of precipitation intensities for MCSs, IDC, and NC are generally similar among all regions, peaking in the late afternoon and early morning in the warm season (Fig. A8).
Uncertainties from source datasets
The NCEP/CPC L3 4 km Global Merged IR V1 T b dataset has been view-angle-corrected and re-navigated for parallax (Janowiak et al., 2001) to reduce errors. However, the US continent is covered by two series of geostationary IR satellites (GOES-W and GOES-E). During the production of the T b dataset, the value with the smaller zenith angle is adopted when duplicate data are available in a grid pixel. Measurements from different satellites may be inconsistent, but Janowiak et al. (2001) suggest this type of inconsistency is minor.
For the GridRad radar dataset, some bad volumes have been removed during the production of GridRad Z H . We further filter out potential low-quality observations, scanning artifacts, and non-meteorological echoes from biological scatterers and artifacts following the approaches of Homeyer and Bowman (2017). However, there is another source of error from anomalous propagation caused by non-standard refraction of radar signals in the lower atmosphere, which cannot be mitigated during the filtering procedure. Non-standard refraction can result in underestimation or overestimation of the true radar beam altitude, thus affecting the location of radar reflectivity for binning. Estimating the corresponding uncertainties is beyond the scope of this study. However, anomalous propagation is typically limited to radar beams traveling long distances in the boundary layer.
(Figure 8 caption: Distributions of annual mean seasonal precipitation amounts for different types of precipitation for 2004-2017. The first row is for total precipitation, the second row for MCS precipitation, the third row for IDC precipitation, and the fourth row for NC precipitation. The first column shows spring precipitation, the second column summer, the third column autumn, and the fourth column winter. MCS, IDC, and NC precipitation values share the same label bar. Hourly data with precipitation ≤ 1 mm h −1 are excluded in the calculation.)
Stage IV precipitation is a mosaic of precipitation estimates based on a combination of NEXRAD and gauge data from 12 RFCs. Therefore, the errors of Stage IV come from several sources, such as inherent NEXRAD biases, radar quantitative precipitation estimate (QPE) algorithm biases, inconsistency in bad gauge data removal among different RFCs, inconsistency in multisensor processing algorithms among different RFCs, and mosaicking border discontinuities (Nelson et al., 2016). The most severe errors occur in the western US, where NEXRAD data are limited, and a gauge-only rainfall estimation algorithm is used (Nelson et al., 2016; Smalley et al., 2014). Hence our data product has a geographical focus east of the Rocky Mountains, with the best NEXRAD coverage in the US. After regridding the Stage IV precipitation into our 4 km domain, we further manually filter out certain "erroneous precipitation" hours and set all precipitation in those hours to missing values. Erroneous precipitation is defined as the sudden appearance and disappearance of a large contiguous area (> 4800 km 2 ) with intense precipitation (> 40 mm h −1 ) (Fig. A9), which is physically not possible. There are 40 h in total in the period 2004-2017 containing such erroneous precipitation.
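Although the screening for erroneous precipitation in this study was done manually, the definition above could in principle be tested automatically. The sketch below is a hypothetical illustration of such a check, not the procedure actually used, and the names and the "adjacent-hour" interpretation of sudden appearance and disappearance are assumptions.

```python
import numpy as np
from scipy import ndimage

PIXEL_AREA_KM2 = 16.0   # ~4 km pixels

def flag_suspect_hours(precip, area_thresh_km2=4800.0, rate_thresh=40.0):
    """Flag hours containing a large contiguous area of intense precipitation
    that is absent in both the previous and the following hour.

    precip : 3-D array (time, y, x) of hourly precipitation (mm h-1)
    Returns a boolean array of suspect time indices.
    """
    suspect = np.zeros(precip.shape[0], dtype=bool)
    structure = np.ones((3, 3), dtype=int)
    for t in range(1, precip.shape[0] - 1):
        labels, n = ndimage.label(precip[t] > rate_thresh, structure=structure)
        for lab in range(1, n + 1):
            blob = labels == lab
            if blob.sum() * PIXEL_AREA_KM2 < area_thresh_km2:
                continue
            # "Sudden appearance and disappearance": intense rain in neither neighbour hour
            if not np.any(precip[t - 1][blob] > rate_thresh) and \
               not np.any(precip[t + 1][blob] > rate_thresh):
                suspect[t] = True
    return suspect
```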
As the FLEXTRKR algorithm is applied to a combination of three independent types of remote sensing datasets, we identify the most robust MCS-IDC events satisfying all the criteria based on the three datasets. This reduces the potential false classification of tracks as MCSs or IDC based on any single dataset. To account for the potential error of ERA5 melting level heights, we require Z H ≥ 45 dBZ above (Z melt + 1) km for convective classification in the SL3D algorithm (Table A2).
The impact of missing data
In the CCS identification step of the FLEXTRKR algorithm, we require the fraction of missing satellite T b in the domain at each hour to be less than 20 %. Otherwise, the hour is excluded from our data product. During 2004-2017, we excluded 716 h with missing satellite T b data, accounting for less than 0.6 % of the total period. The year with the most missing satellite data is 2008, with 206 missing hours (2.3 %), followed by 2004 with 154 h (1.8 %). All other years have no more than 57 missing hours. During the link procedure of the FLEXTRKR algorithm, we search the next hour if a missing hour is encountered, as long as the time gap between the two "linked" hours is less than 4 h. Otherwise, we start new tracks from the next available hour. This method aims to reduce the impact of the missing hours. Considering the high completeness of the satellite T b data in 2004-2017, we conclude that the missing satellite data have little effect on the data product.
We show the distribution of the fractions of valid Stage IV precipitation data in 2004-2017 in Fig. A10. The fractions are over 97 % for all grid cells of the US in the domain. Most grid cells in the US have less than 2 % missing hours, which should have a negligible impact on the data product. Figure A11 shows the fractions of available GridRad reflectivity data from 2004 to 2017 between 1 and 12 km a.s.l. The fractions are relatively high over the majority of the troposphere except for 1 km a.s.l. Based on the criteria of the SL3D algorithm, Z H at 1 km is rarely used and can be easily substituted by Z H at 2 km. Generally, GridRad has good spatial coverage during the period, with most grid cells east of the Rocky Mountains having fractions > 90 % between 2 and 9 km and 80 % between 10 and 12 km. The completeness of the GridRad dataset is relatively lower compared to the satellite T b and Stage IV precipitation datasets, and GridRad Z H is a crucial variable in the SL3D classification and MCS-IDC identification. Therefore, the missing data of GridRad Z H should have some impacts on our data product. However, as an advanced long-term high-resolution 3-D radar reflectivity dataset, GridRad is valuable for constructing a climatological MCS-IDC data product.
Temporal resolution limitation of the source datasets
As we discussed in Sect. 3.2.3, the diurnal cycles of NC precipitation show some possible aliasing from IDC precipitation. Some weak IDC events are so short that the hourly data cannot properly capture their occurrence, especially for GridRad Z H , which only includes reflectivities within ±3.8 min of each hour. We calculate the cumulative distribution functions of PF-based lifetimes for MCS and IDC events and their associated precipitation in the data product for 2004-2017, as shown in Fig. 10. About 75 % of IDC events have a PF-based lifetime of 1 h. Therefore, it is almost certain that we miss some IDC events shorter than 1 h in the data product. Here we give an estimate of the probability p that a given IDC event with a convective signal duration of x minutes is detected by radar:
p = (2 × 3.8)/(60 − x) = 7.6/(60 − x),
where the numerator (7.6 min) is the time window of GridRad observation in each hour, and x is the duration of the IDC event in minutes. The detection probability is only about 25 % when x = 30 min. To obtain a detection probability of 50 %, we require x ≥ 45 min. Hence, we cannot assess the distribution of IDC convective signals with durations less than 1 h using the currently available datasets. Higher-resolution datasets, such as individual NEXRAD radar data, which typically have an update cycle of 4-5 min, are necessary to derive the information. However, as shown in Fig. 10, we find that precipitation from IDC events with a 1 h PF lifetime only accounts for about 10 % of the total IDC precipitation. Therefore, IDC events with PF lifetimes less than 1 h should have a relatively small impact on precipitation.
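The two quoted values follow directly from this expression, as the short check below illustrates; the function name is arbitrary, and capping the probability at 1 for durations approaching one hour is an added assumption.

```python
def detection_probability(x_minutes, window_minutes=2 * 3.8):
    """Probability that an IDC convective signal of x_minutes duration overlaps
    the GridRad observation window (about 7.6 min in each hour)."""
    if x_minutes >= 60.0:
        return 1.0
    return min(1.0, window_minutes / (60.0 - x_minutes))

print(round(detection_probability(30), 2))  # ~0.25
print(round(detection_probability(45), 2))  # ~0.51
```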
The impact of MCS and IDC definition criteria
The separation between MCSs and long-lasting IDC events is somewhat fuzzy (Geerts et al., 2017; Haberlie and Ashley, 2019; Pinto et al., 2015; Prein et al., 2017). Here, we briefly examine the impact of different MCS-IDC definition criteria on the data product. We change the definition of MCSs to relax the CCS and PF size and duration thresholds. Specifically, the second and third criteria listed in Sect. 2.2.2 are modified as follows: (2) CCS areas associated with the track surpass 40 000 km 2 for more than 4 continuous hours, and (3) PF major axis length exceeding 80 km and intense convective cell areas ≥ 16 km 2 exist for more than 3 consecutive hours. We also require that each merge or split track associated with MCS-IDC events has a CCS-based lifetime of no more than 3 h. We keep the definition of IDC the same as described in Sect. 2.2.2, which is a lower limit for the IDC that we can identify based on the source datasets. By using the new definition, as expected, the lifetimes and spatial coverages of MCSs are reduced, and those of IDC change little because most IDC events cannot satisfy the new MCS criteria (Tables 1 and A5). The annual number of MCSs identified in 2004-2017 increases from 454 to 857. The number increases from 122 to 207 in spring, 212 to 434 in summer, 83 to 151 in autumn, and 37 to 62 in winter. As PF-based lifetimes of MCS-IDC events in summer are the shortest (Table 1), the new definition has the most significant impact in summer. The annual number of IDC events decreases from 45 346 to 45 225. Reducing the merge or split lifetime limit retains more independent IDC events, which is why the decrease in the number of IDC events is smaller than the increase in the number of MCSs. Annual mean MCS precipitation east of the Rocky Mountains increases from 313 to 353 mm, while IDC precipitation decreases from 170 to 130 mm. The fraction of MCS precipitation only increases by 6 % (from 45 % to 51 %), compared to the almost doubling of the MCS number (from 454 to 857), suggesting that the MCS definition in the original data product captures most of the important MCSs with heavy precipitation. Similar to the MCS numbers, summer has the largest increase in MCS precipitation amount, from 100 to 119 mm. Annual mean MCS and IDC precipitation intensities decrease slightly, as MCS precipitation intensities are somewhat larger than those of IDC in most regions (Tables A3, A4, A6, and A7). We summarize the regional precipitation statistics of the NGP, SGP, SE, and NE based on the new definition in Tables A6 and A7. Although the new definition changes the absolute values of MCS-IDC characteristics, the contrast between MCS and IDC events is still present. The new definition has small impacts on the spatial distribution patterns of MCS-IDC precipitation, and NC precipitation characteristics are almost the same as before. Therefore, our original definition captures the essential characteristics of MCS and IDC events. In addition, the original data product is complete and flexible. We store all criteria variables of MCS-IDC events in the data product, so users can easily change the definition of MCSs and switch tracks between MCS and IDC without re-running the FLEXTRKR algorithm. There is no need to change the "track" and "merge" lifetime criteria as we do above because they have little impact on the climatological characteristics of MCS and IDC events.
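Because the criteria variables are stored per track, users can re-derive the MCS/IDC split offline without re-running FLEXTRKR. The snippet below sketches how the relaxed sensitivity-test thresholds could be applied to stored track statistics; the dictionary keys mirror the earlier classification sketch and are assumptions, not the actual variable names in the data product.

```python
import numpy as np

# Relaxed thresholds from the sensitivity test (hypothetical field names).
RELAXED = dict(ccs_area_km2=40_000, ccs_hours=4,
               pf_axis_km=80, intense_area_km2=16, pf_hours=3)

def longest_consecutive(mask):
    """Length of the longest run of consecutive True values."""
    best = run = 0
    for flag in mask:
        run = run + 1 if flag else 0
        best = max(best, run)
    return best

def is_mcs(track, c=RELAXED):
    """Re-test the MCS criteria on stored per-track statistics."""
    return (bool(np.any(track["has_cold_core"])) and
            longest_consecutive(track["ccs_area"] > c["ccs_area_km2"]) > c["ccs_hours"] and
            longest_consecutive((track["pf_major_axis"] > c["pf_axis_km"]) &
                                (track["intense_cell_area"] >= c["intense_area_km2"])) > c["pf_hours"])
```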
Recommendations for the usage of the MCS-IDC data product
Considering the limitations and uncertainties mentioned above, we generally recommend using the data product for observational analyses and model evaluations of convection statistics and characteristics over relatively long periods such as a month, a season, or longer to fully take advantage of the long-term dataset, although analysis of individual weather events is also possible as supported by the hourly temporal resolution of the data product. In addition, since the completeness and quality of the source radar dataset degrade dramatically beyond the US border and over the Rocky Mountains (Fig. A11), we recommend the usage of the data product within the CONUS east of the Rocky Mountains to alleviate the impact of the termination of MCS-IDC tracks due to poor radar coverage and missing radar data beyond their maximum scan range. Detailed investigation of a short period or a specific MCS-IDC event is acceptable, but caution should be taken when encountering missing data around the track during the period. Due to the complexity of the algorithms used to develop the data product, it is difficult to quantify the impact of missing data on the MCS-IDC track. Therefore, we do not recommend examining a specific MCS-IDC track if there are too many missing data (precipitation, T b , or Z H ) along the track. Users planning to apply the data product for a specific case study should examine the availability of the source data first, which are also stored in the data product except for 3-D Z H due to the large data volume. Users can access the original 3-D Z H at https://rda.ucar.edu/datasets/ds841.0/ (last access: 2 January 2020) (Table A1).
Lastly, although our sensitivity test in Sect. 4.4 shows that precipitation characteristics are similar between two different sets of MCS-IDC definition criteria, we still recommend that users conduct further sensitivity tests and examine the impact of different definition criteria on the results if the data product is applied to other studies, such as the effects of MCS and IDC events on atmospheric circulation, environmental conditions associated with the initiation and evolution of MCS and IDC events, and MCS-IDC-associated weather hazards.
Data availability
The high-resolution (4 km hourly) MCS-IDC data product and the corresponding user guide document are available at https://doi.org/10.25584/1632005 (Li et al., 2020). The original format of the data files is NetCDF-4, and we archive them as compressed files for each year so that the data product is easily accessible. The user guide contains a brief explanation about the approach to develop the data product and a detailed description of the data file content to help users understand the data product.
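For orientation only, a minimal access pattern might look like the following sketch; the file name and the assumed "time" coordinate are illustrative, and the user guide documents the actual layout.

```python
# Minimal sketch: opening one year of the NetCDF-4 data product after downloading
# and uncompressing it. The file name is an illustrative placeholder.
import xarray as xr

ds = xr.open_dataset("mcs_idc_data_product_2015.nc")   # hypothetical file name

print(ds.data_vars)                   # list stored variables, including the criteria variables
print(ds.attrs)                       # global metadata describing the product
subset = ds.sel(time="2015-06-15")    # pull one day for a quick look, assuming a "time" coordinate
```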
Conclusions
Here we present a unified high-resolution (4 km, hourly) data product that describes the spatiotemporal characteristics of MCS and IDC events from 2004 to 2017 east of the Rocky Mountains over the CONUS. We produce the data product by applying an updated FLEXTRKR algorithm to the NCEP/CPC L3 4 km Global Merged IR V1 T_b dataset, ERA5 melting level heights, the 3-D GridRad radar reflectivity dataset, and the Stage IV precipitation dataset. Climatological features of the MCS and IDC events from the data product are compared, with a focus on their precipitation characteristics. Consistent with our definitions of MCSs and IDC in the FLEXTRKR algorithm, we find that MCSs have much broader spatial coverage and longer duration than IDC events. While IDC events occur much more frequently than MCSs, the mean convective intensities of IDC events are comparable to those of MCSs. MCS and IDC events both contribute significantly to precipitation east of the Rocky Mountains but with distinct spatiotemporal variabilities. MCS precipitation affects most regions of the eastern US in all seasons, especially in spring and summer. The MCS precipitation center migrates northwards from Arkansas in spring to northern Missouri and Iowa in summer, followed by a southward migration to Louisiana in autumn, and finally to Mississippi and Alabama in the Southeast in winter. IDC precipitation mostly concentrates in the Southeast in summer. IDC precipitation shows a significant diurnal cycle in summer months with a peak around 16:00-17:00 local time over all regions east of the Rocky Mountains. In contrast, MCS precipitation peaks during nighttime in spring and summer for most regions except for the Southeast, where MCS precipitation peaks in the late afternoon in summer, similar to IDC precipitation. Lastly, we analyze the potential uncertainties of the data product and the sensitivity of the dataset to MCS definitions and give our recommendations for the usage of the data product. The data product will be useful for investigating the atmospheric environments and physical processes associated with convective systems; quantifying the impacts of convection on hydrology, atmospheric chemistry, severe weather hazards, and other aspects of the energy, water, and biogeochemical cycles; and improving the representation of convective processes in weather and climate models.
SL3D category: Criteria
Convective: Z_H = 25 dBZ echo-top height ≥ 10 km; or Z_H ≥ 45 dBZ above (Z_melt + 1) km; or Z_H peakedness exceeding thresholds in at least 30 % of the echo column between the surface and 9 km. After the above filtering, isolated convective grid points are excluded. Finally, grid points that have Z_Hmax ≥ 25 dBZ and are immediately adjacent to other convective grid points are classified as convective.
Precipitating stratiform: Z_H ≥ 20 dBZ at 3 km; or Z_H ≥ 10 dBZ at 1 km or 2 km.
Non-precipitating stratiform: No echo or Z_H < 20 dBZ at 3 km, and echo present above 3 km. If there is no echo at 3-5 km but echo is present above 5 km, the column is classified as an anvil.
Anvil: No echo at 3-5 km, but echo present above 5 km.
Convective updraft: Convective grid points satisfying (1) Z_Hmax ≥ 40 dBZ and (2) ∂Z_H/∂z ≥ 8 dBZ km⁻¹, with echoes present in at least six of eight horizontally adjacent grid volumes between the surface and 7 km.
Figure A1. Schematic of CCS merging and splitting.
Figure A2. An example of CCS merging and splitting from 2005-05-07T04:00:00 Z to T09:00:00 Z. Cloud 1 and Cloud 2 at 05:00:00 Z merged into Cloud 1 at 06:00:00 Z, and Cloud 1 at 07:00:00 Z split into at least Cloud 1 and Cloud 3 at 08:00:00 Z.
Figure A3. Schematic of "merge" tracks and "split" tracks. Here, we define that an MCS or IDC event initiates when the first PF appears. Therefore, we can derive the initiation time of all MCS and IDC events, which is the basis of this figure. For example, on average, more than seven MCSs initiated at 14:00 local time every June between 2004 and 2017.
Figure A6. Distributions of the fractions of different types of precipitation in each season. Here, precipitation refers to annual mean seasonal amounts for 2004-2017. Hourly data with precipitation ≤ 1 mm h⁻¹ are excluded from the calculation. The first row is for total precipitation, the second for MCS precipitation, the third for IDC precipitation, and the fourth for NC precipitation. The first column shows spring precipitation, the second summer, the third autumn, and the fourth winter. | 2020-10-30T05:06:51.864Z | 2020-10-14T00:00:00.000 | {
"year": 2021,
"sha1": "57028a003766b9da99c69f08352bfd2995e22737",
"oa_license": "CCBY",
"oa_url": "https://essd.copernicus.org/articles/13/827/2021/essd-13-827-2021.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "993ea67db50a053f23cf83477bf9834034acc50f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
125957165 | pes2o/s2orc | v3-fos-license | Theoretical challenges in neutrino scattering studies
New and more precise measurements of neutrino cross sections in the few GeV energy region have renewed interest in a better understanding of electroweak interactions on nucleons and nuclei. This interest comes from neutrino oscillation experiments and their need to reduce systematic errors. Neutrino fluxes used in contemporary long and short baseline experiments (K2K, T2K, MINOS, NOvA, MiniBooNE, MINERvA, …) are peaked in the 1–5 GeV energy domain. In this context, I will present some details about the theoretical development in the description of (anti)neutrino-induced quasielastic scattering and the role of multi-nucleon mechanisms.
Introduction
Knowledge of neutrino interaction cross sections is an important and necessary ingredient in any neutrino measurement, and it is crucial to reduce systematic errors affecting present and future neutrino oscillation experiments, making new discoveries, like the CP violation in the leptonic sector, possible. This is because neutrinos are detected through their interactions with the nuclei that form part of the detectors. For nuclear physics this represents a challenge and an opportunity. A challenge because precise knowledge of neutrino oscillation parameters requires an accurate understanding of the detector responses, and it can only be achieved if nuclear effects are under control. An opportunity because neutrino cross sections incorporate richer information than electron-scattering ones, providing an excellent testing ground for nuclear structure, many-body mechanisms and reaction models. In addition, neutrino cross-section measurements allow us to investigate the axial structure of the nucleon and baryon resonances, enlarging our views of hadron structure beyond what is presently known from experiments with hadronic and electromagnetic probes. Moreover, the nuclear medium effects in these processes can be linked to chiral symmetry restoration in nuclear matter, and shed light on some parts of the QCD phase diagram.
At low energies the neutrino interacts with composite entities such as nucleons or nuclei. Given enough energy, the neutrino can actually begin to resolve the internal structure of the target: it can scatter off an individual quark inside the nucleon, a regime known as deep inelastic scattering (DIS), which manifests itself in the creation of a hadronic shower. In this talk, however, I will address some aspects of low energy neutrino interactions, and I refer the reader to the reviews [1,2,3] (and references therein) on partonic nuclear effects in DIS relevant for the present and future neutrino long and short baseline experiments.
Some of the recent cross section measurements at low energies raised doubts in areas which seemed to be well understood. Thus, in the last years several questions emerged, triggered by these data: what is the role of multinucleon mechanisms in (anti)neutrino-nucleus interactions, and what is their influence on the reconstruction of the neutrino energy? How large is CC (charged current) coherent pion production at a few GeV neutrino energies? What is behind the large discrepancy between MiniBooNE pion production measurements and theoretical model predictions? In this talk, I will concentrate on CC (anti)neutrino-nucleus QE scattering, paying special attention to the so-called "MiniBooNE M_A puzzle".
MiniBooNE determination of M_A and multinucleon absorption
Neutrino and antineutrino CC scattering on nuclei without pions exiting the nucleus (CCQE-like) is a fundamental detection channel for many neutrino experiments [4,5,6,7,8] and has been theoretically studied within different approaches [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32]. Experimentally, it has been generally assumed that most of those events, after subtracting the background due to the production of an on-shell pion in the first step followed by its absorption in the nucleus, could be attributed to the QE scattering of the weak probe off a bound nucleon.
In 2010, MiniBooNE published the first measurement of the muon neutrino CCQE double differential cross section [6], based on this assumption. It turned out that the measured cross sections were much larger than the theoretical predictions [33,2,34], and an unexpectedly large nucleon axial mass, M_A = 1.35 ± 0.17 GeV, was required to describe these data [6,22,16]. The MiniBooNE antineutrino CCQE data [35], selected in a similar way and published three years later, reinforced this puzzling situation. The value of M_A determined from these experiments is in clear conflict with M_A = 1.016 ± 0.026 GeV [36] extracted from early CCQE measurements on deuterium and, to a lesser extent, hydrogen targets, which is, however, in excellent agreement with the pion electro-production result, M_A = 1.014 ± 0.016 GeV, obtained from the nucleon axial radius [34,37] computed within chiral perturbation theory.
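For context, M_A enters these fits through the dipole parameterization of the nucleon axial form factor conventionally assumed in such analyses; a larger M_A slows the fall-off with Q² and therefore enhances the predicted cross section:

```latex
% Conventional dipole ansatz for the nucleon axial form factor, with
% g_A \simeq 1.267 from neutron beta decay, Q^2 the squared four-momentum
% transfer, and M_A the axial mass fitted by the experiments.
F_A(Q^2) \;=\; \frac{g_A}{\left(1 + Q^2/M_A^2\right)^{2}}
```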
The solution to this puzzle came from the inclusion of long range RPA correlations and multinucleon nuclear effects. It was first pointed out in Refs. [13,14] that the QE-like sample includes also events where the W ± is absorbed by two interacting nucleons (excitation of a 2p2h nuclear component in the quantum many body framework). Neglecting re-scattering processes which could eventually produce secondary pions, 2p2h events will not give rise to emitted pions, and thus they are experimentally misidentified as QE events. The importance of these multinucleon events was confirmed in Refs. [23,24]. This latter approach and that derived in Refs. [13,14] also account for RPA corrections, and provided a good description of the MiniBooNE double differential (2D) neutrino [24,26] and antineutrino [28,29] cross sections, using a value of the order of 1 GeV for M A .
The inclusive cross section for the process ν_ℓ(k) + A_Z → ℓ⁻(k′) + X is determined by the W gauge boson self-energy in the nuclear medium [10,23], and in particular by the different modes in which the W can be absorbed (the discussion is similar for antineutrino or neutral current driven processes). The most relevant ones are: absorption by one nucleon, absorption by a pair of correlated nucleons that are exchanging virtual mesons (π, ρ, ...), or the excitation of a Δ or a higher energy resonance, etc. (see Fig. 1). The mechanism depicted in Fig. 1a) accounts for what I will call genuine QE events in what follows. There, the gauge boson W is absorbed by just one nucleon, and it does not give rise to pions in the first step. The MiniBooNE CCQE cross section includes events in which only one muon is detected; this sample (QE-like) does not include events with pions coming off the nucleus, since they would give rise to additional muons after their decay (see Fig. 1c). However, this event sample includes multinucleon events, such as those displayed in Fig. 1e, where the gauge boson is absorbed by two interacting nucleons. In any of these processes, the virtual pion that is produced in the first step is necessarily absorbed by a second nucleon, and thus the process should be classified as a two-nucleon W absorption mechanism. Hence, events originating from this kind of process do not contribute to the genuine QE cross section, but they do contribute to the cross section measured in the MiniBooNE experiment, because they give rise to only one muon in the final state. Other events, like real pion production followed by its absorption, should also be included in the QE-like sample, though the MiniBooNE analysis MC (Monte Carlo) corrects for those. A word of caution is needed here. Consider processes like the one depicted in the bottom panel of Fig. 1c: the crucial difference with the mechanism in Fig. 1e is that in the latter the pion is off-shell instead of being on the mass-shell as in Fig. 1c. As a result of the final state interaction, this real pion might be absorbed or its four-momentum changed, but this unwanted background can in principle be estimated by means of a MC simulation.
The existence of 2p2h contributions, in addition to the genuine QE ones, produces an unwanted bias in the measurements carried out in the far detector of long baseline experiments. This has a quantitative impact on the determination of the oscillation parameters, which might even exceed the current (Δm^2_31 − θ_23) 95% confidence level contours [38,39]. The relevance of the multinucleon mechanisms has other unwanted consequences. Obviously, the neutrino energy reconstruction based on the genuine QE kinematics is not so reliable [25,40,41,42], and that implies another source of systematic uncertainties in the analysis of the experiments. Furthermore, if the contribution of multinucleon mechanisms is substantially different for neutrinos and antineutrinos, as predicted for instance in Refs. [14,18], and this is not properly understood, it could lead to an asymmetry between ν and ν̄ which could be misinterpreted as a consequence of CP violation.
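To make the reconstruction issue concrete, in these analyses the neutrino energy is usually estimated from the muon kinematics alone, assuming a genuine QE interaction on a neutron at rest up to a binding energy E_B; 2p2h events processed through such an estimator are systematically misreconstructed. A standard form of the estimator is:

```latex
% Quasielastic energy estimator from the muon kinematics alone
% (neutron at rest with binding energy E_B; M_n, M_p nucleon masses;
%  E_mu, p_mu, theta_mu the muon energy, momentum and scattering angle).
E_\nu^{\mathrm{rec}} \;=\;
\frac{2\,(M_n - E_B)\,E_\mu \;-\; \left(E_B^2 - 2 M_n E_B + m_\mu^2 + \Delta M^2\right)}
     {2\,\left(M_n - E_B - E_\mu + p_\mu \cos\theta_\mu\right)},
\qquad \Delta M^2 \equiv M_n^2 - M_p^2 .
```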
The microscopic model used in [23,28] describes the neutrino and antineutrino MiniBooNE CCQE flux-averaged cross section dσ/dT_μ/d cos θ_μ [6,35] using M_A = 1.05 GeV, as can be seen in Fig. 2. We should stress that not only multinucleon mechanisms, but also RPA corrections, turn out to be essential to describe the data. Medium polarization or collective RPA correlations account for the change of the electroweak coupling strengths, from their free nucleon values, due to the presence of strong correlations between the bound nucleons through the whole nucleus [10]. RPA strongly decreases the cross section at low energies, while multinucleon mechanisms accumulate their contribution at low muon energies and compensate for that depletion [24]. Therefore, the final picture is that of a delicate balance between a dominant single nucleon scattering, corrected by collective effects, and other mechanisms that directly involve two or more nucleons. Both effects can be mimicked by using a large M_A value, as done in the original experimental analysis [6]. However, neglecting either of the two effects would lead to a poor description of the data and also to incorrect neutrino energy reconstruction. M. Martini and collaborators find similar results [26,29], since their model contains the same ingredients: RPA correlation effects and multinucleon mechanisms. Both models provide similar genuine neutrino QE cross sections, with and without RPA corrections, but differ by about a factor of two in their estimation of the size of the multinucleon effects. As a consequence of this reduced 2p2h contribution, our predictions in Ref. [24] favor a global normalization scale of about 0.9, which is not required by the model of Refs. [26,29]. This value of the overall scale is consistent with the MiniBooNE estimate of a total flux normalization error of 10.7%. The evaluation in [23,24] of multinucleon emission contributions to the cross section is fully microscopic; it starts from a state-of-the-art model [43,44,45,46] for the WN → πN reaction at intermediate energies and contains terms which were either not considered or only approximately taken into account in [13,14,26]. Indeed, these latter works rely on the computation of the 2p2h mechanisms for the (e, e′) inclusive reaction in [47], whose results are then used for neutrino-induced processes.
Recent results from T2K [48] and MINERvA [49] show sizable nuclear effects for all muon kinematics, with models including 2p2h+RPA contributions describing the data better. Thus, 2p2h effects prove essential for a complete modeling of neutrino interactions at low momentum transfer. The model of Ref. [23] is tested in Ref. [49]; although it improves the description of the event rate in the region between the QE and the Δ(1232) peaks, and of the rate for multiproton events, it does not go far enough to fully describe the data. The T2K measurement is not precise enough to discriminate between the models of Refs. [23] and [26].
Conclusions
I have addressed the neutrino and antineutrino MiniBooNE CCQE-like double differential cross-section data using the theoretical model of Refs. [10,23]. The model, which begins with a relativistic local Fermi gas description of the nucleus, includes long range RPA correlations and multinucleon mechanisms. The same model is quite successful in the analysis of nuclear reactions with electron [50,51,52], photon [53,54] and pion [55,56,57,58,59,60,61,62,63] probes and contains no additional free parameters. RPA and 2p2h contributions are essential for the description of the data which, in sharp contrast with previous analyses, can be described using earlier determinations of the nucleon axial mass, around 1 GeV, obtained with both neutrino and electron beams.
The existence of un-modelled 2p2h contributions, in addition to the genuine QE ones, produces an unwanted bias in the measurements carried out in the far detector of long baseline experiments. This has a quantitative impact on the determination of the oscillation parameters and on the neutrino energy reconstruction.
Figure caption (fragment): in the top panels the red-solid line is obtained with a Fermi gas model with M_A = 1.32 GeV, without including RPA and multinucleon contributions, while the red-dash-dotted and blue-dashed curves in the bottom panels correspond to QE-RPA and to 2p2h events, respectively. See Refs. [24] and [28] for further details. | 2019-04-22T13:07:57.575Z | 2017-09-01T00:00:00.000 | {
"year": 2017,
"sha1": "db22c3cd6f53ca5b1564a4c753e83c804beb0bc5",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/888/1/012013",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "9d7cb3c176f01951e16226b2f6c08efcf39ae5b3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
235380759 | pes2o/s2orc | v3-fos-license | Effects of a modified muscle sparing posterior technique in hip hemiarthroplasty for displaced intracapsular fractures on postoperative function compared to a standard lateral approach (HemiSPAIRE): protocol for a randomised controlled trial
Introduction Currently National Institute for Health and Care Excellence clinical guidelines in the UK suggest that surgeons performing partial hip replacements (hemiarthroplasty) should consider using the lateral approach. Alternatively, a newer, modified posterior approach using a muscle sparing technique named ‘Save Piriformis and Internus, Repairing Externus’ (SPAIRE) can be used leaving the major muscles intact. This randomised controlled trial (RCT) aims to compare the SPAIRE approach to the standard lateral approach, to determine if it allows patients to mobilise better and experience improved function after surgery. Methods and analysis HemiSPAIRE is a two-arm, assessor-blinded, definitive pragmatic RCT with nested pilot and qualitative studies. Two hundred and twenty-eight participants with displaced intracapsular fractures requiring hip hemiarthroplasty will be individually randomised 1:1 to either the SPAIRE, or control (standard lateral approach) surgical procedure. Outcomes will be assessed at postoperative day 3 (POD3) and 120 (POD120). The primary outcome measure will be level of function and mobility using the Oxford Hip Score at POD120. Secondary outcomes include: De Morton Mobility Index (DEMMI), Cumulated Ambulatory Score and Numeric Pain Rating Scale (NPRS) at POD3; DEMMI, NPRS and EQ-5D-5L at POD120, complications, acute and total length of hospital stay, and mortality. Primary analysis will be on an intention-to-treat basis. Participant experiences of the impact of surgery and recovery period will be examined via up to 20 semi-structured telephone interviews. Ethics and dissemination The protocol has been approved by Yorkshire and the Humber—Bradford Leeds Research Ethics Committee. Recruitment commenced in November 2019. Findings will be disseminated via research articles in peer-reviewed journals, presentations at conferences, public involvement events, patient groups and media releases. A summary of the trial findings will be shared with participants at the end of the study. Trial registration number NCT04095611.
Strengths and limitations of this study
► Comprehensive patient and public involvement: outcome measures were inspired by patients who said that mobility and speed to regaining independence are the most important outcomes after partial hip replacement surgery.
► Addresses the current evidence gap on the impact on patient mobilisation and function after surgery using different techniques (Save Piriformis and Internus, Repairing Externus vs lateral).
► A pragmatic multicentre study across six National Health Service Trusts in the South West of England with broad inclusion criteria to recruit a population that mirrors as closely as possible this frail group of patients, as reduced cognition is not an exclusion criterion.
► Due to the frail characteristics of the target population, a relatively high dropout rate is anticipated; however, this has been accounted for in the sample size calculation.
► Due to COVID-19, the primary outcome measure was changed to a self-report measure, meaning some outcomes were not collected in the manner originally planned.
INTRODUCTION
Hip fractures are common in the elderly. Over 20 000 cases of hip hemiarthroplasty (replacement of the fractured femoral head) are performed annually in England, Wales and Northern Ireland. 1 Hip fracture patients endure debilitating loss of function, and recovery is often complex and challenging. 2 The average total length of stay for hip fracture admissions is over 21 days, representing over 4000 National Health Service (NHS) hospital beds occupied by hip fracture patients at any one time. 3 The National Institute for Health and Care Excellence (NICE) recommends replacement arthroplasty (total hip replacement or hemiarthroplasty) to patients with a displaced intracapsular hip fracture. For patients who are not eligible for total hip replacement, hemiarthroplasty should be offered. 4 As a commonly performed procedure, it is important to consider innovative hemiarthroplasty techniques that may allow better and safer rehabilitation in this frail group of patients.
When planning a hip hemiarthroplasty, surgeons have a choice of surgical approaches. NICE guidelines 4 currently recommend using a lateral rather than a conventional posterior approach. This advice is based on evidence described as being of 'very poor quality'. 4 For adequate exposure, the lateral approach requires division and subsequent repair of 50% or more of the tendon attachments of the gluteus medius and minimus muscles on to the greater trochanter. These muscles are essential for normal gait. Literature quoted suggests a reduced dislocation rate for the lateral approach; however, this surgery has the disadvantage that the relatively extensive division of tendon attachments required may result in reduced levels of function postoperatively. Recent evidence from a cohort of over 20 000 patients from the Norwegian Hip Fracture Register 5 reported better patient-related outcome measures (pain, patient satisfaction and health-related quality of life) with a standard posterior approach compared with the lateral approach. However, the study also confirmed the higher dislocation rates in the conventional posterior approach group. Some studies report dislocation in up to 10% of patients undergoing this hip procedure through a standard posterior approach and such complications can lead to potentially catastrophic consequences. 6 To address the issue of dislocation in hip arthroplasty, modified surgical procedures have been attempted using muscle sparing techniques. In 2012, Han et al 6 described a modified posterior approach for use in patients with neurological disorders requiring hip hemiarthroplasty, where the piriformis, gemellus superior, obturator internus and part of quadratus femoris muscles were left intact, which combined with a standard capsule repair led to a reduced incidence of dislocation. 6 In 2016, the Hip Unit at the Royal Devon and Exeter NHS Foundation Trust developed a modified technique using a posterior approach for hip hemiarthroplasties applicable to all patients. This approach involves division of only the obturator externus tendon and part of quadratus femoris muscle from their femoral insertions. These are repaired, along with the posterior capsule, at the end of the surgery, with strong non-absorbable suture (as opposed to a standard repair) through an enhanced trans-osseous technique onto the posterior aspect of the greater trochanter. This is a modification of the more extensive posterior repair technique developed by Carlton Savory, MD at The Hughston Clinic, Georgia, USA. The tendon insertions of piriformis, gemellus superior, obturator internus and gemellus inferior muscles are spared, and the extensive abductor muscle insertions of gluteus medius and gluteus minimus onto the greater trochanter are left undisturbed, minimising the potential negative impact of dividing muscles during surgery on postoperative recovery and mobility. This technique is named 'SPAIRE' as it allows the surgeon to 'Save Piriformis and Internus, Repairing Externus'. These muscles have been shown to act as the main extensor and abductor of the flexed hip, which is of significant importance for movements such as rising from a chair or climbing stairs. 7 This contrasts with the standard lateral approach, where a significant proportion of the gluteal muscle insertions is divided, potentially impacting hip function.
The combination of this muscle sparing approach with an enhanced capsule repair aims to provide sufficient stability to enable patients to mobilise full weight-bearing, without any of the specific restrictions currently included in routine postoperative posterior approach protocols. The preservation of these muscle insertions may replicate or even surpass the improved patient-related outcome measures observed in the study by Kristensen et al 5 and might achieve other benefits relating to more complete rehabilitation, reduced hospital stay and diminished requirement for social service support on discharge, with consequent savings from the health and social care budget.
Review of existing evidence
A scoping review identified 13 studies that compared the traditional posterior approach with the lateral approach, including two randomised controlled trials (RCTs). 8 These studies evidence a trend for increased incidence of dislocation using the traditional posterior approach versus other approaches, with the exception of one paper describing good results with a modified version of the posterior approach. 9 However, this approach was only attempted in a small subset of the population prone to these fractures. Evidence with regard to other outcomes is inconsistent and of limited quality, with few RCTs to inform guidelines. 8
Aims
HemiSPAIRE is an RCT which, with its focus on postoperative function and mobility, aims to provide high quality evidence on the relative benefits of the SPAIRE surgical technique when compared with the standard lateral technique. The findings may help to inform and update current guidelines. 4 In doing so it aims to contribute to improved function, mobility and quality of life outcomes for hip fracture patients, many of whom are elderly and frail, and reduce length of hospital stays.
Primary objective
To test whether the SPAIRE technique improves postoperative function and mobility at 120 days following surgery in adults with a displaced intracapsular hip fracture requiring hemiarthroplasty, compared with the standard lateral approach through conducting a definitive two-arm RCT.
Secondary objectives
Test whether the SPAIRE technique results in improved early function, mobility, pain and quality of life at 120 days, with reduced length of hospital stay, complication rates and mortality compared with the standard lateral approach, through collecting secondary outcome measures in the trial.
Investigate how patients experience the recovery period and mechanisms of recovery after surgery, by conducting a qualitative study with a sub-sample of patients in each trial arm.
Engage the contribution of a patient and public involvement (PPI) group to ensure the conduct and outputs of the study are relevant and useful to patients.
METHODS AND ANALYSIS
Trial design
This is a definitive pragmatic, multi-centre RCT in patients attending hospital with a displaced intracapsular fracture requiring hip hemiarthroplasty. Patients will be randomised to have their operation performed either by posterior approach (SPAIRE technique) or the standard lateral approach. Patients, ward staff and all research staff involved in postoperative evaluations will remain blinded to allocation. Outcomes will be recorded on the third postoperative day (POD3) and 120 days after surgery (POD120). These include measures of function, mobility, quality of life, pain, surgical complications, hospital length of stay, mortality, discharge destination and place of residence. Figure 1 shows a flow chart of the trial design.
Trial setting
Six acute hospitals in the South West of England are recruitment centres, with a minimum of two surgeons per site. Training has been provided by the chief investigator (CI) and co-applicant JT, using lectures, mentorship, observations, videos and one-to-one sessions as necessary. All participating surgeons have carried out a minimum of five cases using the posterior approach (SPAIRE technique) prior to participating in the study, and are prepared to undertake both procedures. Each participating surgeon uses both techniques. If a surgeon is not in equipoise, then he/she will not participate. Trainees who complete training under direct supervision of the principal investigator (PI) at each site are eligible to operate under supervision and are included on the list of surgeon collaborators.
Participant eligibility criteria
All patients requiring hemiarthroplasty for a displaced intracapsular hip fracture are considered for inclusion. Inclusion criteria: patients aged 60 years or older presenting with an intracapsular hip fracture requiring hip hemiarthroplasty 4 who are resident in the South West of England. Exclusion criteria: patients who were immobile (unable to walk) before hip fracture, patients not expected to live until POD120 due to chronic illness and receiving surgery for palliative care, and use of femoral stems not of a proven stem design, in line with recommendations set by NICE clinical guideline on hip fracture management. 4 A sizeable proportion of this population suffer dementia and/or temporary delirium, and are not excluded; cognitive ability is not part of the eligibility criteria.
Trial procedures
Recruitment and participant identification
When a patient attends hospital with a confirmed intracapsular hip fracture, they are considered for inclusion. If potentially eligible, the study is discussed with the patient and/or their carer(s). Surgery usually takes place 12 hours or more after admission or on the day after, providing patients with time to consider their participation. Potential participants are identified by orthopaedic surgeons involved in the study (co-applicants and collaborators) who admit patients under their care.
Consent
Identified patients interested in participating in the study are invited to read the participant information sheet and, if interested, to provide informed consent. Researchers will be present to encourage potential participants to ask questions and to ensure participants fully understand the purpose of the research, risks associated with the intervention, obligations of participation and their right to withdraw at any time. The main consent forms must be fully completed and signed before patients are accepted on to the study (see online supplemental material provided).
The qualitative consent form, if applicable, must be fully completed and signed between baseline and POD3. If a patient lacks capacity and is unable to consent, potential participation is discussed and consent later sought with a person whose relationship to the patient makes them suitable to act as his/her legal representative, for example, personal or professional consultee. If possible, the study is discussed or communicated to the participant in a way appropriate to their understanding. Participants who lack the mental capacity to consent and do not seem in agreement with any part of the study, even if agreement has been given by another, are not included. The participant's general practitioner is notified by letter that his/her patient is participating in the study.
Randomisation scheme
Randomisation is undertaken as late as practically possible.
There is no special preparation required in theatre, and no difference in equipment required for either surgical technique. Participants will be individually randomised to receive the SPAIRE or lateral procedure in a 1:1 ratio. Concealed allocation is determined by the UKCRC-registered Exeter Clinical Trials Unit (CTU) using a validated password-protected web-based system. Allocation is based on random permuted blocks of varying size, and stratified by study site and by cognition level (impaired vs non-impaired). The surgeon is informed of allocation by the CTU via email through nhs.net mail. The CI and PI at site are copied in.
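For illustration, an allocation sequence of this kind can be generated along the lines of the sketch below; the block sizes, site labels and seeds are illustrative assumptions, since the actual concealed sequence is produced by the CTU system.

```python
# Minimal sketch of a 1:1 stratified, permuted-block randomisation list
# (varying block sizes), as described above. Block sizes and stratum labels
# are illustrative assumptions, not the CTU's actual configuration.
import random

def permuted_block_sequence(n_participants, block_sizes=(4, 6), seed=None):
    """Return a 1:1 SPAIRE/lateral allocation sequence using random permuted blocks."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        size = rng.choice(block_sizes)                    # varying block size
        block = ["SPAIRE"] * (size // 2) + ["lateral"] * (size // 2)
        rng.shuffle(block)                                # permute within the block
        sequence.extend(block)
    return sequence[:n_participants]

# One list per stratum: study site x cognition level (impaired / non-impaired)
sites = ["site_A", "site_B"]                              # illustrative site labels
strata = [(s, c) for s in sites for c in ("impaired", "non-impaired")]
allocation_lists = {stratum: permuted_block_sequence(40, seed=i)
                    for i, stratum in enumerate(strata)}
print(allocation_lists[("site_A", "impaired")][:8])
```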
Blinding
Patients and research staff performing outcome assessments are blinded to treatment allocation. Surgeons and the operative team are unblinded. An unblinded coordinator transcribes the surgical data to the database. There is no difference between the SPAIRE and lateral approach techniques in the following: surgical time taken, application of surgical dressing or postoperative care. For medicolegal reasons, the surgical approach used in the procedure is specified in the operation notes. A cover sheet is attached in front of the printed operation note stating that the patient is a participant in the trial and reminding the research team to avoid inadvertent unblinding to treatment allocation during postoperative assessments.
Trial treatments
Trial interventions
The two groups differ only in the surgical approach to the hemiarthroplasty. All preparation for surgery, patient positioning, skin incision, other aspects of surgery, surgical dressing and postoperative care, are the same according to current practice.
SPAIRE technique through the posterior approach to the hip
If randomised to this arm of the trial, the SPAIRE technique is performed. This involves a modified muscle-sparing posterior approach where insertions of piriformis, superior gemellus, obturator internus and inferior gemellus are spared with division of only obturator externus and part of quadratus femoris. The single divided tendon and posterior capsule are subsequently repaired with a transosseous repair to their initial position prior to closure. The insertions of the abductor muscles are left intact throughout the procedure.
Lateral approach to the hip
If randomised to this arm of the trial, the patient is prepared and the operation performed in accordance with criteria set by the study, so as to minimise issues of standardisation with this approach. This means that the gluteus medius and minimus insertions onto the greater trochanter are partially divided anteriorly, leaving the posterior part of their insertions intact. The anterior capsule is divided and subsequently repaired prior to closure, followed by repair of the detached portion of the gluteal muscles.
Baseline data collection and trial outcome assessments
Prefracture (baseline) participant characteristics, collected on the day of surgery, include: physical status at time of operation, measured using the American Society of Anaesthesiologists' (ASA) Physical Status Classification, and level of cognition, determined by the surgical team using the Abbreviated Mental Test. Prefracture mobility and quality of life measures, collected retrospectively at 3 days after surgery, include: Oxford Hip Score (OHS) and EuroQol EQ-5D-5L.
For details of trial outcome assessments see table 1.
Qualitative research
We are conducting up to 20 semi-structured telephone interviews with patients (10 per arm) to examine their experience of the impact of surgery and recovery period, including factors such as pain, mobility, function, independence and quality of life. Participants are sampled from across participating sites. Interviews are conducted at POD120, after the quantitative data are collected, to gather information on patient experiences over the 4 months after their surgery. Written consent to take part in a telephone interview is included within the main trial consent form and confirmed verbally with patients when contacted for interview. Patients who lack capacity to consent are not contacted. The interview schedule is designed with advice from the patient and public involvement group, and two physiotherapists. Data will be analysed using thematic analysis, 10 with NVivo V.12. 11 The analysis will employ a combined deductive and inductive approach, and be underpinned by a critical realist perspective. We will triangulate the qualitative findings with quantitative findings on mobility, function and quality of life.
Patient safety and reporting
This trial follows Sponsor (Royal Devon and Exeter NHS Trust) standard operating procedures (SOPs) on safety reporting. Serious Adverse Events (SAEs) within this study are defined as any untoward medical occurrence that:
► Results in death.
► Is life-threatening.
► Requires inpatient hospitalisation or prolongation of existing hospitalisation.
► Results in persistent or significant disability/incapacity.
► Other 'important medical events' may also be considered serious if they jeopardise the participant or require an intervention to prevent one of the above consequences.
All SAEs will be reported to Exeter CTU and the local site Research and Development team within 24 hours of the PI being aware of the event. Exeter CTU follows up all SAEs. An SAE occurring to a participant is reported to the research ethics committee (REC) where, in the opinion of the CI, the event was 'related' (resulted from administration of any of the research procedures) and 'unexpected' in relation to those procedures, within 15 working days of the CI becoming aware of the event. Any adverse event that does not fit the definition of serious above is recorded and reported in 6-monthly safety reports to the Independent Data Monitoring Committee/Trial Steering Committee (IDMC/TSC).
Sample size
The primary outcome is the OHS measured at POD120. This gives a score between 0 and 48. 12 13 The minimal clinically important difference for OHS when comparing two groups has been estimated to be 5 points, with an SD of 9 points. 14 If we conservatively use an SD of 10 (ie, an effect size of 0.5), with 90% power, based on a t-test of two independent means at the 5% level of significance, we require 85 patients per trial arm, that is, a total of 170. Allowing for 25% drop-out, the total recruitment target is 228 (114 in each arm). On average, each participating site is expected to recruit around 38 patients in total, or around two to three patients per month, over 18 months.
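As a cross-check of the figures quoted above, the calculation can be reproduced with a standard power routine; the sketch below (using statsmodels, and assuming the protocol's rounding conventions) returns approximately 85 participants per arm and a recruitment target of 228.

```python
# Minimal sketch reproducing the sample size reasoning above: effect size 0.5
# (5-point OHS difference, SD 10), 90% power, two-sided alpha 0.05, then
# inflation for 25% drop-out.
import math
from statsmodels.stats.power import TTestIndPower

n_per_arm = TTestIndPower().solve_power(effect_size=0.5, power=0.90, alpha=0.05,
                                        alternative="two-sided")
print(round(n_per_arm, 1))            # about 85 participants per arm

n_total = 2 * round(n_per_arm)        # 170 in total, matching the protocol's rounding
target = math.ceil(n_total / (1 - 0.25) / 2) * 2   # allow for 25% drop-out, keep arms equal
print(target)                         # 228 (114 in each arm)
```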
Monitoring recruitment and follow-up: internal pilot phase
Each site is expected to recruit two to three participants per month, but is allowed to recruit more participants if they are able to. The total target number recruited at 6 months is 72 participants. If the total number of participants at 6 months is 60 or more, the study will continue as planned. If the total number of participants at 6 months is less than 30, the trial will be stopped. If the number is between 30 and 60 participants, we will review procedures to see what improvements might be made, and discuss progress with the IDMC/TSC and the funder. Follow-up rates will also be reviewed at 6 months and if below what has been predicted (ie, 75%) we will review procedures and see what improvements might be made. If recruitment is progressing as expected, but follow-up rates are not as expected at 10 months the IDMC/TSC will be asked to decide if the trial should continue.
Statistical analysis
The primary analyses are pre-specified and a statistical analysis plan will be drafted and agreed by the IDMC/TSC and signed off by the independent statistician on the IDMC/TSC, prior to analyses. The study will be reported in accordance with the principles of Consolidated Standards of Reporting Trials guidelines. 15 There are no planned interim analyses. Primary analyses will be conducted on an intention-to-treat basis, blinded to group allocation.
The primary outcome, OHS at POD120, will be compared between study arms using linear regression, adjusting for site, cognitive impairment and prefracture characteristics (age, gender, place of residence, comorbidities (ASA grouped into categories: 1 or 2, 3, 4+)). Continuous secondary outcomes (De Morton Mobility Index (DEMMI), 16 Cumulative Ambulatory Score (CAS) and Numeric Pain Rating Scale (NPRS) at POD3; DEMMI, EQ-5D-5L, NPRS at POD120; acute and total length of stay) will be compared between study arms using linear regression, using the same adjustment variables. Results will be presented as means and SD in the two study arms and estimated mean differences with 95% CIs and p values. As we expect length of stay to be skewed, we will check the validity of the CIs for that outcome using bootstrap methods. Frequencies of death and surgical complications (by type) within 120 days follow-up will be presented, in the two study arms. If there is no clear evidence of non-proportional hazards, Cox regression will be used to analyse time to death, and time to complication. Discharge destination (ie, whether the same as prefracture place of residence), place of residence at 120 days (whether the same as prefracture place of residence) will be compared between study arms using logistic regression. For each of these outcomes, the number of events will be checked when considering adjustment factors to be included in the analyses.
Unadjusted analyses of the primary outcome and secondary outcomes will also be run.
Analysis of safety outcomes (including operative complications) will be based on the per-protocol population as well as on an intention-to-treat basis.
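By way of illustration, the adjusted comparison of the primary outcome could be specified roughly as in the sketch below; the dataset and column names are hypothetical placeholders, and the signed-off statistical analysis plan governs the definitive model.

```python
# Minimal sketch of the adjusted analysis of the primary outcome (OHS at POD120):
# linear regression of OHS on trial arm, adjusting for site, cognitive impairment
# and prefracture characteristics. Column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hemispaire_pod120.csv")   # hypothetical analysis dataset

model = smf.ols(
    "ohs_pod120 ~ C(arm) + C(site) + C(cognitive_impairment) "
    "+ age + C(gender) + C(residence) + C(asa_category)",
    data=df,
).fit()

# The coefficient on the arm term is the adjusted mean difference between
# SPAIRE and lateral, with its 95% CI and p value in the summary table.
print(model.summary())
```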
Data management and confidentiality
Data collection tools and source document identification
The data collection tool for this study is paper Case Report Forms (CRFs). Data are entered directly onto the CRFs and considered source documents. The CRF consists of standardised outcome measures (listed in table 1).
Data handling and record keeping
The recruitment sites store all original signed informed consent forms and copies of the CRF pages. Information on these documents is transcribed at site to a trial database. The CTU Trial Manager checks the trial database for data completeness and liaises with sites regarding any data queries.
Access to data
Direct access is granted to authorised representatives from the Sponsor, host institution and the regulatory authorities to permit trial-related monitoring, audits and inspections in line with participant consent.
Data protection and patient confidentiality
The trial ID is used to identify data collected on CRFs and stored on the CTU database. Access to the CTU database is password protected and limited to those individuals necessary for quality control, audit and analyses. The Sponsor acts as the data controller for this study and will archive identifiable information for up to 5 years after the study has finished. Non-identifiable information will be kept in an open access archive managed by Exeter University indefinitely.
Archiving
Archiving is authorised by the Sponsor following submission of the end of trial report. The Sponsor is responsible for archiving the essential documents. Exeter CTU is responsible for archiving the trial database. All essential documents will be archived for 5 years after completion of trial. Destruction of essential documents will require authorisation from the Sponsor.
Monitoring, audit and inspection
A Trial Monitoring Plan has been developed and agreed by the Trial Management Group (TMG), IDMC/TSC (for charter contact lead author) and CI based on the trial risk assessment. Monitoring is being conducted by the CTU Trial Manager both remotely using the trial database and also with in-person visits to sites. The CTU Trial Manager monitors participant enrolment, consent, eligibility and allocation to trial groups; adherence to trial interventions and policies to protect participants, including reporting of harm and completeness, accuracy and timeliness of data collection. Site staff are expected to assist the Trial Manager when requests for information are made or when an in-person site visit is arranged.
Patient and public involvement
The premise and the primary outcome of this trial was informed by discussions with patients about key outcomes for a separate project. We continue to integrate the involvement of patients in this work, organised by patient co-applicant AA and PPI facilitator EC. To date patients/carers with experience of hip fractures have helped develop the study design, plain English summary, patient facing documents and the qualitative interview schedule; they will also be closely involved in codesigning patient friendly dissemination materials.
ETHICS AND DISSEMINATION
Ethics
The protocol has been approved by the Yorkshire and the Humber-Bradford Leeds Research Ethics Committee (REC Reference: IRAS 258327). The trial is conducted in accordance with the study protocol, the principles of the Declaration of Helsinki, International Conference on Harmonisation of Good Clinical Practice and the Medicines for Human Use (Clinical Trials) Regulations, 2004. 17 It is also conducted in accordance with the UK Policy Framework for Health and Social Care Research, 18 the Mental Capacity Act 2005 19 and the Data Protection Act 2018. 20 The trial has been adopted by the National Institute for Health Research (NIHR) Clinical Research Network and has relevant local NHS research approvals. The trial is sponsored by Royal Devon and Exeter NHS Trust, and managed by the UKCRC-registered Exeter CTU.
Amendments
Sponsor SOPs are being followed for amendments. The decision to make an amendment is made by the TMG with Sponsor approval. The Sponsor decides whether an amendment is substantial or non-substantial. The Exeter CTU Trial Manager submits amendments to the REC, records approvals and communicates approved amendments to sites. Amendment history will be tracked and recorded in the Trial Master File maintained by Exeter CTU.
Dissemination and impact activities
Trial progress is reported at TMG and IDMC/TSC meetings. We will follow established practice in our institution in disseminating the results of the HemiSPAIRE trial using the widest range possible of peer reviewed scientific journals, professional publications and national academic meetings. We will present at national and international conferences. If proven to be superior, the SPAIRE technique is likely to continue at recruiting sites and also adopted in other centres around the UK. Results will be incorporated into our clinical training programmes and we will make recommendations to regulatory bodies such as NICE. At the end of the trial, we will seek input from our PPI representatives to help disseminate a lay summary of the findings to study participants. Results will be disseminated via public involvement events, patient groups, networks and media releases. | 2021-06-10T06:16:32.449Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "3b77a3fb80c9ddc4e7e233fca6bcc0fa3a5952fd",
"oa_license": "CCBY",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/11/6/e045652.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "10bf2b3a83d2f52337b363de945d9bee5d6fe7fc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
7466066 | pes2o/s2orc | v3-fos-license | Effect of “Deqi” during the Study of Needling “Wang's Jiaji” Acupoints Treating Spasticity after Stroke
Background. Acupuncture has been shown to reduce spasticity and prevent the onset of spasticity after stroke. The purpose of this study is to assess the effect of "Deqi" during needling of "Wang's Jiaji" acupoints for treating spasticity in the early stage of stroke. Methods. This study is a multicenter, prospective, randomized, controlled trial. 238 patients with stroke (<21 days) participated and were randomly allocated to the verum-acupuncture group (n = 121) or the sham-acupuncture group (n = 117). The verum-acupuncture group received verum acupuncture required to produce the sense of "Deqi," while the sham-acupuncture group received sham acupuncture without "Deqi." Patients in both groups followed the same 30 min acupuncture regimen 5 times per week for a period of 4 weeks. Scales of MAS, FMA, ADL, MBI, NIHSS, SS-QOL, and MRS were measured at baseline and at 2, 4, and 12 weeks after intervention. Results. Significant differences were observed between the two groups. The distribution of MRS ratings differed significantly between groups after 4 weeks (P = 0.017). The scores of MAS, FMA, Barthel, and SS-QOL in the verum-acupuncture group increased significantly compared with the sham-acupuncture group after 12 weeks. There was a 14% reduction in elevated muscle tension in the verum-acupuncture group. Conclusion. Needling "Wang's Jiaji" points with the sensation of "Deqi" in the early stage may reduce the occurrence and decrease the severity of spasticity after stroke.
Spasticity after Stroke.
Spasticity is explained by a loss of inhibition of reflex activity. Spasticity after stroke is due to damage to the higher motor control centers of the brain: regulation and inhibition of lower motor centers such as the spinal cord are interrupted, the intrinsic activity of the lower centers is released, and the excitability of the motor loop is enhanced. The prevalence of spasticity after stroke is about 65% worldwide [1] and about 80%∼90% in China [2]. Spasticity has a disabling effect on stroke patients through reduced mobility, which may limit the potential success of rehabilitation [3][4][5]; it affects activities of daily living (ADLs) [6], quality of life (QoL), and work, and adds economic and caregiver burdens [3,7,8]. Also, the physical limitations associated with spasticity confer a risk of falls and consequent fractures [9].
The management of spasticity after stroke should be guided by its potential impact on function and well-being, not only by the difficulty with passive muscle stretch or the loss of range of motion [10]. Oral medications, nerve blocks, botulinum neurotoxins (BoNT), intrathecal baclofen therapy (ITB), and surgical intervention have all been tried, either separately or in combination, to treat spasticity, but the success rate is not high. More powerful evidence from randomized controlled studies is needed [11]. There is compelling evidence that acupuncture (Ac) may have a role in poststroke rehabilitation [12].
1.2. Acupuncture. Acupuncture has been used in China for more than 3000 years and has recently been increasingly integrated into mainstream biomedicine [13]. It is now commonly used in stroke patients [14][15][16] and is one of the most important methods for improving dysfunction after stroke [17]. It has been shown to reduce spasticity [18,19] and to have minimal side effects [20]. In particular, early interventions may prevent the onset of spasticity after stroke and slow or limit its progression [21]. Recent systematic reviews suggest that acupuncture may be a useful adjunct to stroke rehabilitation, but stronger conclusions are limited by inappropriate designs, small sample sizes, and lack of appropriate controls [8,22,23]. Most previous studies provide limited objective evidence of beneficial effects in terms of reduction in spasticity after acupuncture treatment, and a systematic quantification of the ability of acupuncture to reduce spasticity is also lacking. What is more, data show that any painful stimulation of the limbs, including that produced by acupuncture, may cause withdrawal reactions of the flexors and extensors that present as spasticity [24]. The choice of the traditional acupoints used on the limbs to reduce spasticity is debated [25]. We designed this strictly randomized and controlled study, with a larger number of cases, to evaluate the efficacy of "Wang's Jiaji" points on the back in treating spasticity.
Deqi. "Deqi" is a composite of unique sensations elicited when acupuncture stimulates. According to the theory of traditional Chinese medicine (TCM), it is regarded that the application of acupuncture through stimulating certain acupoints is to activate "qi" and blood of meridians and collaterals and to regulate the function of internal organs so as to prevent and treat diseases. Therefore, "deqi, " which literally means "the arrival of vital energy, " is a prerequisite for clinical effects and also an important judgment of the exuberance and decline of meridian "qi" and the prognosis of disease [26].
Aims. The aim of our study is to evaluate the efficacy of acupuncture at nontraditional acupoints in poststroke spasticity patients.
Methods
This study is a prospective, randomized, single-blind, controlled clinical trial. It was performed according to common guidelines for clinical trials (Declaration of Helsinki; International Conference on Harmonisation (ICH)/WHO Good Clinical Practice (GCP) standards, including certification by an external audit). The trial protocol was approved by the Research Ethical Committee of Beijing Hospital of Traditional Chinese Medicine Affiliated to Capital Medical University. The trial was registered with Current Controlled Trials (ISRCTN84985339).
Participants. All patients treated in the stroke wards of Beijing Hospital of Traditional Chinese Medicine affiliated to Capital Medical University, Huguosi Hospital affiliated to the Beijing University of Chinese Medicine, and Beijing Fangshan Hospital of Traditional Chinese Medicine were screened at the inpatient clinic 2 weeks after the onset of stroke according to the following criteria. The trial was conducted from October 2009 to June 2013.
(3) Subjects enrolled in other trials within the last 3 months.
(4) Serious comorbid primary heart, liver, kidney, or hematopoietic system disease, or psychiatric illness.
Sample Size. Sample size estimation was based on previous data [28]: Fugl-Meyer Assessment (FMA) scores can improve by 36.78 points after 6 months of basic treatment. Our previous study [29] indicated that FMA scores in the verum-acupuncture group can improve by 41.34 points on average. For the comparison of two means, the standard deviation was estimated to be 10.2 points. Based on these assumptions, a sample size of 101 persons per group is needed to reach a statistical power of 90% at an alpha of 0.05. Assuming that 20% of participants might drop out during the course of the study, a target of 254 participants was set. During the study, 263 cases were actually enrolled, 25 cases dropped out, and 238 cases were randomly allocated to the verum-acupuncture group (n = 121) and the sham-acupuncture group (n = 117).
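For illustration, the following Python sketch reproduces this type of calculation with the values quoted above (expected difference of 41.34 - 36.78 FMA points, standard deviation 10.2, two-sided alpha 0.05, 90% power, 20% dropout). It uses the common normal-approximation formula for comparing two means; the exact result depends on the formula and software used, so it comes close to, but does not exactly match, the reported 101 participants per group.

```python
# Sketch of a two-sample (two means) sample size estimate; values taken from the
# text above, formula is the standard normal approximation (not necessarily the
# exact method used in the trial).
from math import ceil
from scipy.stats import norm

delta = 41.34 - 36.78          # expected between-group difference in FMA improvement
sd = 10.2                      # assumed common standard deviation
alpha, power = 0.05, 0.90

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)
n_per_group = ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2)
n_enrolled = ceil(2 * n_per_group / (1 - 0.20))   # inflate for 20% expected dropout

print(n_per_group, n_enrolled)
```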
Randomization and Blinding. Participants were consecutively enrolled and block-randomized as they entered the trial at the inpatient clinic. The randomization procedure was computerized and organized by the Research Center of Clinical Epidemiology affiliated to Peking University. Block randomization was used to generate the random allocation sequence, and the predetermined, computer-generated assignments were sealed in opaque envelopes. The envelopes were numbered consecutively and connected into a strip; each envelope was separated from the strip and opened in sequence only after the baseline period, once the patient had been registered in the study. Patients and assessors were blinded to the acupuncture treatment administered. Only the random number, not the group assignment, appeared on the case report form (CRF) tables, and evaluators were not permitted to ask patients about their treatment. Researchers who enrolled patients and estimators who collected data in the participating hospitals were trained by the designer in the treatment modalities.
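A minimal sketch of permuted-block randomization of the kind described above is given below; the block size, seed, and function name are illustrative assumptions and not part of the original trial software.

```python
import random

def block_randomize(n_participants, block_size=4, seed=2009):
    """Generate a permuted-block allocation sequence for verum vs. sham acupuncture."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = ["verum"] * (block_size // 2) + ["sham"] * (block_size // 2)
        rng.shuffle(block)                 # allocation stays balanced within each block
        sequence.extend(block)
    return sequence[:n_participants]       # assignments sealed in numbered opaque envelopes

print(block_randomize(12))
```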
2.6. Intervention. The interventions in both groups were performed by physicians with 20 years of acupuncture experience. Several procedural features were shared by the two groups, including the use of disposable sterile stainless-steel needles (0.32 mm × 40 mm), skin disinfection with 75% alcohol, and needle retention for 30 minutes without moxibustion or electrical stimulation. Patients in both groups received 20 sessions of verum/sham acupuncture over 4 weeks. All acupuncture was performed by experienced practitioners after unified training, and periodic checks verified the consistency of the practices in each hospital.
In addition to acupuncture, all enrolled patients received the basic therapies for cerebrovascular disease, including antiplatelet therapy, management of intracranial pressure and blood pressure, neuroprotective agents, treatment of complications, and rehabilitation therapy (positioning, routine rehabilitation training, etc.).
2.6.1. Verum Acupuncture. "Wang's Jiaji" points, selected from Jiaji (EX-B2), are the obligatory points used in the acupuncture group. They are located 0.3 cun lateral to the lower border of the 2nd, 4th, 6th, 8th, 10th, and 12th thoracic vertebrae and the 2nd and 4th lumbar vertebrae. Patients were kept in the lateral position with the hemiplegic limbs upwards. Needles were inserted vertically to a depth of 10-25 mm and manually manipulated by lifting, thrusting, and rotating with uniform reinforcing-reducing techniques to produce the sensation known as "deqi."
2.6.2. Sham Acupuncture. The points used in the sham-acupuncture group are located 0.1 cun lateral to the lower border of the 2nd, 4th, 6th, 8th, 10th, and 12th thoracic vertebrae and the 2nd and 4th lumbar vertebrae. Patients were kept in the lateral position with the hemiplegic limbs upwards. Needles were inserted vertically to a depth of 5 mm and retained for 30 minutes without moxibustion or electrical stimulation and without eliciting a needling sensation.
Outcome Measures. The efficacy of acupuncture for spasticity after stroke was assessed by the following primary and secondary outcome measures, evaluated at baseline and at weeks 2, 4, and 12.
The main outcome measure is as follows.
(1) The modified Ashworth scale (MAS), used to assess muscle tone.
The secondary outcome measures are as follows.
Statistical Analysis
The statistical analysis was performed by the Epidemiological Research Center of the Third Affiliated Hospital of Beijing University. The mean is used to describe the central tendency of continuous variables, and the standard deviation to describe their dispersion. The Kolmogorov-Smirnov method is used to test the normality of continuous variables. Data consistent with a normal distribution are analyzed with the t-test to compare the differences between the therapy group and the control group; nonparametric data are analyzed with the Wilcoxon test. If the outcome variables form 2 × 2 contingency tables, chi-square tests or Fisher exact probability tests are performed. To compare the effect within each group before and after treatment, McNemar's paired chi-square test is used. Statistical analyses are conducted using SPSS version 18.0. The p value cutoff for statistical significance is defined as p < 0.05, and all statistical tests are two-tailed.
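The decision logic of this analysis plan can be summarized in the following Python sketch (using SciPy rather than SPSS; the variable names are illustrative assumptions): continuous outcomes are compared with a t-test if both groups pass a normality check and with a rank-based test otherwise, while 2 × 2 tables are analyzed with a chi-square or Fisher exact test.

```python
import numpy as np
from scipy import stats
from scipy.stats.contingency import expected_freq

def compare_continuous(verum, sham, alpha=0.05):
    """t-test if both groups look normal (Kolmogorov-Smirnov), otherwise rank-sum test."""
    verum, sham = np.asarray(verum, float), np.asarray(sham, float)
    normal = (stats.kstest(stats.zscore(verum), "norm").pvalue > alpha and
              stats.kstest(stats.zscore(sham), "norm").pvalue > alpha)
    if normal:
        return "t-test", stats.ttest_ind(verum, sham).pvalue
    return "Wilcoxon rank-sum", stats.ranksums(verum, sham).pvalue

def compare_categorical(table_2x2):
    """Chi-square test, or Fisher's exact test when expected cell counts are small."""
    table = np.asarray(table_2x2)
    if (expected_freq(table) < 5).any():
        return "Fisher exact", stats.fisher_exact(table)[1]
    return "chi-square", stats.chi2_contingency(table)[1]
```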
Results
From March 2009 to March 2013, 263 cases were included, of which 238 completed the study: 121 cases in the treatment group and 117 cases in the control group. Twenty-five cases dropped out, a dropout rate of 9.50%. (Because enrollment progressed more slowly than expected, Beijing Fangshan Hospital of Traditional Chinese Medicine was added as a third center to include subjects; the researchers involved there received the same training.) Gender and age did not differ statistically between the treatment and control groups, and the time since onset was comparable between the two groups. Baseline scores of MAS, FMA, Barthel index, NIHSS, SSQOL, and MRS did not differ statistically between the treatment and control groups (Tables 1, 2, 3, 4, and 5).
The distribution of MRS ratings differed statistically between the verum-acupuncture and sham-acupuncture groups after 4 weeks of treatment, but not at 2 or 12 weeks (Tables 6, 9, and 13).
The MAS, FMA, Barthel, and SSQOL scores improved significantly compared with the control group after 12 weeks, whereas the NIHSS score showed no statistical difference at 2, 4, or 12 weeks after treatment (Tables 7, 10, and 14).
The between-group differences in the change from baseline of MAS, FMA, Barthel, and SSQOL were statistically significant at 4 and 12 weeks after treatment, whereas the change in NIHSS score from baseline showed no statistical difference (Tables 8, 11, and 15).
Furthermore, after 4 weeks of treatment, the MAS was ≥3 in 37 cases and <3 in 84 cases in the verum-acupuncture group; the rate of higher muscle tension was 44%, below the 83% of the control group. After 12 weeks of treatment, the MAS was ≥3 in 28 cases and <3 in 93 cases in the verum-acupuncture group; the rate of higher muscle tension was 30%, below the 60% of the control group.
Discussion
Spasticity after stroke, especially of the affected limbs, is a common complication that seriously impacts the quality of life of stroke patients, and it is also an important factor limiting their movement. The main purpose of this study is to determine whether acupuncture in the early stage after stroke can effectively relieve spasticity, including reducing the probability of its occurrence and decreasing its severity, and, on this basis, whether it affects the impairment and disability of patients, including the NIHSS, motor function, and quality of life. Acupuncture may decrease the increased spinal motor neuron excitability in the paretic limbs of stroke patients [30]. In our preliminary study, the application of "Wang's Jiaji" points in the acute stage of stroke prevented the onset of spasticity, reduced the degree of spasticity [31], and improved the QOL of stroke patients, and it was more effective against spasticity than conventional acupuncture combined with rehabilitation training or rehabilitation training alone [32]. The effect has also been demonstrated to improve hypertonia in a rat model of spasticity [33], although a more advanced design is needed, and we have not found similar studies of Jiaji (EX-B2). The strengths of this trial are strictly concealed randomization and the successful blinding of patients. All the physicians applying acupuncture treatment in this trial have more than 10 years of clinical experience with acupuncture.
The occurrence of spasticity is highly variable and may arise in the short-, medium-, or long-term poststroke period [34]; these variations in the time of onset undermine efforts to measure spasticity prevalence. Although there is no clear consensus regarding the number of patients who develop spasticity after stroke [34], spasticity has been reported to emerge in the early poststroke stage (5.4 days to 2 ...).
The study found that FMA scores 12 weeks after treatment were higher in the verum-acupuncture group than in the sham-acupuncture control group. The rate of higher muscle tension (MAS ≥ 3) in the verum-acupuncture group decreased markedly at 4 and 12 weeks after treatment, especially at 12 weeks, compared with the sham-acupuncture group. The modified Ashworth scale (MAS) is now commonly used to assess muscle tone, and the Fugl-Meyer Assessment (FMA) is used to assess motor function of the upper and lower limbs. We can therefore conclude that acupuncture at "Wang's Jiaji" points in the early stage of stroke may reduce spasticity and decrease its severity, and that it can effectively promote the improvement of motor function in stroke patients.
The study also found that MBI and SSQOL scores 12 weeks after treatment were better in the verum-acupuncture group than in the sham-acupuncture control group. From this we conclude that early acupuncture at "Wang's Jiaji" points may improve the ability to perform activities of daily living and the quality of life.
Moreover, between-group differences in the change from baseline of MAS, FMA, MBI, and SSQOL were already present at 4 weeks after treatment and were more pronounced at 12 weeks; thus the effect of early acupuncture at "Wang's Jiaji" points in stroke patients appeared by 4 weeks after treatment. Surprisingly, however, the NIHSS score showed no improvement at 2, 4, or 12 weeks after treatment, even compared with baseline.
The degree of limb spasticity is an important factor affecting motor function, and decreasing spasticity can effectively improve it. The recovery of limb function in stroke patients is a long process; at the interview 12 weeks after treatment we found that the lower incidence and milder degree of spasticity in the verum-acupuncture group benefited motor function. The NIHSS score reflects the overall level of neurological impairment in stroke patients, but it contains only two items regarding limb motor function, so improvements in motor function are difficult to capture with the NIHSS. In our study, no beneficial effect on the NIHSS score was found despite the improvement of limb spasticity. "Wang's Jiaji" points come from the experience of Dr. Wang Leting, who worked as an acupuncture doctor for 60 years. He believed that "deqi" is more easily elicited at the position of "Wang's Jiaji" points than at the traditional Jiaji (EX-B2) points. We also believe that "deqi" during acupuncture at "Wang's Jiaji" points contributed to the positive results of this study.
From the viewpoint of modern neurophysiology, the posterior and anterior rami of the spinal nerves and the sympathetic trunk are distributed in the region of the "Wang's Jiaji" acupoints, with each posterior ramus connected by fibers to the adjacent one or two posterior rami above and below it. Needling Wang's Jiaji acupoints may therefore prevent and decrease the occurrence of spasticity by affecting the interactions of the spinal cord and motor neurons, adjusting the spinal stretch reflex, and balancing the function of the motor system, which may regulate limb muscle tension. The study by Fink et al. [36] indicates that needle acupuncture may not be helpful to patients with chronic poststroke spasticity, but it found neurophysiologic evidence for specific acupuncture effects at a spinal (segmental) level involving nociceptive reflex mechanisms. The purpose of our trial was therefore to examine the efficacy of needling unconventional acupoints such as Wang's Jiaji points in preventing and decreasing spasticity after stroke, and its results help supply such evidence. The position of the "Wang's Jiaji" acupoints is prone to produce "deqi"; in particular, the technique and depth of acupuncture stimulation in the verum group elicited "deqi" better than in the sham group. We therefore think that "deqi" during acupuncture treatment is very important and affects the curative effect on limb muscle tension, which benefits stroke patients.
Our study has some limitations. Firstly, the MAS is subjective, and the Barthel index and NIHSS cannot indicate spasticity directly; in future studies we will add objective measures that are more appropriate for evaluating spasticity, such as electrophysiological or biomechanical assessments. Secondly, a longer follow-up period may be necessary to investigate the optimum timing of such acupuncture treatment and to assess the value of repeated courses of acupuncture for patients experiencing spasticity after stroke.
Conclusion
Acupuncture at "Wang's Jiaji" points in the early stroke stage helps to reduce the occurrence and decrease the severity of spasticity after stroke, especially for patients whose stroke occurred within 21 days and with limb muscle strength less than or equal to 3, as evaluated by MAS.
"year": 2014,
"sha1": "6576aa4f9fdf410134b922710bb6e73a26c90435",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ecam/2014/715351.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a342955b3e9fe253d464fe536c46785a367c95de",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Correction of Femoral Acetabular Impingement at the Time of Primary Total Hip Arthroplasty
Background: Primary total hip arthroplasty (THA) is considered one of the most cost-effective and functionally beneficial procedures to treat end-stage coxarthrosis worldwide. However, there is a small percentage of patients who are plagued by residual anterior hip pain and limited hip flexion. One explanation for this problem is bone and soft tissue impingement in the anterior hip region. In the native hip, the problem is described as femoral acetabular impingement (FAI). FAI is a form of developmental dysplasia of the hip (DDH). Not infrequently, these dysplastic acetabula are also retroverted. In primary THA, a retroverted boney acetabulum adversely affects prosthetic hip function. Specifically, when the acetabular cup is inserted in an anteverted position and the native acetabulum is retroverted, the proximal femur will impinge upon the retroverted acetabular bone with flexion and internal rotation. This causes mechanical dysfunction, prosthetic subluxation, and pain. We aptly name this condition prosthetic femoral acetabular impingement (PFAI).
Methods: In this study we address PFAI with an anterior acetabular bone wall reduction (AABWR). In a consecutive series of 426 primary THA's, we prospectively removed all impinging anterior retroverted bone during the THA procedure. Specifically, after final socket preparation and insertion, an AABWR was performed when acetabular bone extended more than 5 mm beyond the prosthetic acetabular cup. All acetabular cups were placed between 25-35 degrees of anteversion. Retroverted acetabular bone extending beyond the acetabular cup was removed along with impinging capsular tissues. All femoral stems were positioned between 15-20 degrees. We hypothesize that with an AABWR, groin pain and hip flexion will be commensurate with that of patients who did not require an AABWR (i.e., non-retroverted acetabulum).
Results: The study group consisted of 426 primary THA's. Three hundred patients (70%) had an AABWR and 126 patients did not require an AABWR. There were 140 females (47%) and 160 males (53%) in the AABWR group and 88 females (70%) and 38 males (30%) in the non-AABWR group. The average amount of bone resected in the AABWR group was 1.32 cm (0.3 cm to 3.4 cm). For females, the average bone resection measured 1.1 cm (0.3 to 2.0 cm). For males, the average bone resection measured 1.53 cm (0.3 cm to 3.4 cm). Harris Hip Scores (HHS) at minimum of 1 year follow-up (range 1 to 11.5 years) averaged 91 (64 to 100) for the entire study. In the AABWR group, HHS averaged 92 (71 to 100). Average hip flexion was 110 degrees (100 to 130 degrees). In the non-AABWR group, HHS averaged 87 (71 to 100). Average flexion was 109 degrees (88 to 125 degrees). In the AABWR group, 12 patients (4%) experienced groin pain symptoms. Ten of these patients rated his/her peak groin pain at a level of 1 (scale 0-4) and the remaining 2 patients rated his/her peak groin pain at a level of 2. As time progressed, 50% of these patients saw their groin pain resolve. In the non-AABWR group, 2 patients (1.6%) experienced groin pain and both patients rated his/her pain at a level of 1.
Discussion: Maximizing hip flexion and function for the active patient undergoing primary THA requires meticulous surgical technique. PFAI may be one reason for unexplained anterior hip pain in the highly active patient that demands higher hip flexion and rotation. Our experience shows that the anterior acetabular rim and part of the anterior column can be removed at the time of primary THA without compromising the THA procedure. The AABWR is now an integral part of our primary THA technique.
Background
Femoral Acetabular Impingement (FAI) causes hip pain when the native femoral neck contacts the acetabular rim in flexion. Impingement is most pronounced with flexion (above 80 degrees) and internal rotation. At the impingement point, soft tissues are damaged either by a pincer effect on the acetabular rim or via a cam effect upon the peripheral acetabular articular surface [1]. FAI more commonly occurs when the native acetabulum is retroverted (Figures 1a & 1b) [2,3]. This is a form of developmental dysplasia of the hip (DDH) that is underappreciated, as upon casual review radiographs of patients with retroversion dysplasia appear relatively normal. Retroversion dysplasia is a major cause of early degenerative arthritis in the middle-aged patient population and can lead to early Total Hip Arthroplasty (THA). Correction of acetabular retroversion with a periacetabular osteotomy in the pre-arthritic hip joint helps reduce pain and improve functional hip flexion [4][5][6].
THA in patients with uncorrected retroversion dysplasia is fraught with pitfalls. First, positioning of the acetabular cup in a manner that follows the native acetabular rim can result in clinical impingement [7]. Following the native acetabular rim is the most commonly accepted method of cup placement when there is normal acetabular anteversion and inclination [8]. However, if the surgeon does not appreciate that the native acetabular socket is retroverted, then placement of the prosthetic cup in this retroverted position will fail to solve the impingement problem that caused the original degenerative process. Additionally, excess femoral pelvic inclination resulting from lumbar hyperlordosis and/or spine fusion can cause the acetabular socket to be "functionally" retroverted [9]. Furthermore, if acetabular retroversion is recognized and the prosthetic acetabular cup is placed in the correct anteversion and inclination, clinical impingement can still occur if the retroverted bone is not removed. This is not an uncommon scenario that we encounter clinically. The retroverted bone impinges upon the femur in hip flexion, causing pain and continued functional debility. We describe this latter scenario as Prosthetic Femoral Acetabular Impingement (PFAI). In the highly active young patient who demands a higher functional hip range, even mild relative retroversion of the acetabulum causes impingement in flexion and internal rotation. Repetitive PFAI can also adversely affect prosthetic bearing wear [5,10,11]. If significant PFAI occurs, repetitive subluxation movements of the total hip bearing can result in abnormal bearing wear patterns. With large diameter alternative bearings, such as ceramic-ceramic, metal-metal, or ceramic-metal, repetitive subluxation can cause stripe wear marks [12][13][14][15].
Figure 1b. Diagram of acetabular retroversion. On the AP radiograph, the anterior rim is in a lower position, whereas the posterior rim is higher. Radiographically, the two rim lines cross, creating the crossover sign. Frequently with acetabular retroversion, the ischial spine is prominently seen. Three dimensionally, the acetabular socket is closed and faces in a posterior direction. This configuration can also be seen with fixed flexed pelvic deformation, secondary to spine disease.
In primary THA, acetabular retroversion can be corrected by reaming the acetabulum to a hemisphere and inserting the prosthetic cup into the "normalized" anteverted position. However, if the retroverted bone still remains, this can cause impingement. In this study, we utilize the surgical technique of removing retroverted acetabular bone at the time of primary THA. We call this maneuver an "Anterior Acetabular Bone Wall Reduction" (AABWR). This study prospectively reviews the clinical results of 426 consecutive primary THA's that utilized a large diameter monolithic acetabular cup. We review the incidence of AABWR and review the clinical results of patients that required this maneuver for correction of native acetabular retroversion. We hypothesize that with corrected acetabular cup placement and an aggressive AABWR, groin pain scores and overall Harris Hip function will be commensurate with that of patients not suffering from native acetabular retroversion.
Materials & Methods
Between August 2006 and June 2017, 455 primary THA procedures were performed at a single institution by the senior author (ejm). The study group included all patients who received a large diameter monolithic all-metal porous coated cementless acetabular cup. Patients who were excluded from the study group included those with traumatic or developmentally acquired acetabular deformities that needed an acetabular cage or a protrusio revision cup with augmentation and/or structural bone graft (29 patients). The study group thus included 426 consecutive primary THA's utilizing a large diameter monolithic all-metal cup.
All THA's utilized a standardized technique. A less invasive postero-lateral approach was used [16]. Patients were secured in the lateral decubitus position using the Hip Grip System (Sun Medical, Redding, CA). Positioning was carefully performed by the operating surgeon (ejm). The anterior-inferior brace was positioned over the pubic symphysis and anterior superior iliac spine. The anterior-superior brace was centered over the xiphoid process. The posterior-inferior brace was centered over S1. The posterior-superior brace was positioned at the mid scapula level. The positioning technique allowed the posterior ilioischial line to be parallel to the length of the operative table. This line was used to help assist in cup positioning (Figure 2b).
With the less invasive postero-lateral approach, the superior one-half of the short external rotators were released from the upper posterior greater trochanter down to the base of the femoral neck. The hip capsule was preserved with transverse incisions made at the acetabular rim and the base of the femoral neck. A longitudinal incision was made along the femoral neck axis.
Acetabular cup preparation was performed with serial reaming in 2 mm increments, starting initially at 41 mm. The first ream was directed medially through the remaining cotyloid pads to the quadrilateral surface. Reaming was then performed in a plane of 25-35 degrees of anteversion and a lateral opening of 40 degrees. Trialing was performed using a metallic hemisphere trial cup 1 mm larger than the last reamer. The trial cup was positioned at an anteversion angle of 25-35 degrees and a lateral opening of 40 degrees. The ilioischial line was used to assist as a reference (parallel to the long axis of the operative table). Any anterior acetabular bone that extended more than 5 mm beyond the acetabular cup was removed with osteotomes. This included the anterior rim, the anterior column, and the lateral portion of the superior ramus. Bone removal was performed as needed to assure flexion clearance up to 110-125 degrees, and to prevent impingement with the femur in flexion and internal rotation. The amount of bone removed (excluding osteophytes) was measured. If greater trochanteric impingement was evident with the combined flexion and internal rotation maneuver, then the anterior portion of the greater trochanter was trimmed with an osteotome to relieve this abutment. When necessary, as much as 20% of the greater trochanteric bone was removed.
The acetabular cup utilized in the series was the Magnum™ Cup (Zimmer-Biomet, Warsaw, IN). The Magnum Cup is a monolithic cup with a metal internal bearing. The cup bearing was a Cobalt-Chromium (CoCr) alloy, treated with a hot isostatic pressure (HIP) technique that optimizes metal density. The carbide content was 2% by volume. The outer diameter was coated with a titanium plasma spray (applied as the cup was kept cool). The Magnum Cup also has four radial fins on its outer diameter. The cup was inserted with a flat face insertion device. Anteversion was selected between 25-35 degrees. The lateral opening was selected at 40 degrees. The cup was mated with either a Magnum CoCr head (Zimmer-Biomet, Warsaw, IN) or a Dual Articulation (DA) bearing (Zimmer-Biomet, Warsaw, IN). The inner head of the DA bearing was always a 28 mm Delta ceramic head (CeramTec, Plochingen, Germany) with a titanium sleeve inserted into the ceramic head for neck length adjustments. We used two femoral stem designs in this study. Both were titanium alloy stems with proximal porous plasma spray coating. Depending on the boney architecture, we used either the Mallory-Head® stem (Zimmer-Biomet, Warsaw, IN) or the Taperloc® stem (Zimmer-Biomet, Warsaw, IN).
Figure 2c. Intra-operative picture of trial cup placement (viewed from the posterolateral approach). The cobra retractor (top of picture) is located at the 11 o'clock position of the left acetabulum. The retractor on the left side of the picture is placed just under the acetabular teardrop. The acetabular cup has been placed at 30° of anteversion, relative to the ilioischial line. This picture shows the retroverted acetabular bone anteriorly (arrows). All of this bone is removed with the "anterior hip decompression" maneuver. Inferiorly, the retroverted bone blends into the superior ramus and with the anterior bone wall reduction, some of the superior ramus is removed with the decompression maneuver.
A careful, meticulous closure was performed. The hip capsule was closed as a separate layer. In all cases, the hip capsule was closed from the superior acetabulum down to the prosthetic femoral neck. The released proximal short external rotators were repaired to the posterior greater trochanter with sutures placed into the bone. All soft tissues were anatomically closed as best as possible.
All surgeries were performed with body exhaust suits (Stryker Corporation, Kalamazoo, MI) in non-laminar flow rooms. Anesthesia consisted of a general anesthetic combined with epidural anesthesia with low dose intrathecal preservative free morphine sulfate (0.1 mg). Wound closure was performed without drains. Intravenous antibiotics were administered preoperatively and continued for 24 hours, adhering to SCIP guidelines [17].
Post-operatively, patients were kept to 50% weight bearing for six weeks then progressed to full weight bearing. Patients were examined at 6 weeks, 3 months, 1 year, and annually thereafter. Hip review assessment was recorded with Harris Hip Scoring. Radiographs were taken at 6 weeks, 1 year, and every year thereafter. All charts were reviewed for complications and implant failures. Complications were defined as requiring re-operations for any reason. Failures were defined as requiring implant removal for any reason or declaring the hip joint clinically infected based upon the International Consensus Meeting on Musculoskeletal Infection [18].
Statistical analysis was performed using IBM SPSS Statistics 25 software. Spearman correlation analysis was utilized to check the presence of statistical correlation between researched variables. In order to verify the statistical significance between the two independent groups, the Mann-Whitney U test was used. Chi-square testing made it possible to compare if groups are equinumerous and if there is a significant relationship between nominal variables. For statistical significance a level of p < 0.05 was adopted.
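As an illustration of the categorical comparison, the following Python sketch rebuilds the groin pain contingency table from the counts reported in the Results (no pain / level 1 / level 2 for the AABWR and non-AABWR groups) and applies the chi-square test; individual HHS and flexion values are not available in the paper, so the Mann-Whitney U comparison is only indicated in a comment.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Groin pain severity (none / level 1 / level 2), counts taken from the Results section
groin_pain = np.array([[288, 10, 2],     # AABWR group (n = 300)
                       [124,  2, 0]])    # non-AABWR group (n = 126)
chi2, p, dof, _ = chi2_contingency(groin_pain)
print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.2f}")   # approx. the reported 1.85, p > 0.05

# With per-patient data, HHS and flexion would be compared with
# scipy.stats.mannwhitneyu(hhs_aabwr, hhs_non_aabwr, alternative="two-sided").
```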
Results
In this study group of 426 THA's, the average age was 64-years-old (21 to 93). There were 228 females and 198 males. Fifteen patients are now deceased. Of these 15 patients, there were no complications or reoperations. In the first 5.5 years of the study, we mated the Magnum cup with a Magnum CoCr head. Over these 5.5 years, 219 CoCr heads were used. After the Magnum head was discontinued in North America, we mated the Magnum cup with a DA bearing. Over the remaining 5.5 years of the study, we used 207 DA bearings. The study was comprised of two cohorts, one that underwent an anterior acetabular bone wall reduction (AABWR group) and one that did not (non-AABWR group). In the AABWR group, 267 of 300 femoral stems (89%) were of the lateral offset design (145 Mallory-Head and 122 Taperloc stems). In the non-AABWR group, 104 of 126 femoral stems (83%) were a lateral offset design (62 Mallory-Head and 42 Taperloc stems). The acetabular cup size ranged from 44 mm to 62 mm. The most common cup size utilized was 50 mm (93 cups).
In the entire study, 61% of females and 81% of males required an AABWR. For females, the average bone resection measured 1.1 cm (0.3 to 2.0 cm). For males, the average bone resection measured 1.5 cm (0.3 cm to 3.4 cm). Younger patients (< 55-years-old) required an AABWR 69% of the time (56 of 81 patients). Older patients (≥ 55-years-old) required an AABWR 71% of the time (244 of 345 patients). Statistical analysis revealed that males were more likely to require an AABWR than females (p < 0.001), while older patients are just as likely as younger patients to require an AABWR (p = 0.778).
In the AABWR group, Harris Hip Scores (HHS) averaged 92 (71 to 100). Average hip flexion was 110 degrees (100 to 130 degrees). In the non-AABWR group, HHS averaged 87 (71 to 100). Average flexion was 109 degrees (88 to 125 degrees). In the AABWR group, 12 patients (4%) experienced groin pain symptoms. On a scale from 0 to 4, the peak groin pain rating was 1 in 10 of the 12 patients and the remaining 2 patients rated his/her peak groin pain at a 2. As time progressed, 50% of these patients saw their groin pain resolve. In the non-AABWR group, 2 patients (1.6%) experienced groin pain and both patients rated his/her pain at a 1. Mann-Whitney U testing demonstrated that patients who had an AABWR showed a statistically higher HHS score (U = 9344, p < 0.001) and a statistically greater hip flexion range (U = 16126.5; p < 0.05) compared to the non-AABWR group. With Chi-square testing, the incidence of groin pain between the two groups was not statistically different (χ2(2) = 1.85; p > 0.05).
In this series there have been 5 dislocations (1%). All dislocations occurred during the postoperative recovery period (i.e., within the first 6 weeks). Of these dislocations, 3 were posterior and 2 were anterior. All 3 posterior dislocations occurred in the non-AABWR group (2.4%), while both anterior dislocations occurred in the AABWR group (0.7%). One posterior dislocation in the non-AABWR group required a reoperation to change the modular head to a longer length. All other dislocations were treated with a closed reduction and Spica bracing for 6 weeks. In these 4 cases there were no further dislocations. Statistical analysis demonstrated that there was no significant correlation between the two groups in terms of dislocation rate (p > 0.05).
There were 13 failures (3%) in the study group. The reasons for failure are listed in Table 1. The most common reasons were for a periprosthetic femur fracture after a fall, and conversion to dual articulation from a metal-to-metal hip bearing due to metal-related synovitis. There were 4 femoral neck cracks noted during femoral stem insertion. None required cabling or stem exchange. Despite having used 219 metal-to-metal hip bearings, there have been only 4 reoperations (1.8%) for metal-related synovitis and pain. In these cases we did not see any large pseudotumors (defined as greater than 4 cm in diameter). Of the 13 failures, 9 occurred in the AABWR group and 4 in the non-AABWR group. Statistical analysis demonstrated that there was no significant correlation between the two groups in terms of failure rate (p > 0.05).
Discussion
In THA, positioning of the acetabular cup is technique-dependent. Many surgeons espouse placing the prosthetic cup in a manner that follows the native acetabular rim. Others suggest cup placement at a set angle (10 to 30 degrees of anteversion) based upon approach and surgeon philosophy [7]. Acetabular cup positioning is more important in patients who require higher functional hip range and performance. If a patient requires high hip flexion, the placement of the acetabular cup in an orientation that follows the native acetabular rim can result in adverse consequences. Specifically, if the native hip is dysplastically retroverted, then the prosthetic cup will maintain this retroverted orientation [10]. With high hip flexion (beyond 90-95 degrees), impingement will occur, resulting in clinical anterior hip pain, repetitive subluxation, or possibly recurrent dislocation [19]. If a large diameter bearing (LDB) is placed, the risk of dislocation is reduced, but repetitive subluxation is still likely to occur [20,21]. This will also cause abnormal bearing wear. This is of special significance with an alternative bearing, such as metal-metal, ceramic-ceramic, or metal-ceramic, where repetitive subluxation creates a stripe wear phenomenon [12,14,15]. Furthermore, with an adverse wear scenario, the accelerated wear debris will result in adverse wear debris phenomena including osteolysis, pseudotumor formation, bearing fracture, and/or implant loosening [7][8][9][19][20][21][22][23][24].
To optimize functional hip range and minimize repetitive subluxation, cup positioning should be centered within the patient's anticipated functional hip range. We have found that by placing the acetabular cup at a predetermined angle of 25 to 35 degrees of anteversion (relative to the ilioischial line) in every patient, we consistently center the acetabular cup to allow acceptably higher hip flexion, yet avoid hyperextension impingement with external rotation. This clinical study was conducted in an attempt to reduce the sequela of PFAI in primary THA. We chose to be aggressive in removing impinging anterior acetabular bone in a manner that would provide salutary improvement in groin pain and improved hip flexion without incurring an increased complication rate. By performing an AABWR during primary THA, we have so far enjoyed a relatively satisfying course with this study. It is our observation that males are more likely to require an AABWR than females (p-value < 0.001), while older patients (≥55) are just as likely as younger patients (<55) to need an AABWR (p-value = 0.778).
In this series, the incidence of unexplained groin and hip pain is low. We believe that by performing an AABWR, we restored functional hip flexion to a level that is commensurate with that of patients who do not have acetabular retroversion. Thus, both groups enjoy relatively pain-free activities of daily living. Patients who have challenged their hips with increased functional activities have not experienced any adverse consequences at an average follow-up of 5.3 years. Flexion range was also a priority in this study, to allow for improved functional range for patients who demand a higher activity profile. Increased functional range was achieved by setting combined implant anteversion between 40 and 55 degrees. By additionally removing all anterior bone extending beyond the prosthetic acetabular socket, the hip can flex further. This is borne out with our hip range measurements. The AABWR group showed a statistically greater hip flexion range compared to those who did not require an AABWR. This observation was unexpected, as we had hypothesized flexion in the two groups would be commensurate, but nonetheless pleasing. In addition, by reducing the anterior impinging bone, overall HHS scores were distinctly better in the AABWR group. We attribute the improved scores mainly to overall improvement in pain reduction, as the HHS is highly weighted towards the patient pain score. By replacing the arthritic joint and removing anterior impinging bone, the pain score is optimized.
In this review, we would be remiss in not discussing our favorable findings regarding the large diameter Magnum cup. Our aseptic loosening rate was low (<1%). The number of revisions due to reactive metal wear was also low (1.8%). We attribute our favorable results to three main factors: 1) implant mating, 2) implant design, and 3) implant biomaterial. Repetitive subclinical subluxation (as opposed to dislocation) is a problem with LDB cups that are not well-mated [24]. With the less invasive posterolateral approach, we chose a "combined" hip version of 40 to 55 degrees [8]. On the acetabular side, our cups were placed with an anteversion of 25 to 35 degrees with a theta angle no greater than 40 degrees. Femoral anteversion was chosen to be between 15 and 20 degrees. This combined anteversion allows for functional hip flexion while minimizing head subluxation and cup edge scratching, which can lead to excess metal wear. The Magnum cup is a pure 180-degree hemisphere with insertion tabs on the extra-articular surface of the cup. This minimizes any bearing overload points. Additionally, the Magnum cup has 4 anti-rotation fins that help with initial stability. Finally, the CoCr alloy used for the bearing was treated with a hot isostatic pressure (HIP) technique that optimizes metal density, guarding against excess metal wear. This assumes first that the bearing couple is well-mated.
Although the Magnum cup continues to be sold, the large diameter metal head was discontinued in 2011. Since that time, we continue to use the Magnum cup as our first line choice in all primary THA's. We now utilize the dual articulation bearing, which thus far, at short- to intermediate-term follow-up (6 years), provides similar functional efficiency for our THA patients.
There are limitations to this study. First, the study was not randomized. A more definitive study would have exclusively identified patients with acetabular retroversion and randomized this group into two cohorts, those receiving an AABWR and those not. This study design was considered, but upon initial interrogation of study enrollees, we encountered a near unanimous opposition to this methodology. Secondly, this study used a large diameter cup and head construct. We believe this may have artificially reduced our complication rate. Had we utilized a traditional 32 or 36 mm head with a high molecular weight polyethylene socket, we may have incurred a higher dislocation rate. Thus, our AABWR maneuver cannot be extrapolated to provide similar efficacy with 32 or 36 mm head constructs. We recommend a similar study should be conducted in patients with 32 and/or 36 mm heads.
In essence, we performed an aggressive periacetabular "decompression" with removal of all bone extending beyond the prosthetic acetabular cup and with the removal of hypertrophic capsular tissue. We believe this maneuver complements the pain reducing effect of the THA procedure by decreasing the chance of mechanical impingement that can be a significant pain generator. This was borne out in our low groin pain scores. We further submit that had we not performed this maneuver, the hip flexion range in the AABWR group would have been significantly lower. Patients who suffer from stiff hips are limited not only by the arthritic process, but also by retroverted anterior acetabular bone that mechanically blocks flexion. We did worry that an over-aggressive anterior bone resection could weaken the pelvic ring, resulting in fracture or long-term pain, but the groin pain scores seen in the AABWR group improved over time as bone remodeling progressed. Additionally, we did not have any clinical cases of pelvic ring fractures. We believe the AABWR maneuver to be a safe and effective technique.
If a LDB THA is chosen for hip reconstruction, we advocate careful cup and stem positioning. The combined anteversion of 40 to 55 degrees chosen in this series appears to be acceptable for high flexion and combined hyperextension-extension-external rotation range. Based on our experience, we believe the removal of all retroverted acetabular bone extending more than 5 mm beyond the prosthetic acetabular cup is a critical step when using this bearing construct. The combination of careful cup/stem positioning along with an AABWR will minimize lever impingement. Our intermediate-term results in this study are encouraging. We hope that in the long-term this will have the salutary effect of minimizing abnormal bearing wear patterns. This series requires long-term follow-up to validate this claim. In this series we are also monitoring serum cobalt and chromium ion levels in our metal-metal bearing patients. Lastly, we advise against any cup additions (e.g., elevated posterior hoods) that would reduce implant range of motion. Hip stability should be obtained with careful selection of implant positioning, stem offset, and intraoperative trialing using a hemisphere cup without additions. Our experience shows that the anterior acetabular rim and part of the anterior column can be removed at the time of primary THA without compromising the THA procedure. The AABWR is now an integral part of our primary THA technique.
Figures
Figures 1a-1b. Diagrams of the anteroposterior view of the right hip demonstrating retroversion dysplasia.
Figure 1a. Diagram of acetabulum on anteroposterior radiograph. In a normal acetabulum, the socket is anteverted. On the AP radiograph, the anterior rim is above the posterior rim. Three dimensionally, the acetabular socket is open and faces in an anterior direction.
Figures
Figures 2a-2g. Case example of anterior acetabular bone wall reduction in a 69-year-old male (BMI 32) with end-stage arthritis of the left hip.
Figure 2a. Anteroposterior radiograph showing end-stage arthritis of the left hip. This patient also shows radiographic evidence of shallow socket dysplasia. Intra-operatively, after reaming the acetabulum to the native hip center, the acetabular socket was demonstrably retroverted.
Figure 2d. Photograph of resected anterior acetabular bone. We resected the anterior acetabular rim and part of the anterior column. At the maximum retroverted position, 2.2 cm of bone was removed. Technically, this bone was removed with 1.5 cm straight and curved osteotomes.
Figure 2e. Intra-operative photograph of the THA construct. In this case a Dual Articulation bearing was mated to the Magnum cup. The picture is taken with the hip in flexion of approximately 45 degrees and the femur internally rotated approximately 40 degrees.
Figure 2f. Intra-operative photograph showing hip stability after the anterior hip decompression procedure. In this picture the patient's head is located to the right and the hip is flexed approximately 40 degrees. Notice that with 75 degrees of internal rotation, the hip still remains stable.
Figure 2g. Post-operative radiograph in the recovery room. The pelvis is mildly externally rotated toward the left. The acetabular theta angle measures 32 degrees. Notice in the radiograph the retroversion of the native right hip in which the ischial spine is visible and there is a subtle crossover sign (black dot).
"year": 2018,
"sha1": "d4b412ecf24820078b764f5cee90ffb2628756b0",
"oa_license": "CCBYNC",
"oa_url": "https://reconstructivereview.org/ojs/index.php/rr/article/download/206/260",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d4b412ecf24820078b764f5cee90ffb2628756b0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Soil Solution Analysis With Untargeted GC–MS—A Case Study With Different Lysimeter Types
Surface-sourced organic compounds in infiltrating waters and percolates are transformed during their belowground passage. Biotic and abiotic processes thereby lead to continuously changing chemical environments in subsurface compartments. The investigation of such transformations of organic compounds aims at tracing subsurface fluxes as well as biotic and abiotic activity. To collect samples of soil solution, different kinds of lysimeters are available, ranging from simple free-draining devices that sample water based on gravimetric flow to tension lysimeters that allow approximating natural hydraulic conditions. Protocols for untargeted analytical profiling of organic soil solution constituents are scarce. We report here a solid phase extraction followed by GC–MS analysis, utilizing two long-term sampling devices in the Hainich Critical Zone Exploratory in Thuringia, Germany. In addition, we introduce a new lysimeter constructed exclusively from inert materials that allows samples to be obtained with low background signals in GC–MS. Polyvinylchloride (PVC)-based lysimeters introduce substantial background signals from plasticizers. We show how signals from these contaminants can be reduced during data analysis using chemometric background removal. Applying multivariate statistics for data analysis, we demonstrate that several sugars, fatty acids and phenolic acids can be monitored at the topsoil-subsoil boundary and even beyond, via an untargeted analytical approach. Statistical tools facilitated the detection of differences in chemical signatures at three different land use sites. Data mining methods for metabolomics led to the identification of 3-carboxyphenylalanine as a marker for a pasture site. The combined approach is suitable for the collection and extraction of topsoil and subsoil solution for untargeted metabolomics under near-natural flow conditions.
INTRODUCTION
Mobile organic compounds in surface and subsurface compartments, also called dissolved organic matter (DOM), are valuable markers for the monitoring of water fluxes, water quality, and pollution (Bianchi and Canuel, 2011). These surface signals can be altered or transformed but are hard to trace within the Critical Zone (Küsel et al., 2016). The main groundwater recharge mechanism is the infiltration of precipitation and percolation of the seepage solution through soils (Freeze, 1969). Previous studies on DOM focused mainly on deposition, contamination, migration, and transformation of pollutants, such as polycyclic aromatic hydrocarbons, dioxins or insecticides (Frimmel et al., 2002; Haarstad et al., 2012). In contrast, natural organic signatures in uncontaminated environments are rather poorly understood, although there is great interest in quantifying such carbon compounds in soil solution, particularly to elucidate the formation and breakdown of organic matter in groundwater. Due to the low concentrations and high chemical diversity of such compounds, untargeted sampling and analysis is highly challenging. Consequently, most existing methods focus on the targeted quantification of specific compounds with elaborate compound-specific extraction steps (Fenoll et al., 2011). Untargeted approaches are a growing field for DOM analysis. The majority of these methods focus on compound classes (Brock et al., 2020; Ye et al., 2020) or calculate sum formulas for more precise data annotation (Thieme et al., 2019). Publications covering compound identification are still underrepresented, despite the fact that they have the potential to give novel insights into complex environmental processes including microbial and abiotic transformations (Bundy et al., 2009; Leyva et al., 2020; Withers et al., 2020). Metabolomics techniques have now matured sufficiently to serve as a tool for monitoring environmental samples (Garcia-Sevillano et al., 2015). Progress has been made in the high-resolution Fourier-transform mass spectrometry-based characterization of metabolic patterns in groundwater (Brown et al., 2005; Tautenhahn et al., 2012; Roth et al., 2015). These studies provide information on the elemental composition of analytes and allow observed patterns to be correlated with the prevalence of substance classes. Another powerful approach is the use of hyphenated techniques, in which chromatographic methods are coupled to mass spectrometry (MS), allowing the sensitive detection and quantification of a broad range of metabolites. In both approaches it is, however, a challenge to spot relevant peaks from environmental samples in the presence of dominant contaminants. The availability of elaborate algorithms to filter out signals of contaminants allows tracing of biotic and abiotic signatures even in dilute matrices. Such background-corrected data sets can be the basis for comparative metabolomics as a tool to spot relevant differences between sample groups (Kuhlisch and Pohnert, 2015). Using computer-assisted statistical evaluation of data sets as well as databases and analyses of fragmentation patterns, relevant regulated compounds can be identified (Alonso et al., 2015). Such untargeted approaches require a rather unselective but reproducible and contamination-poor sampling, which is problematic in the case of soil solution.
In soil sciences, percolate-collecting devices are powerful tools to collect and to quantify the water passing through the soil (Robertson et al., 1999; Weihermüller et al., 2007; Singh et al., 2018). Several studies have employed devices for water extraction from soil in general (Siemens and Kaupenjohann, 2004), for the analysis of sum parameters such as dissolved organic matter, or for the targeted analysis of specific compounds (Winton and Weber, 1996; Fischer et al., 2003; Lloyd et al., 2012; Olofsson et al., 2014). Even the term lysimeter does not seem to be clearly defined; such devices have also been used to quantify the export of suspended particulates, DOM, and hydrophobic organic compounds (Totsche et al., 2007; Dibbern et al., 2014) or to trace the carbon flow in the below-ground food web (Kramer et al., 2012; Malá et al., 2013). A challenging aspect is that water-collecting devices are usually constructed to address (hydro)geological questions rather than the needs of highly sensitive chemical analysis (Siemens and Kaupenjohann, 2003). Such devices often produce substantial background signals if the samples are used for metabolomics analyses. Here we present the development and evaluation of different lysimeters for water collection and subsequent sample preparation, with the aim of establishing a robust sampling method that enables comparative investigations of organic signatures using powerful algorithms developed for metabolomics. We use a GC-MS based approach for separation and detection that focuses on comparably small and mobile compounds; however, the workflow can easily be adapted to liquid chromatography/mass spectrometry (LC-MS) as well. Particular strengths of GC-MS are 1) high chromatographic separation performance, and thus low competitive effects such as ion suppression, and a more facile association of signals in comparative metabolomics compared to LC-MS; 2) EI ionization, as employed by us, is a more universal ionization technique compared to ESI, which relies on protonation or deprotonation sites. Intrinsic fragmentation allows the use of common databases such as NIST, and fragmentation theory also supports the assignment of unknown compounds. For seepage water, where many compounds do not match database entries, we therefore consider the GC-MS approach well suited to dealing with unknown metabolites and to enabling follow-up identification studies.
The introduced workflow permits the identification of marker molecules characteristic of land use that can be traced below ground. It thereby opens possibilities to monitor fluxes as well as (microbial) transformations. We introduce a lysimeter that reduces background signals of contaminants to a minimum, thereby facilitating easy and reliable analysis of the analytical data sets. Another focus of this study is the chemoinformatic elimination of background signals caused by plasticizers or other contaminants introduced during sampling and workup. The optimized workflow is applied for monitoring differences in the organic signatures of water samples collected at the topsoil-subsoil boundary at three different land-use sites.
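A minimal sketch of such a blank-based background removal is shown below; the feature table, column names, and the tenfold threshold are illustrative assumptions rather than the exact parameters used in this study.

```python
import numpy as np
import pandas as pd

def remove_background(features, sample_cols, blank_cols, fold=10):
    """Keep a GC-MS feature only if its mean sample intensity exceeds `fold` times its
    mean intensity in procedural blanks (plasticizer-type signals typically fail this)."""
    sample_mean = features[sample_cols].mean(axis=1)
    blank_mean = features[blank_cols].mean(axis=1).replace(0, np.nan)
    keep = (sample_mean / blank_mean).fillna(np.inf) >= fold   # absent from blanks -> keep
    return features.loc[keep]

# Example with made-up intensities for three features
table = pd.DataFrame({
    "sample_1": [1e6, 5e4, 2e5],
    "sample_2": [8e5, 6e4, 3e5],
    "blank_1":  [1e3, 5e4, 0.0],
    "blank_2":  [2e3, 7e4, 0.0],
}, index=["sugar-like", "plasticizer-like", "phenolic-like"])
print(remove_background(table, ["sample_1", "sample_2"], ["blank_1", "blank_2"]))
```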
Sampling Sites
For method development, soil solution was collected at closely neighboring locations at long-term monitoring sites of the Hainich Critical Zone Exploratory (CZE), Thuringia, central Germany (Figure 1) (Küsel et al., 2016). The bedrock at the eastern hillslope of the Hainich low-mountain range is composed of alternating sequences of limestones and mudstones of Upper Muschelkalk (subgroup of Germanic Triassic) formations (Lehmann et al., 2020). Major soil types are cambisols, luvisols, and chromic cambisols that formed from the limestone-mudstone alternations and loess derivates as parent materials (Kohlhepp et al., 2017). Sampling took place at the monitoring plot transect along the northern border of the Hainich National Park (51.105508 N, 10.407190 E, 440 m above sea level). The different sites used in this study comprise the land uses: 1) managed forest of mainly European beech (Fagus sylvatica L.) with dominant cambisol soil type, 2) agricultural site with cropland used for wheat, corn, and canola, also dominated by a cambisol soil type, and 3) pasture site, which is characterized by luvisols and chromic cambisols (Kohlhepp et al., 2017).
Lysimeter Set-Up
Three different lysimeters were evaluated for the collection of soil solution exiting the topsoil horizon (Ah). The lysimeters were installed between October 2014 and May 2015, and both regular sampling (every fortnight) and event-based sampling, following events like thunderstorms, were performed for this analysis from October 2015 to January 2016. Table 1 lists the specifics of the respective instruments, which are shown in Figure 2. Supplementary Table S1 lists the replicates utilized within this study as well as the sampling places and times.
Type I Lysimeter
This lysimeter consists of a polyvinylchloride (PVC) cylinder (diameter 19.5 cm) and a concave bottom plate (Figure 2). The cylinder is separated by a punch plate into a bottom part, where the solution is collected gravimetrically and stored, and a top part with the forest floor/Ah soil monolith. The punch plate is perforated by 97 holes (each 10 mm in diameter) and covered by two polyethylene (PE) nets (mesh size 0.5 mm) that prevent soil material from dropping into the collected water but allow the passage of free-draining soil solution. The solution was sampled biweekly from the bottom part by a pump via a PE outlet tube using reduced pressure and a Woulfe bottle for sample collection.
The samples for the blank signal were acquired after assembly of the lysimeter in the laboratory. First, this set-up was carefully rinsed with bidistilled water and then bidistilled water was introduced and left for one week at 20°C before sampling.
Installation
The zero-tension lysimeters were installed in July 2014 underneath the topsoil horizon so that the punch-plate of the lysimeter was covered with ca. 4 cm soil. Care was taken that the soil and the overlaying forest floor were not substantially disturbed. For lysimeter installation, a soil monolith containing the Ah soil material and the overlying forest floor layer was cut out undisturbed using a steel cylinder (diameter 19.5 cm). Then, the soil column was carefully transferred from the steel cylinder onto the top of the net-covered punch plate of the lysimeter. The resulting soil pit was slightly enlarged and the lysimeter together with the soil monolith was installed and adjusted to the surrounding surface level. The open space between the lysimeter and the soil wall was refilled with soil material.
Type II Lysimeter
This lysimeter was prepared from a commercially available polyether ether ketone (PEEK) block with dimensions of 150 × 245 × 40 mm. A cavity of 130 × 225 × 30 mm is milled into this block (Figure 2). The set-up is covered by a 0.6 mm thick titanium grade 1 (material number 3.7025) plate perforated with 252 holes, each with a diameter of 2 mm. These materials were selected for their minimal potential to support biological activity on their surfaces, in order to minimize further sample modification (Zhang et al., 2014). To collect the water, a 3/8-28 thread adapter for standard 1/8 inch HPLC-PTFE tubing is attached. The whole apparatus was first rinsed three times with bidistilled water in the laboratory before being installed. Soil solution was sampled from the lysimeter by suction of the collected water using a syringe. One replicate at a time was collected, and extracts were measured in triplicate with GC-MS.
Installation
The lysimeter was installed in March 2014. As described for the Type I device, the installation was done by carefully cutting and lifting the soil with a sharp spade, removing additional soil under the lysimeter, and placing the lysimeter parallel to the surface. Water and control sampling followed the protocol described for lysimeter type I.
Type III Lysimeter
The tension-supported Type III lysimeter was specifically constructed to representatively and quantitatively sample the seepage with dissolved, colloidal, and particulate mobile components in undisturbed soils (similar set-up: Zhang et al., 2018). We used commercially available lysimeters that were filled with glass beads (all from UMS GmbH, München, Germany), circular in geometry and made of stainless steel (d = 0.3 m; h = 0.14 m). Hydraulic contact of the lysimeter with the soil was mediated by a porous bed made of silica beads (∼2 mm diameter, Sigmund Lindner GmbH, Warmensteinach, Germany) and supported by a 1 cm thick silicon carbide porous plate (pore size: 10 µm) at the lysimeter bottom. Prior to installation, the system was rinsed with bidistilled water. A controlled suction is applied to the lysimeter porous plate, regulated according to the actual matric potential measured via a tensiometer (T8, UMS) at the depth of lysimeter installation (30 cm). The suction is applied with a battery-powered vacuum pump connected to a suction control unit. This unit is connected to the lysimeter via a Woulfe bottle that collects the soil solution from the overburden undisturbed soil while at the same time applying the suction to the lysimeter.
Installation
Type III lysimeters were installed in duplicate at a depth of 30 cm (topsoil-subsoil boundary) below the surface at all three land-use sites. In addition, a set of duplicates was installed at a depth of 60 cm (subsoil) at the same sites. In the forest, the lysimeters were installed in October 2014; at the cropland and pasture sites, in May 2015. To install a type III lysimeter, a soil pit (L × W × H: 2 × 1 × 1 m) was trenched and faced. From this pit, a vertical tray (L × W × H: 0.4 × 0.4 × 0.3 m) was excavated with its top ceiling located at the desired depth for seepage water collection. In this way, the overburden soil remained undisturbed. Prior to installation of the lysimeter, the tray bottom and top were carefully leveled. The lysimeter was then placed in the tray and pressed against the ceiling with the load compensation unit (Lehmann et al., 2020).
Sample Preparation
Lysimeter samples were kept at 0-5°C for transport and handled immediately in the lab. Hydrophilic-lipophilic balanced solid phase extraction (SPE) was carried out using Strata X 500 mg, 6 ml amide-modified polystyrene resin cartridges (Phenomenex, Aschaffenburg, Germany) that were conditioned by passing 5 ml methanol followed by 5 ml water. For extraction, a 50 ml syringe without plunger was attached to the Luer cone of a Swinnex filter holder (Ø 47 mm, Roth, Germany) equipped with a GF/C filter (VWR, Germany, Ø 47 mm, 1 µm pore size, glass fiber). The filter holder was connected with a male-male Luer adaptor to a tapered SPE adaptor, which was placed onto the cartridge. After 50 ml of seepage water sample had passed through the cartridge, 5 ml of bidistilled water was used to remove salts, followed by applying a vacuum to pass air through the cartridge for 5 min. Elution was carried out by gravity with 5 ml of a 1:1 (v/v) mixture of methanol and acetonitrile. The samples were spiked with the internal standard ribitol (5 µL of a 40 µM aqueous solution) and dried in a nitrogen stream, followed by drying overnight in vacuum. Prior to adding ribitol, we verified that natural, unspiked samples do not contain detectable traces of ribitol: a doubly concentrated sample was injected under conditions identical to those described above, and no traces of ribitol were detected (Supplementary Figures 2-5). The samples were dissolved in pyridine (20 µL) and derivatized with N,O-bis(trimethylsilyl)trifluoroacetamide (BSTFA) (20 µL) at 60°C for 1 h. GC-MS analysis was conducted immediately after derivatization.
GC-MS Measurement
Gas-chromatographic analysis was executed on a Thermo (Bremen, Germany) Trace 1310 gas chromatograph equipped with a TriPlus RSH autosampler. A Thermo TSQ 8000 electron impact (EI) triple quadrupole mass spectrometer was used for detection; however, commonly available single quadrupole instruments would suffice. Separation was performed on a Thermo TG-5SILMS column with the following dimensions: length 30 m, 0.25 mm inner diameter, and 0.25 μm film thickness. The column was operated with helium carrier gas using a PTV injector with a column flow of 1.2 ml min−1 and splitless (1 min) injection. After an initial 60°C, the injector temperature was raised to 320°C at a rate of 14.5°C s−1, held for 2 min, and the injector was cleaned for a further 5 min at 350°C with a split flow of 50 ml min−1. After the cleaning time, the split flow was set to 20 ml min−1. The injector syringe was cleaned twice with 5 µL n-hexane and rinsed with 1 µL sample before injection. After injection, the syringe was cleaned five times with ethyl acetate and five times with n-hexane (5 µL each). The GC column oven was held at 100°C for 1 min, and the temperature was subsequently raised to 320°C at 5°C min−1. This temperature was held for 3 min before cooling and re-equilibration. The mass spectrometer recording started at 10 min measurement time, monitoring the mass range between 50 and 650 m/z in EI+ (70 eV) mode. The MS transfer line and the ion source temperature were set to 300°C. Peak detection and integration were carried out using the software Thermo TraceFinder EFS 3.1. The retention time window was set to 30 s, and the genesis peak detection algorithm was selected. The relative amounts of the monitored substances were evaluated in relation to the internal standard (ribitol) by normalization of the integrals. An additional normalization to the sum parameter DOM (Sysi-Aho et al., 2007) did not substantially affect our analysis (data not shown) and was not pursued further.
Procedures for the identification of contaminants and corrections for signals caused by such compounds found in the blank are described below in the Results and Discussion section. Peaks used for quantification and confirmation are listed in Supplementary Table S2. Each replicate sample was measured three times using a randomized sample list. The quantification was carried out using a quantification ion and two or three confirmation ions. All relative quantifications were normalized to the internal standard ribitol. These values were additionally normalized for plotting the relative intensities by setting the most intense integral of each peak considered to a value of 1.00.
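For illustration, this two-step normalization can be summarized in a few lines. The following Python sketch is a minimal example only; the table layout, column names, and values are hypothetical and do not stem from the original data set.

```python
# Minimal sketch of the two-step relative quantification used for plotting:
# (1) divide each peak integral by the ribitol (internal standard) integral of the
# same run, (2) scale each compound so that its largest value across samples is 1.00.
# Column names and values are hypothetical placeholders.

import pandas as pd

def relative_intensities(peak_areas: pd.DataFrame, ribitol: pd.Series) -> pd.DataFrame:
    """peak_areas: rows = samples, columns = compounds; ribitol: IS area per sample."""
    normalized = peak_areas.div(ribitol, axis=0)           # step 1: IS normalization
    return normalized.div(normalized.max(axis=0), axis=1)  # step 2: max of each compound -> 1.00

areas = pd.DataFrame(
    {"palmitic_acid": [1200.0, 800.0, 300.0], "glucose": [90.0, 450.0, 60.0]},
    index=["forest", "agriculture", "pasture"],
)
ribitol_area = pd.Series([500.0, 400.0, 450.0], index=areas.index)
print(relative_intensities(areas, ribitol_area).round(2))
```

Dividing by the per-run ribitol area corrects for injection and derivatization variability, while the second step merely rescales each compound for plotting.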
Substance identification was carried out by retention time comparison with authentic standards. Alternatively, suggested hits from the National Institute of Standards and Technology (NIST) MS library 2.0g and the NIST 11 database were collected.
Statistical Analysis
The statistical data processing was done using a procedure comprising peak recognition and alignment with XCMS (see chapter XCMS) followed by statistical analysis with MetaboAnalyst (see chapter MetaboAnalyst). To assess variation in the seepage collected with the different lysimeters, two environmental replicates each for the different land-use sites were compared, based on GC-MS analysis, using principal component analysis (PCA).
XCMS
The raw data files were converted into the .cdf file format using the Xcalibur (3.0.63) file converter. Statistical data processing was carried out using the XCMS software platform (Version 3.01.03, La Jolla, CA, United States). For blank subtraction, raw data were handled with Xcalibur using the data from the blank measurement with a scaling factor of three applied to the whole file. Processing was carried out using the pre-defined setting for GC measurements "Single Quad (matched filter)". In addition, the retention time correction was removed (Tautenhahn et al., 2012). The following settings were applied: feature detection: matchedFilter; step: 0.25; FWHM: 3; S/N ratio cutoff: 10; max # chrom. peaks: 100; mzdiff: 0.5; retention time correction: none; mzwid: 0.25; minfrac: 0.5; bw: 3; max: 100; minsamp: 1; statistical test: ANOVA (parametric); perform post-hoc analysis: true; p-value threshold (highly significant features): 0.01; fold change threshold: 1.5; p-value threshold (significant features): 0.05.
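The blank subtraction itself was performed on the raw files in Xcalibur; conceptually, the same correction can be expressed at the level of an aligned feature table, as in the following Python sketch (array shapes and example values are hypothetical, not taken from the study).

```python
# Illustrative feature-level analogue of the blank subtraction described above.
# The authors performed the subtraction on raw files in Xcalibur with a scaling
# factor of three; the sketch applies the same idea to an aligned feature table:
# a feature is kept only where its intensity exceeds 3x the blank signal.
# Variable names and values are hypothetical.

import numpy as np

BLANK_FACTOR = 3.0

def blank_correct(sample_intensities: np.ndarray, blank_intensities: np.ndarray) -> np.ndarray:
    """Subtract the scaled blank and zero out features dominated by the blank.

    sample_intensities: (n_samples, n_features) matrix of peak intensities
    blank_intensities : (n_features,) intensities measured in the laboratory blank
    """
    corrected = sample_intensities - BLANK_FACTOR * blank_intensities
    return np.clip(corrected, 0.0, None)   # negative values -> contaminant-only features

samples = np.array([[5000.0, 120.0, 900.0],
                    [4800.0,  90.0, 300.0]])
blank = np.array([100.0, 100.0, 50.0])      # e.g. plasticizer-like bleeding seen in the blank
print(blank_correct(samples, blank))
```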
MetaboAnalyst
The XCMS diffreports harboring the peak intensity table were converted into a .csv file and imported into the functional module "Statistical Analysis" of MetaboAnalyst. After data import, no missing value estimation, transformation, or scaling was performed at this stage. Data filtering was conducted based on the standard deviation, which removes data that were near constant throughout the experimental conditions (40% of the features were filtered). A sample-specific normalization was used based on the peak area of the internal standard ribitol. Data scaling was done by performing Pareto scaling to reduce the relative importance of large features while keeping the data structure partially intact (van den Berg et al., 2006). Heatmap visualization was conducted using a Euclidean distance measure and the Ward clustering algorithm. To highlight the most prominent features, the top 200 entries were selected using a t-test/ANOVA based selection. On the basis of these 200 features, a heatmap and a cloud plot were generated.
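For readers who prefer to see the preprocessing chain spelled out, the Python sketch below reproduces the three steps named above (SD-based filtering, internal-standard normalization, Pareto scaling). It is a minimal illustration with made-up data, not the MetaboAnalyst implementation itself.

```python
# Minimal sketch of the MetaboAnalyst-style preprocessing described above:
# SD-based filtering of near-constant features, normalization to the ribitol
# internal standard, and Pareto scaling (mean-centering followed by division by
# the square root of each feature's standard deviation). Data are placeholders.

import numpy as np

def preprocess(X: np.ndarray, ribitol: np.ndarray, keep_fraction: float = 0.60) -> np.ndarray:
    """X: (n_samples, n_features) peak intensities; ribitol: IS intensity per sample."""
    # 1) keep the 60% of features with the highest standard deviation (40% filtered)
    sd = X.std(axis=0, ddof=1)
    keep = np.argsort(sd)[::-1][: int(keep_fraction * X.shape[1])]
    X = X[:, keep]
    # 2) sample-specific normalization to the internal standard
    X = X / ribitol[:, None]
    # 3) Pareto scaling: (x - mean) / sqrt(sd)
    mean = X.mean(axis=0)
    sd = X.std(axis=0, ddof=1)
    return (X - mean) / np.sqrt(sd)

X = np.random.default_rng(0).lognormal(mean=5.0, sigma=1.0, size=(6, 10))
ribitol = np.full(6, 150.0)
print(preprocess(X, ribitol).shape)   # (6, 6) after filtering 40% of the features
```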
Compound Identification
For compound identification, it must be noted that, due to the derivatization that makes polar compounds accessible to gas chromatography, each spectrum was assessed individually for the presence of TMS groups. Note that the TMS-specific peak at m/z 73 is not always recognized by XCMS because it occurs over the whole retention time range.

Contaminants found in lysimeter Type I — The most substantial feature (cloud at 26.19 min) had an m/z value of 357. The compound was identified as bisphenol A (as the bis-TMS derivative; Prob: 85%; SI: 917‰; RSI: 919‰), a compound commonly used in PVC production. All other dominant signals in this sample can also be attributed to contaminants such as phthalates.

Glucose isomers — Identity was verified by comparison of mass spectra and retention times with those of authentic standards. The α- and β-isomers were assigned according to their retention order (Medeiros and Simoneit, 2007). Since α- and β-glucose are readily interconverted during sample preparation, the relative abundance of both isomers is determined by the solvent properties during handling rather than by their presence in the soil.

Unidentified compound (8) — The presence of three silicon atoms necessarily implies that three groups in the molecule had undergone TMS derivatization. Hypothetically, replacing the three TMS groups by hydrogen atoms leads to a molecular mass of 209 for the non-derivatized molecule. This compound harbors 11 ± 1 carbon atoms, with the most plausible molecular formulas being C10H11NO4, C10H15N3O2, C11H15NO3, C11H19N3O, C12H19NO2, and C12H23N3 based on the isotopic pattern. Due to the pristine environment of the pasture site, we narrowed our search to natural products containing three derivatizable functional groups with the sum formulas mentioned above. With all these criteria considered, the major MS fragments are in accordance with 3-carboxyphenylalanine (Figure 3E), a compound that was previously isolated from the leaves, stems, roots, and inflorescence of Resedaceae species (Kaa Meier et al., 1979).
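The nominal-mass arithmetic behind this back-calculation can be checked in a few lines of code. The Python sketch below is illustrative only; the helper functions are ours, and only nominal (integer) masses are used, which suffices for the plausibility check described above.

```python
# Minimal sketch: back-calculating the mass of the underivatized molecule from a
# TMS-derivatized GC-MS analyte and checking candidate sum formulas.
# Assumption: nominal (integer) masses are sufficient for this plausibility check.

import re

NOMINAL = {"C": 12, "H": 1, "N": 14, "O": 16, "Si": 28}
TMS_SHIFT = 72  # each TMS derivatization replaces H (1) by Si(CH3)3 (73): +72 per group

def nominal_mass(formula: str) -> int:
    """Nominal mass of a simple sum formula such as 'C11H15NO3'."""
    mass = 0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        mass += NOMINAL[element] * (int(count) if count else 1)
    return mass

def underivatized_mass(derivatized_mass: int, n_tms: int) -> int:
    """Remove n_tms trimethylsilyl groups (replace each by H)."""
    return derivatized_mass - n_tms * TMS_SHIFT

# Candidate formulas for the unknown pasture-site compound (nominal mass 209)
candidates = ["C10H11NO4", "C10H15N3O2", "C11H15NO3", "C11H19N3O", "C12H19NO2", "C12H23N3"]
for f in candidates:
    print(f, nominal_mass(f), "-> tris-TMS derivative:", nominal_mass(f) + 3 * TMS_SHIFT)

print(underivatized_mass(425, n_tms=3))  # 209, e.g. for a tris-TMS derivative of mass 425
```

Each TMS group replaces a hydrogen atom, so removing three groups subtracts 3 × 72 from the derivatized nominal mass; all listed candidate formulas correspond to a nominal mass of 209, consistent with 3-carboxyphenylalanine (C10H11NO4).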
Lysimeters
We evaluated three types of lysimeters for their suitability to collect seepage water for the monitoring of organic signatures in soil water using GC-MS. The general aspects (set-up, costs, resistance, chemical artifacts) of the lysimeters are summarized in Table 1; pictures of the lysimeters are given in Figure 2. Lysimeter types I and II were designed for and installed at the topsoil-subsoil boundary, thereby allowing the characterization of the input signal that would enter deeper layers. Lysimeter type III was installed to sample seepage water at 30 and 60 cm depth, thereby monitoring water after the passage through the topsoil and additionally the subsoil, respectively. Thus, the water may have an estimated dwell time of up to one month to undergo transformation before it reaches the sampling device (Sprenger et al., 2016). We aimed for an untargeted qualitative analysis and relative quantification of candidate compounds that allows the identification of patterns of variability in water from subsurface environments.
Due to the installation and sampling procedure, this set-up gives authentic results regarding the natural flow and water budget.
General Evaluation of the Lysimeters
For the initial evaluation, all three lysimeters were exposed to the environment. We analyzed the soil solution sampled after cumulative collection by the lysimeters, after SPE and desalting with a polymeric, amide-modified polystyrene resin. GC-MS measurements of the organic eluates were performed after evaporation of the solvent and derivatization with BSTFA, resulting in trimethylsilylation of -OH, -COOH, and -NH2 groups, following a modified protocol from Vidoudez and Pohnert (2012).
It is evident that the three entirely different lysimeter types, which also rely on different sampling strategies, will give non-uniform patterns of detected compounds. We evaluated if and how environmentally relevant data can be obtained from these different samples and whether universal marker compounds can be identified from all three lysimeter set-ups. Visual inspection of the chromatograms revealed substantial qualitative and quantitative differences (Figure 4). Signals caused by plasticizers dominated the chromatograms of type I lysimeter samples. Substantially lower contamination was detected in type II lysimeter samples. Principal component analysis (PCA, Supplementary Figure S1) confirmed that chemical profiles are highly dependent on the sampling instrumentation, since the PVC lysimeter (type I) and the inert type II lysimeter showed entirely different profiles despite relying on identical sampling strategies.
The type III lysimeter should not be directly compared with the others since it was installed in deeper soil, resulting in a longer flow path and longer interaction time between soil and seepage. In addition, the sampling was not based on the collection of water by means of gravity but on applying suction in the range of the measured predominant matric potential, therefore sampling capillary and macropore water. Thus, both the different lysimeter set-up and the differences in the chemical composition of the water at this depth are responsible for the major differences in the chemical composition of water sampled with this device. This is clearly illustrated by the principal component analysis, which shows these data separated from those obtained with the other lysimeters. However, the data are reported here, since inclusion of these samples allows an assessment of the suitability and quality of the samples generated. Especially with lysimeter type I, we faced several challenges in the further data analysis to reveal the naturally occurring water chemistry despite the background signals. Visualization of differences between the samples according to retention time-m/z pairs of selected analyzed ions, as well as differences between the respective sample types, is given in Supplementary Figure S1A2. The largest difference of any detected signal can be attributed to an m/z value of 357 at 26.2 min in samples obtained from the type I lysimeter. The compound was identified as bisphenol A, commonly used in PVC production. All other dominant signals in this sample can also be attributed to contaminants such as phthalates and fumarates, oxidation stabilizers like butylated hydroxytoluene, and decayed stabilizers such as triphenylphosphine oxide (see Supplementary Information). The amount of these contaminations decreased over time when the lysimeter was in use and exposed to environmental conditions (data not shown). Only bisphenol A was found in high amounts over the entire 8-month exposure. As illustrated in detail in Supplementary Figure S1, in silico data treatment like background subtraction leads to data sets of higher quality; however, contaminations could not be entirely suppressed in lysimeters of type I. Strategies for data evaluation that allow targeted and untargeted analysis despite the contaminations are described below.
The type II lysimeter was built from inert material without using any glue or plasticizers and could be used directly after rinsing with bidistilled water, without further conditioning. Indeed, we did not detect any of the typical contaminations from, e.g., plasticizers or stabilizers under routine sampling conditions. Even if production costs for these lysimeters are comparatively high due to the high-priced materials used, the reduction of contaminations is substantial and a robust data set for qualitative and quantitative analysis can be obtained (Figure 4).
The tension type III lysimeter collected water at the greatest depth; this water gave a more than 10-fold lower overall signal intensity compared to that from the upper soil (Figure 4). This result is in line with general observations indicating a depth-dependent reduction of the concentration of total organic carbon and of specific phenolic acids originating mainly from decaying plant material at deeper sampling sites (Martens et al., 2004). The background signal from contaminations is low in lysimeter III, which makes it a good compromise between sampling cost and sample purity for monitoring fluxes and transformation products in unaffected soils.
Identification of Naturally Occurring Compounds in Lysimeter Samples
Naturally occurring compounds were identified upon visual inspection of the data from samples collected in lysimeters of type II and type III. We also subjected the background-corrected file set from these lysimeters to XCMS data analysis and obtained a cloud plot that allowed the identification of common features. Compounds that exhibit sample-dependent variability were identified and matched to previously reported naturally occurring compound classes, including carbohydrates (Paul and Clark, 1996), lipids (Jandl et al., 2002) and phenolic acids (Malá et al., 2013). Six of the eight unambiguously identified compounds found in the untargeted analysis using type II and type III lysimeters could also be observed in type I lysimeters in a targeted quantification. This indicates that even the strong contaminants from lysimeter type I do not fully overshadow soil solution chemistry. Among the eight variable compounds were plant- or microbe-derived α- and β-glucose (RT: 20.20 and 22.02 min), which were detected as penta-(trimethylsilyl) derivatives (Supporting Information). Glucose cannot be found in chromatograms of samples from the type I lysimeter, where it is most likely under-detected due to the adsorption properties of the lysimeter material (Figure 5A). In addition, under-detection of glucose might also be caused by metabolism in biofilms forming on the PVC. Such bacterial activity is suppressed by the use of titanium in the type II lysimeter (Zhang et al., 2014).
We could also identify saturated fatty acids with chain lengths of 16 (palmitic acid) and 18 (stearic acid) carbon atoms as variable marker molecules in the lysimeter extracts. These fatty acids were previously detected in targeted analyses of soil solutions, and the fact that they are covered by the sampling procedure presented here highlights the suitability of the introduced protocols for untargeted screening (Jandl et al., 2002; Jandl et al., 2004; Jandl et al., 2005; Jandl et al., 2007; Schwab et al., 2017). We found the highest abundance of stearic acid in type I lysimeter samples; in comparison, 3-fold and 12-fold lower amounts were detected in type II and type III lysimeter samples, respectively. The differences in palmitic acid concentrations were smaller, but, again, samples from lysimeters installed close to the surface (types I and II) contained higher amounts compared to those from lysimeter type III at the greatest depth (Figure 5B). The fact that saturated fatty acids are more prevalent in samples collected closer to the surface has also been previously observed in targeted analysis of soil extracts (Martens et al., 2004; Schwab et al., 2017).
Four typically occurring phenols, which are connected to humus and lignin chemistry (Schmidt et al., 2011; Lehmann and Kleber, 2015), were also identified using the cloud-plot analysis. Retention time comparison with commercial standards revealed the identity of two of them as the phenolic acids protocatechuic acid and p-salicylic acid. Syringic acid and vanillic acid were identified based on the evaluation of their characteristic mass spectra (Malá et al., 2013). The type II lysimeter samples contained the highest amounts of all phenolic acids (Figure 5C). Lower concentrations of these compounds were observed in the type III lysimeter, potentially as a result of transformation, sorption on, e.g., clay minerals, or degradation during the passage to greater depth (see Figure 5C). The lowest recovery of these compounds in lysimeters of type I might again be explained by adsorption to PVC. Besides these sugars, fatty acids, and phenolic acids, we spotted eight additional common marker compounds. These compounds share a fragment at m/z 131, which might stem from a Δ15,7 or Δ15,8 sterol backbone (Goad and Akihisa, 1997), but could not be fully structurally confirmed (Supplementary Figure S1C2).
Thus, GC-MS-based metabolomics allows a broad survey of biotic and abiotic marker molecules in subsurface solutions. The information-rich EI-MS spectra allow for an efficient library search, and the chromatographic separation allows precise integration. Additional LC-MS studies could be performed with the same, underivatized sample set to complement these data (Supplementary Figure 3). This additional data set also covers larger, more polar metabolites in a complementary manner. The suitability of the workflow can, however, already be judged from the set of GC-MS runs utilized and documented in detail here.
Comparison of the different extracts using the metabolomics workflow shows the strong dependence of the profiles on the material, functioning, and placement of the lysimeters. This is indicative of a system containing highly diluted compounds that requires careful limitation of interfering signals. Working with devices like the type II and type III lysimeters, which introduce a minimum of contaminants, allows for a straightforward data evaluation. These lysimeters are thus suitable for applying comparative metabolomics algorithms to spot similarities and variation between sites, covering a broad spectrum of natural products. Nevertheless, we show that data obtained with different sampling devices cannot be quantitatively intercompared.
Elucidation of Site-Specific Markers in Seepage Water
With the method evaluation in hand, we undertook a survey at three different locations of the Hainich Critical Zone long-term monitoring sites (Küsel et al., 2016). This proof-of-principle study was undertaken to evaluate the suitability of the approaches for an untargeted survey of soil solution. Due to the clear and expected differences between the samples from different lysimeter types, we selected two separate data sets from lysimeters of type I and III and performed two independent data analyses to identify markers for different land use. Samples were initially analyzed using XCMS (Tautenhahn et al., 2012) and further evaluated with MetaboAnalyst (Xia et al., 2015). We first focused on samples from type I lysimeters installed at the three different land-use sites (Figure 1). The initial step in our marker identification was the feature recognition, including peak picking (of data without background subtraction). The resulting diffreport harbors the vast number of 6,576 total aligned features. To identify markers for the specific sampling sites, we next processed the data using the online platform MetaboAnalyst 3.0. Filtering, sample-specific normalization based on the internal standard ribitol, and Pareto scaling were applied (van den Berg et al., 2006). Ribitol was not contained in unspiked samples (Supplementary Figures 2-5); however, in a larger-scale screening it would be advisable to use a labeled standard to fully exclude potential interference with natural metabolites. In an unsupervised PCA, the data showed that 71.6% of the sample variability in the GC-MS data was explained by the first two principal components (PC): PC1, 53.6%; PC2, 18.0%. The forest (red) site clearly separated from the agriculture (green) and the pasture (blue) sites, which overlap in large areas (Figure 6A). For the identification of marker molecules responsible for the separation within the PCA, we plotted a heat map based on the 200 most important features selected by analysis of variance (ANOVA) with a p-value threshold of 0.05. The Euclidean clustering confirmed the separation by the PCA. All three sampling sites can clearly be distinguished (Figure 6B). The forest site shows more specific features than the agricultural site, and only one specific feature could be identified for the pasture site. It is most challenging to translate the detected relevant features into chemical compounds. Based on the features from the ANOVA-based heatmap, the peak intensities generated by XCMS were used to construct a color-coded cloud plot (Figure 6C). We used the color code to distinguish between the sampling sites and the cloud size to visualize the specific (logarithmic) intensity. Acknowledging that the utilized electron impact ionization produces a series of fragments from a single compound, we combined dominant feature groups and identified one compound specific for the agricultural site, nine for the forest site, and none for the pasture site. Three of these compounds had already been annotated in the preliminary lysimeter evaluation (see above): p-salicylic acid (comp. 1, 14.87 min), vanillic acid (comp. 3, 17.83 min), and protocatechuic acid (comp. 5, 18.99 min). One further compound was assigned to be a monosaccharide based on typical fragments (m/z 147 and 117) and the retention time.
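The unsupervised part of this evaluation, explained variance per principal component and an ANOVA-based shortlist of discriminating features, can be reproduced in outline with standard libraries. The Python sketch below uses randomly generated placeholder data and hypothetical group labels; it mirrors only the structure of the analysis, not the actual feature table.

```python
# Minimal sketch of the statistics reported for the type I lysimeter data set:
# unsupervised PCA with explained variance per component, and selection of the
# most discriminating features by one-way ANOVA across the three land-use sites.
# Data and variable names are hypothetical placeholders.

import numpy as np
from scipy.stats import f_oneway
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(9, 500))                 # 9 runs x 500 aligned features (placeholder)
site = np.array(["forest"] * 3 + ["agriculture"] * 3 + ["pasture"] * 3)

# Unsupervised PCA: fraction of variance captured by the first two components
pca = PCA(n_components=2).fit(X)
print("explained variance:", pca.explained_variance_ratio_.round(3))

# One-way ANOVA per feature; keep the (up to) 200 features with the smallest p-values
pvals = np.array([
    f_oneway(X[site == "forest", j], X[site == "agriculture", j], X[site == "pasture", j]).pvalue
    for j in range(X.shape[1])
])
top = np.argsort(pvals)[:200]
significant_top = top[pvals[top] < 0.05]      # p-value threshold used for the heatmap
print(len(significant_top), "features selected")
```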
Since these markers were gathered from scaled data, we verified the findings by integrating and normalizing the original data ( Figure 6E). Despite large standard deviations, most of the relative concentrations were in accordance with their initial assignment to specific land use sites.
Figure 6. Statistical analysis and structure elucidation of compounds from seepage water collected with lysimeters of type I. Lysimeters were installed at three different land-use sites, and data are labeled in red for the forest sites, green for the agricultural sites, and blue for the pasture sites.
A second elucidation of potential marker compounds was based on data gathered using type III lysimeters installed 30 cm below the surface. No clear patterns arose from those lysimeters, even when testing several scaling techniques like auto scaling or level scaling (van den Berg et al., 2006). In contrast, samples originating from 60 cm depth resulted in clear patterns in unsupervised PCA with respect to the confidence interval. Partial least squares discriminant analysis (PLS-DA, not shown) supported the result of the PCA. All three land-use sites, forest (red), agriculture (green), and pasture (blue), were separated by two principal components (Figure 3A). The two-dimensional PCA summarized 87.9% of the sample variability in the GC-MS data in the first two principal components (PC1, 76.7%; PC2, 11.2%). A heat map was constructed based on the 200 most important features (ANOVA, p-value < 0.05). All three sampling sites can clearly be distinguished, and the Euclidean clustering (Figure 3B) confirmed the PCA separation. The pasture site shows more specific features than the forest and the agricultural sites (see Figure 3B). The color-coded cloud plot was used to visualize differences between the sampling sites (Figure 3C). Only two compounds specific for the forest site were detectable, whereas four compounds were specific for the agricultural site and eight for the pasture site. In a NIST library search, only the pasture-specific feature group at 22.95 min gave a hit with an acceptable probability and spectral identity. The suggested 1,8-diazacyclotetradecane-2,7-dione, a chemical used in nylon 66 production, was assigned as a probable artifact or contaminant.
Due to the absence of library hits for the other putative markers, structure elucidation was conducted on the basis of the reconstructed mass spectra from the XCMS diffreport, with further support from the original mass spectral data. For the forest site, marker structure elucidation was not successful, but compounds like sugars, aliphatic hydrocarbons, or fatty acids can be excluded due to missing key fragments. The four agriculture-specific markers could not be fully elucidated either, but two of them showed isotopic patterns indicative of two chlorine atoms in the formulae. The exclusive presence of these unknown compounds at the site with heavy land use leads us to conclude that they likely represent degradation products of pesticides or related compounds. The elucidation of pasture-site-related compounds was also challenging, and only for the chromatographic peak at 19.11 min were we able to define 3-carboxyphenylalanine as a plausible hit (Supporting Information, Figure 3E). This compound was previously isolated from the leaves, stems, roots, and inflorescence of Resedaceae species (Kaa Meier et al., 1979). These belong to the order Brassicales, of which several species exist at our sampling sites.
Monitoring of Time Series
The heterogeneity of soil might cause variability in the chemical composition of collected lysimeter water even if the instruments are installed in close proximity. To compensate for such influences, we replicated our data in a time series in which samples were taken from the same lysimeters (type III) four and six weeks after the sampling campaign discussed above. In addition, we took samples from one additional pasture site within the transect (south-western spot in Figure 1). Compounds identified in the untargeted screening (Figure 3D) were monitored in these sampling campaigns. Peaks from each campaign were integrated and normalized (Figure 3F). Indeed, we could track 10 out of 15 putative marker molecules over the three sampling campaigns. All relative concentrations recorded in these independent experiments were in accordance with the findings in Figure 3D. Five compounds could be verified as specific markers for the pasture site (compounds 12 to 14, 16, and 17). In particular, the putative 3-carboxyphenylalanine proved to be a marker over the entire monitoring period. Local variability, caused potentially by different histories of agricultural land use, was observed, since the chlorinated compounds 2 and 4 were only abundant at one of the two sampling sites.
CONCLUSION
The lysimeters tested in our analysis are suitable for monitoring seepage water chemistry in an untargeted analysis. Cost-minimized, PVC-based free-draining lysimeters enable the installation of several replicates with minimum expense. However, initial bleeding leads to signals of contaminants that are dominant compared to the natural products and require extensive data treatment. The newly designed PEEK/titanium lysimeter, as well as tension lysimeters made from stainless steel/inert glass, allowed the collection of seepage samples that give high-quality chromatographic profiles. With some limitations for the PVC instruments, all lysimeters make it possible to analytically follow a structurally diverse series of compound classes, as we have shown for sugars, fatty acids, and phenolic acids. Sample heterogeneity is generally lower in deeper soil. Although not further discussed here, Supplementary Figure 6 illustrates that the SPE eluates, and thus the whole sample pre-treatment, are also suitable for LC-MS-based metabolomics methods, which broadens the field of application of this study. An untargeted data evaluation of lysimeter solutions allowed the identification of land-use markers. Three phenolic acids can serve as specific markers for the forest land-use site with lysimeters of type I. Additional markers for the land uses forest (2), pasture (9), and agriculture (4) were identified. Two not further specified chlorinated compounds served as markers for the agricultural site at 60 cm depth, and the putative 3-carboxyphenylalanine as a marker for the pasture site. Ten out of these 15 site-specific markers could also be traced over time in two additional sampling campaigns. Thus, GC-MS-based untargeted seepage analysis allows the identification of a broad range of markers that can be used for the identification of spatiotemporal cycling patterns and for the tracing of matter fluxes within subsurface compartments of the Critical Zone.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.
AUTHOR CONTRIBUTIONS
NU performed the metabolomics analysis and implemented lysimeter type II. KL, AS, and SM implemented and maintained the lysimeters of type I and III. KT, BM, and GP contributed to the conception and design of the work. NU, CZ and GP wrote the manuscript. All authors contributed to the final version of the manuscript.
"year": 2021,
"sha1": "4fbab9b73fa62729274c58d733e2759e48e4861c",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/feart.2020.563379/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "4fbab9b73fa62729274c58d733e2759e48e4861c",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": []
} |
Post‐Cardiac Arrest Hydrocortisone Use Ameliorates Cardiac Mitochondrial Injury in a Male Rat Model of Ventricular Fibrillation Cardiac Arrest
Background Steroid use after cardiac arrest has been reported to improve survival and neurological outcome in cardiac arrest survivors. The study aimed to evaluate the effect of post‐arrest hydrocortisone use on myocardial damage and cardiac mitochondrial injury in a rat model of ventricular fibrillation cardiac arrest. Methods and Results Ventricular fibrillation cardiac arrest was induced and left untreated for 5 minutes in adult male Wistar rats. Cardiopulmonary resuscitation and electric shocks were then applied to achieve return of spontaneous circulation (ROSC). Successfully resuscitated animals were randomized into 3 groups: control, low‐dose hydrocortisone (2 mg/kg), and high‐dose hydrocortisone (8 mg/kg). The low‐dose hydrocortisone and high‐dose hydrocortisone (treatment) groups received intravenous hydrocortisone immediately after ROSC and the control group received saline as placebo. Each group consisted of 15 animals. Within 4 hours of ROSC, both treatment groups showed a higher cardiac output than the control group. At the fourth hour following ROSC, histological examination and transmission electron microscopy demonstrated less myocardial damage and mitochondrial injury in the animals treated with hydrocortisone. In the treatment groups, hydrocortisone mitigated the acceleration of Ca2+‐induced mitochondrial swelling and suppression of complex activity observed in the control group. At the 72nd hour after ROSC, a significantly higher proportion of animals treated with hydrocortisone survived and had good neurological recovery compared with those given a placebo. Conclusions Hydrocortisone use after cardiac arrest may mitigate myocardial injury and cardiac mitochondrial damage and thus improve survival, neurological and histological outcomes in a rat model of ventricular fibrillation cardiac arrest.
Sudden cardiac arrest is a challenge for clinical physicians. Extensive efforts have been devoted to improving the quality of cardiopulmonary resuscitation (CPR) and to ameliorating post-cardiac arrest syndrome. Even in cases where initial return of spontaneous circulation (ROSC) is achieved under aggressive resuscitation and organized teamwork, the cerebral and myocardial damage, along with the systemic ischemia/reperfusion (I/R) injury during CPR, result in high in-hospital mortality and poor neurological recovery.
Bundle care following cardiac arrest, including ventilation, hemodynamic and targeted temperature management, and emergent coronary angiography as indicated, has been proposed to improve the poor prognosis of cardiac arrest survivors.
For decades, the use of steroids during the resuscitation and post-cardiac arrest period has been investigated. Relative adrenal insufficiency is common in cardiac arrest survivors with vasopressor-dependent shock. 1 In both animal and human research, higher serum cortisol has been associated with a higher likelihood of survival and a lower likelihood of death from circulatory shock. [2][3][4] Several studies, including population-based analyses and clinical trials, have demonstrated the benefit of steroid administration after ROSC on survival and neurological outcomes. [5][6][7][8] Steroid use after cardiac arrest suppresses inflammatory responses, regulates catecholamine synthesis, and protects against ischemia-reperfusion injury. 9 The cytokine and immune responses after cardiac arrest are similar to those in sepsis and were once described as a sepsis-like syndrome 10,11 until post-cardiac arrest syndrome was established in 2011. Prolonged low-dose corticosteroid therapy has been suggested to benefit survival in patients with septic shock, 12,13 although the role of corticosteroids in severe sepsis has remained undetermined for decades. 14 Several studies have demonstrated the cardioprotective effects of hydrocortisone against myocardial ischemic injury. In a dog model of epinephrine-induced myocardial injury, hydrocortisone reduced microscopic damage and myocardial calcium concentration. 15 In rats with asphyxia-induced myocardial ischemia, pretreatment with hydrocortisone or dexamethasone ameliorated myocardial mitochondrial damage. 16 However, these 2 ischemic models did not address reperfusion injury, nor did they simulate most sudden cardiac arrests, which are of cardiac origin. Our previous studies demonstrated myocardial mitochondrial injury after resuscitation from ventricular fibrillation (VF)-induced cardiac arrest and electric shock. 17,18 Whether hydrocortisone administration after ROSC can reduce myocardial I/R injury and cardiac mitochondrial damage, and thereby improve outcomes in VF-induced cardiac arrest, has not yet been investigated. Using a rat model of VF cardiac arrest, this study evaluated the effect of hydrocortisone use after cardiac arrest on myocardial damage, survival, and neurological recovery.
METHODS
The study was approved by the Institutional Animal Care and Use Committee of the National Taiwan University College of Medicine and College of Public Health, in compliance with the Guide for the Care and Use of Laboratory Animals published by the Laboratory Animal Center of National Taiwan University College of Medicine. The data that support the findings of this study are available from the corresponding author upon reasonable request.
Animal Preparation
The animals were prepared as described previously. 18 Male Wistar rats weighing 500±50 g were anesthetized with an intraperitoneal injection of sodium pentobarbital (50 mg/kg body weight). The trachea was orally intubated using a PE 200 catheter (Becton Dickinson, Franklin Lakes, USA). A saline-filled PE-50 tube (Becton Dickinson, Franklin Lakes, USA) was inserted through the right femoral artery to monitor arterial blood pressure and was advanced into the abdominal aorta. Left ventricle (LV) pressure was measured with another saline-filled PE-50 tube that was inserted through the right carotid artery and advanced into the LV. The third saline-filled PE-50 tube was inserted into the right jugular vein for fluid and drug administration and central venous pressure monitoring. To monitor temperature change, a thermodilution-tipped catheter (ADInstruments, Sydney, Australia) was inserted through the left femoral artery and advanced 10 cm into the abdominal aorta. Blood pressure, LV pressure, central venous pressure, body temperature, and needle-probe electrocardiogram monitoring data were recorded using a computer-based data acquisition system (ADInstruments). After surgical preparation, the animals were observed for 30 minutes to ensure hemodynamic stability. Before the experiment, body temperature was maintained at 37°C ±0.5°C by using an incandescent heating lamp.
Current-Induced Cardiac Arrest Animal Model
VF was induced with a guidewire advanced from the right jugular vein into the right ventricle. An alternating current increasing progressively from 60 Hz to a maximum of 1 mA was delivered to the endocardium and sustained for 1.5 minutes to prevent spontaneous defibrillation. The animals were subsequently left untreated for 3.5 minutes. After 5 minutes of VF, a mechanical chest compressor was used to administer chest compressions at 200 beats per minute, and mechanical ventilation was started. Mechanical ventilation was initiated with a tidal volume of 0.65 mL/100 g body weight at a frequency of 100 breaths per minute and at an inspired oxygen fraction of 1.0. CPR was synchronized to provide a compression/ventilation ratio of 2:1 with equal compression-relaxation duration. After 1 minute of CPR, one 3-J monophasic electric shock was administered, followed by 30 seconds of CPR and one 5-J electric shock. Resuscitation was declared a failure when ROSC could not be achieved after 8 shocks. All successfully resuscitated animals were closely monitored for another 4 hours following ROSC; the wounds were then closed, and the animals were extubated and put back in their cages. Sodium pentothal at a dose of 10 mg/kg was given at the 2nd hour after ROSC. Survival status was observed for 72 hours, and mortality was confirmed by loss of heartbeat and spontaneous respiratory movement for over 2 minutes. 18
Study Protocol
The resuscitated animals were block-randomized into 3 groups: control, low-dose hydrocortisone (LSC), and high-dose hydrocortisone (HSC). Immediately after ROSC, the LSC group received 2 mg/kg of intravenous hydrocortisone (Solu-Cortef, Pfizer, Belgium), the HSC group received 8 mg/kg, and the control group received 0.9% saline at the same volume as a placebo. The hydrocortisone dosage in the LSC group was based on the bolus injection of 100 mg of intravenous hydrocortisone in patients with a suspected adrenal crisis. 19 The hydrocortisone dosage in the HSC group was calculated by dosage conversion between rat and human on the basis of body surface area. 20 All the animals received therapeutic hypothermia at a target temperature of 32°C, which was maintained for 2 hours by ice water spray, electric fan, or heat lamp. During the rewarming process, the temperature was increased at a rate of 0.5°C/h with an electric fan and heat lamp while the animals were intubated. After extubation, the animals were rewarmed naturally at room temperature and monitored hourly until they woke up. A sham group underwent the surgical preparation without induction of cardiac arrest, experimental drugs, or hypothermia (Figure 1A).
Hemodynamic Monitoring
The LV positive dP/dt40 and maximal LV negative dP/dt, which reflect systolic and diastolic function, respectively, were analyzed using a PC-based data acquisition system (ADInstruments, Sydney, Australia). For cardiac output measurement, 0.2 mL of isotonic saline indicator at room temperature was injected intravenously into the right atrium. As mentioned, the temperature change was monitored using the thermodilution-tipped catheter (ADInstruments, Sydney, Australia) in the abdominal aorta, and cardiac output was calculated using a Cardio-Max II computer.
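Thermodilution cardiac output of this kind is conventionally derived from the Stewart-Hamilton equation, i.e., from the area under the blood-temperature change curve after the saline bolus. The Python sketch below illustrates that calculation with made-up numbers; the correction constant K and the synthetic dilution curve are placeholders, since the commercial Cardio-Max II applies its own instrument- and catheter-specific constants.

```python
# Minimal sketch of thermodilution cardiac output via the Stewart-Hamilton equation.
# Hypothetical example values; a commercial unit such as the Cardio-Max II applies
# its own instrument- and catheter-specific correction constant K.

import numpy as np

def cardiac_output_ml_per_min(t_s, delta_T, V_inj_ml, T_blood, T_inj, K=1.0):
    """Stewart-Hamilton: CO = K * V_inj * (T_blood - T_inj) / integral(delta_T dt).

    t_s      : time points of the thermodilution curve [s]
    delta_T  : blood temperature change at each time point [degC]
    V_inj_ml : injectate volume [ml] (0.2 ml saline in this protocol)
    """
    # trapezoidal area under the dilution curve [degC * s]
    area = float(np.sum((delta_T[1:] + delta_T[:-1]) / 2.0 * np.diff(t_s)))
    co_ml_per_s = K * V_inj_ml * (T_blood - T_inj) / area
    return co_ml_per_s * 60.0

# Synthetic dilution curve for illustration only (peak ~0.4 degC at 3 s)
t = np.linspace(0, 10, 200)
curve = 0.4 * np.exp(-((t - 3.0) ** 2) / 2.0)
print(round(cardiac_output_ml_per_min(t, curve, V_inj_ml=0.2, T_blood=37.0, T_inj=22.0), 1))
# ~180 ml/min for this synthetic curve
```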
Evaluation of Neurological Function
The neurological function of the animals, including consciousness level, corneal reflex, respiration, righting reflex, coordination, and movement/activity, was evaluated with neurological function scoring on a scale of 0 to 12 at the 6th, 24th, 48th, and 72nd hours following ROSC (Table 1). Assessments were performed independently by 2 investigators who were blinded to group. A third investigator resolved any discrepancies in an independent assessment, and the score chosen by the majority was accepted. Good neurological recovery was defined as a score higher than 10.
Histological Examinations of Myocardial Damage
To further investigate the histological changes among the groups, the animals were prepared as described in the Methods section and euthanized either at the end of the survival study (72 hours following ROSC; Figure 1A) or at 4 hours following ROSC (Figure 1B), respectively.
The apex, septum, and lateral wall of the LV were selected, embedded in paraffin, cut into sections, and observed under an optical microscope after hematoxylin and eosin (HE) and Gomori trichrome staining at 4 hours, and after HE and Masson trichrome staining at 72 hours. In HE staining, myocytolysis was counted in 5 independent, randomly selected microscopic fields at ×200 magnification in each specimen. Each animal provided 6 specimens. In Gomori trichrome staining, aggregations of abnormal mitochondria appeared as red, ragged, thick, irregular intracellular and intermyofibrillar deposits, and the damaged areas were counted as in HE staining. In Masson trichrome staining, the collagen fibers were stained blue, the nuclei were stained black, and the background was stained red.
For the transmission electron microscopy examination, the LV was fixed in glutaraldehyde, and 3 blocks were obtained from each myocardium. Thin LV sections (≈70 nm) were placed on uncoated 200-mesh copper grids, stained with 4% uranyl acetate and 0.2% lead citrate in 0.1 N NaOH solution, and examined under a Hitachi 7100 transmission electron microscope (Tokyo, Japan). An independent anatomist blinded to the grouping assessed the morphological and histological results.
Histologic Examination of Brain Injury at 72 Hours
The brains were removed, embedded in paraffin, and cut into coronal sections (5 μm) on a rotary microtome. The right half of the brain of each animal was selected and underwent HE staining. Intact pyramidal cells of the hippocampus showed a complete cell structure with large, regular nuclei. Severely damaged pyramidal cells were characterized by neuronal shrinkage and nuclear chromatin condensation. The morphological changes of neurons were examined in 3 independent, randomly selected microscopic fields at ×200 magnification in the cornu ammonis (CA1, CA2, CA3) of each hippocampal section.
Neuronal death was presented as a percentage of the total neurons counted. Three hippocampal sections were counted per animal.
Analysis of Cardiac Troponin I, Adrenal Hormones, and Cytokines
At the fourth hour following ROSC, blood samples were taken and centrifuged, and the plasma was separated and stored at −180°C. Commercially available ELISA kits for cardiac troponin I (Elabscience Biotechnology, Wuhan, China), adrenocorticotropic hormone and cortisol (USCN Life Science Inc., Wuhan, China), interleukin 6 (IL-6) and tumor necrosis factor-α (TNF-α) (R&D Systems, Minneapolis, USA) were used.
Evaluation of Myocardial Mitochondrial Injury
Mitochondria were isolated from the LV by differential centrifugation. 17 The final crude mitochondrial pellet was resuspended in I buffer (0.25 mol/L sucrose, 0.5 mmol/L EGTA, 3 mmol/L HEPES). Mitochondrial protein concentration was determined using the bicinchoninic acid method with bovine serum albumin as a standard. Fresh mitochondria were used for measurement of mitochondrial permeability transition pore opening and complex activity.
Isolated cardiac mitochondria were dissolved in a swelling buffer (200 mmol/L mannitol, 10 mmol/L HEPES, 5 mmol/L succinate, 70 mmol/L sucrose), and the mitochondrial concentration was adjusted to achieve an OD of 1 at 540 nm. Adding 400 µmol/L CaCl2 induced pore opening and caused mitochondrial swelling, which resulted in reduced absorbance at 540 nm. This reduction was measured using an ELISA reader for 30 minutes. The activities of NADH-cytochrome c reductase (NCCR) and succinate-cytochrome c reductase were evaluated by monitoring the reduction of oxidized cytochrome c. Cytochrome c oxidase activity was evaluated by measuring the oxidation of reduced cytochrome c. The change in absorbance was recorded using an ELISA reader. 17
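The swelling assay yields an absorbance-versus-time trace whose decline reflects pore opening. How the swelling rate is condensed into a single number is not specified here, so the Python sketch below uses an initial-slope estimate purely for illustration, with a synthetic trace in place of real data.

```python
# Illustrative sketch: quantifying Ca2+-induced mitochondrial swelling from an
# absorbance (A540) time course. The manuscript compares swelling rates between
# groups; the exact rate metric is not stated, so the initial-slope estimate
# below is an assumption made for illustration only.

import numpy as np

def swelling_rate(time_min, a540, window_min=5.0):
    """Initial rate of absorbance decrease (delta A540 per min) after CaCl2 addition."""
    mask = time_min <= window_min
    slope, _ = np.polyfit(time_min[mask], a540[mask], 1)  # linear fit over the first minutes
    return -slope  # positive value = faster swelling (faster loss of absorbance)

# Synthetic 30-min trace sampled every 0.5 min, starting at OD 1.0
t = np.arange(0, 30.5, 0.5)
trace = 0.6 + 0.4 * np.exp(-t / 8.0)  # decays from 1.0 toward 0.6
print(round(swelling_rate(t, trace), 4))  # delta A540 per minute over the first 5 min
```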
Statistical Analysis
We estimated that, for HSC treatment to increase the survival rate from 40% to 90%, the required sample size per group to correctly detect such a difference with 80% power at α=0.05 was 13. We chose 15 animals for each group and used block randomization for each group. Binomial variables were analyzed using the chi-square test and Fisher exact test. Differences between the treatment groups and the control group in continuous variables were evaluated using ANOVA with the Dunnett post-hoc test. Repeated-measures analysis was applied to compare mitochondrial swelling rates. Survival curves were determined by the Kaplan-Meier method and compared using the log-rank test. A P value of <0.05 was considered statistically significant. Multiple testing adjustment (P<0.025) was applied for the comparison of good neurological recovery between the treatment and control groups. All analyses were performed using IBM SPSS Statistics for Windows, version 19.0 (IBM Corp., Armonk, NY, USA).
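The reported sample size is consistent with the standard normal-approximation formula for comparing two independent proportions. The Python sketch below reproduces that arithmetic; it is illustrative only, since the exact software or formula used for the power calculation is not stated.

```python
# Minimal sketch of the two-proportion sample-size calculation reported in the
# Methods (survival 40% vs. 90%, alpha = 0.05 two-sided, power = 80%), using the
# standard normal-approximation formula for two independent proportions.

from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)          # 1.96 for two-sided alpha = 0.05
    z_b = norm.ppf(power)                  # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

print(round(n_per_group(0.40, 0.90), 1))   # ~13 animals per group
```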
RESULTS
A total of 153 animals were used in the current study. After excluding 55 animals that did not achieve ROSC, 98 successfully resuscitated animals were enrolled for further experiments; of these, 51 were used for the mitochondrial experiments and histological examinations at 4 hours following ROSC, and 2 animals in the control group were excluded because of an insufficient blood sample. Another 14 animals were used as the sham group. The remaining 45 successfully resuscitated animals were equally randomized into the 3 groups. No significant between-group differences were observed in body weight, CPR events, body temperature, or hemodynamics (Table 2).
Hemodynamics, Survival Outcomes, and Neurological Outcomes
The LV positive dP/dt40 and maximal LV negative dP/dt, which are representative of the systolic and diastolic functions of the heart, decreased following ROSC and recovered gradually in all groups. Compared with the control group, both treatment groups had significantly better LV positive dP/dt40 in the second hour and better maximal LV negative dP/dt in the fourth hour following ROSC (Figure 2A and 2B). In the first 4 hours after ROSC, the treatment groups had significantly improved cardiac output compared with the control group, except for a trend in the first hour for the LSC group (P=0.051; Figure 2C). No significant between-group differences were observed in mean arterial pressure (Figure 2D).
Only 9 animals (60%) in the control group survived to 72 hours. By contrast, 14 animals (93.3%) in each of the treatment groups survived to 72 hours (P=0.000) (Figure 2E). Among the animals that survived, 3 (20%) in the control group, 10 in the LSC group (66.7%, P=0.027 compared with control), and 11 in the HSC group (71.4%, P=0.020 compared with control) had good neurological recovery at the 72nd hour following cardiac arrest (Figure 2F).
Figure 2. Hemodynamics, survival, and neurologic outcomes. A and B, Both the low-dose hydrocortisone and high-dose hydrocortisone groups showed better but not sustained systolic and diastolic functions within the first 4 hours following return of spontaneous circulation (ANOVA with post-hoc correction; n=15). C, Animals receiving hydrocortisone had improved cardiac output compared with the control group during the first 4 hours after ROSC (ANOVA with post-hoc correction; n=15). D, No significant between-group difference in mean arterial pressure was noted (ANOVA with post-hoc correction; n=15). Animals receiving hydrocortisone had (E) better 72-hour survival rates (log-rank test; n=15) and (F) higher incidence of good neurological recovery (Fisher exact test with multiple testing adjustment; n=15). CO indicates cardiac output; CPR, cardiopulmonary resuscitation; HSC, high-dose hydrocortisone; LSC, low-dose hydrocortisone; LV, left ventricle; and ROSC, return of spontaneous circulation.
Myocardial Damage and Mitochondrial Injury
Compared with the sham group, at 4 hours after ROSC, the HE-stained LV sections in all groups exhibited myocytolysis, waving, and transverse contraction bands. Both treatment groups had reduced myocytolysis compared with the control group (Figure 3A and 3B). With GT staining, significantly abnormal mitochondrial aggregation was noted in the control group but was less evident in both treatment groups (Figure 3A and 3C). Transmission electron microscopy examination of the control group showed mitochondrial swelling and edema, outer-membrane rupture, and loss of inner-membrane cristae with amorphous densities. The treatment groups had less mitochondrial membrane damage, much less swelling and edema, and more intact lamellar cristae (Figure 3A). Moreover, plasma troponin I levels were significantly lower in the treatment groups than in the control group (Figure 3D).
Compared with the sham group, the control group exhibited an accelerated rate of Ca2+-induced mitochondrial swelling; this acceleration was mitigated in the HSC group (Figure 4A). The control group demonstrated decreased activities of NCCR, succinate-cytochrome c reductase, and cytochrome c oxidase compared with the sham group. Administration of hydrocortisone, regardless of dosage, mitigated the inhibition of NCCR and cytochrome c oxidase but not of succinate-cytochrome c reductase (Figure 4B through 4D).
Cytokines, Adrenocorticotropic Hormone, and Cortisol
Compared with the sham group, the control group demonstrated increased IL-6 and TNF-α levels. Significantly reduced IL-6 and TNF-α levels were noted in the treatment groups (Figure 5A and 5B). No significant between-group differences were observed in adrenocorticotropic hormone or cortisol levels, except for a trend toward a higher cortisol level in the HSC group compared with the control group (P=0.078; Figure 5C and 5D).
Cardiac and Brain Changes at 72 Hours Following ROSC
Compared with the sham group, at 72 hours following ROSC, the HE staining of the LV in all groups showed increased fibroblast proliferation. Both the LSC and HSC groups demonstrated improved histological outcomes compared with the control group: fibroblast proliferation and collagen fibers were less prominent in the treatment groups than in the control group (Figure 6A through 6C). Regarding cerebral injury, less neuronal death was noted in the animals receiving post-arrest hydrocortisone than in those receiving saline as placebo (Figure 6A and 6D).
DISCUSSION
The current study demonstrated that hydrocortisone administration immediately after ROSC may mitigate post-arrest myocardial damage, ameliorate cardiac mitochondrial injury, and improve survival, neurological, and histological outcomes in a rat model of VF cardiac arrest. In addition, hydrocortisone decreased serum IL-6 and TNF-α levels after I/R injury but did not affect serum ACTH or cortisol concentrations.
As an extreme stressor, cardiac arrest causes devastating physiological changes. Compared with other stress states, cardiac arrest is associated with low cortisol levels during and after CPR. 3,21 Relative adrenal insufficiency has been identified as a poor prognostic factor for shock-related mortality after cardiac arrest. 22 Several population-based studies 7,8 and clinical trials 5,6 have demonstrated the benefit of steroids on survival and neurological recovery in patients resuscitated from cardiac arrest, although some retrospective studies and one small clinical trial have not produced consistent findings. [23][24][25] Donnino et al showed no difference in shock reversal, survival to discharge, or neurological recovery between cardiac arrest survivors receiving hydrocortisone and those receiving placebo. 26 A meta-analysis including 4 randomized controlled trials and 3 observational studies demonstrated that steroid use during CPR and after cardiac arrest was significantly associated with an increased rate of ROSC and survival to discharge. 27 The significant improvement in both 72-hour survival and neurological outcomes in the treatment groups in the present study is consistent with the findings of these studies.
Post-resuscitation myocardial dysfunction has been reported in both VF and asphyxial cardiac arrest. [28][29][30] Two clinical trials have reported elevated systolic, diastolic, and mean blood pressure in patients receiving hydrocortisone for post-resuscitation shock. However, because the treatment groups also received vasopressin, epinephrine, and methylprednisolone during CPR, the individual effect of hydrocortisone is difficult to determine. 5,6 Hydrocortisone has been reported to ameliorate vasoplegic syndrome (vascular hyporesponsiveness to vasopressors) 31 and critical illness-related corticosteroid insufficiency in patients with septic shock. 32 In this study, although no significant differences in mean arterial pressure were observed, the animals treated with hydrocortisone showed improvement in cardiac output and systolic/diastolic function. The improved outcomes may be attributed not only to the alleviation of vasoplegia and critical illness-related corticosteroid insufficiency but also to improved cardiac performance. Steroids exert both positive and negative effects on the heart. The use of steroids after myocardial infarction may impair myocardial healing. 33 In an experimental myocardial infarction model of isolated perfused rat heart, dexamethasone aggravated cardiac damage. 34 Moreover, a preliminary study revealed an association between corticosteroid therapy and increased mortality in patients with shock following acute myocardial infarction. 35 However, steroids appear to be cardioprotective against I/R injury without definite vessel occlusion. In cardiac surgery, steroids strongly inhibit inflammatory responses and attenuate post-reperfusion myocardial injury. 36,37 In epinephrine-induced myocardial ischemia, hydrocortisone pretreatment mitigated microscopic myocardial damage. 15 Similarly, in the present study, both microscopic and troponin I examinations revealed reduced myocardial injury in the animals that received hydrocortisone.

Figure 5. Cytokines, adrenocorticotropic hormone, and cortisol levels. A and B, Low-dose hydrocortisone and high-dose hydrocortisone groups showed significantly reduced interleukin-6 and tumor necrosis factor-α levels compared with the control group (ANOVA with post-hoc correction; n=6). C and D, No significant differences in ACTH or cortisol levels between treatment groups and control group were noted (ANOVA with post-hoc correction; n=6). ACTH indicates adrenocorticotropic hormone; HSC, high-dose hydrocortisone; IL-6, interleukin-6; LSC, low-dose hydrocortisone; and TNF-α, tumor necrosis factor α. *P<0.05 compared with the control group, #P<0.05 compared with the sham group.
During cardiac arrest, cardiac mitochondria are progressively damaged, and CPR is able to preserve their function and viability. 38 Upon full resumption of reperfusion, cardiac mitochondrial damage is further aggravated by the surge in oxidative stress. 17,18,39 Our results revealed overt morphological damage to cardiac mitochondria after successful resuscitation. Adequate mitochondrial function is highly dependent on the complex ultrastructure of mitochondria, and damage to this ultrastructure may play a vital role in the pathogenesis of cardiac mitochondrial dysfunction. 40 Therefore, in the present study, inner-membrane instability (mitochondrial permeability transition pore opening) and mitochondrial dysfunction (complex activity) following I/R injury may have resulted from ultrastructural change. Post-arrest hydrocortisone attenuated both cardiac mitochondrial injury and dysfunction.
The cardioprotective effects of steroids against I/R injury may also come from their modulation of immunological and inflammatory responses. Post-cardiac arrest syndrome is characterized by high levels of circulating cytokines and adhesion molecules, the presence of plasma endotoxin, and dysregulated leukocyte production of cytokines. 21 Cardiac arrest significantly increases the levels of TNF-α in the striatum and hippocampus, 41,42 of IL-1α, IL-1β, IL-6, and TNF-α in the heart, 43 and of IL-6 and TNF-α in plasma. 43,44 Compared with healthy individuals, production of endotoxin-induced TNF-α, IL-6, and IL-10 was substantially increased in cardiac arrest survivors and was associated with an increased need for vasopressor support. 10,45,46 In patients with sepsis, TNF-α and IL-6 are, along with several others, the most frequently detected circulating cytokines. 47 These cytokines are involved in sepsis-induced cardiac dysfunction 48,49 and are correlated with the prognosis and severity of sepsis. 50 Hydrocortisone reduces the levels of proinflammatory cytokines, including interleukin-1β, interferon-γ, TNF-α, and IL-6. 51 Our current study also showed reduced TNF-α and IL-6 levels in animals treated with hydrocortisone compared with those given a placebo. Recently, extracellular mitochondria and mitochondria-derived molecular patterns have been recognized as potent inducers of inflammatory responses. 52 The relationship and interaction between cytokines and cardiac mitochondria under hydrocortisone treatment after cardiac arrest warrant further investigation.
Our study has several limitations. First, we chose 72 hours as the observation period for survival and neurological outcomes, which may be insufficient for the observation of long-term outcomes. Second, we used healthy male animals, whereas in clinical practice VF cardiac arrest usually accompanies heart disease. Because cardiac arrest in clinical practice is more complex than in the animal model we used, our results should be interpreted cautiously with regard to clinical application. The exclusion of female rats may also limit the applicability of our findings. Third, troponin, adrenal hormones, and cytokines were not measured at 72 hours; therefore, the evolution or resolution of damage can be inferred only partly from the histological examinations of the heart. In addition, blood gases and lactate were not followed during the post-arrest period. Finally, hydrocortisone was administered as a single dose immediately after ROSC; whether continuous infusion or repeated bolus injection of hydrocortisone would further improve outcomes requires further investigation. | 2021-05-08T06:17:04.317Z | 2021-05-07T00:00:00.000 | {
"year": 2021,
"sha1": "0184dd0e637af061af981e9be0706a73d7435288",
"oa_license": "CCBYNCND",
"oa_url": "https://www.ahajournals.org/doi/pdf/10.1161/JAHA.120.019837",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9eaadeb9178fb787938928b7cb5f8299b1b230da",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53538369 | pes2o/s2orc | v3-fos-license | Religiosity, Spirituality, and Mental Health in Portugal: a call for a conceptualisation, relationship, and guidelines for integration (a theoretical review)
In Portuguese research, particularly in the mental health field, little has been done regarding religiosity, spirituality, and mental health. Thus, this paper strives to stimulate interest in this area by providing an overview of the body of research on religiosity, spirituality, and mental health, highlighting the role and importance these dimensions have in the lives of many people, whether in health or in mental distress and illness. A brief review of the conceptualisation of religion (religiosity) and spirituality is provided, as well as some areas of disagreement and contention. Guidelines for integration and professional training are also included, without discarding the ethical considerations inherent in this process. Finally, reflections are offered as to why bringing religiosity and spirituality into the mental health field is important, as well as some implications for clinical practice, with particular focus on the Portuguese mental health system.
However, almost no research has studied this subject so far. For instance, in terms of previous work focusing on the integration of religion and spirituality into the professional healthcare field, most of the work was conducted in nursing and end-of-life care (Caldeira, Castelo Branco & Vieira, 2011), whereas in mental healthcare research (Psychology, in particular) no academic research or publications exist, apart from a few master's dissertations (e.g. Garrett, 2010; Taranu, 2011) and research in the context of quality of life and well-being (e.g. Pinto & Pais-Ribeiro, 2007; Gouveia, Marques & Ribeiro, process. It has been hypothesised that this impact is largely due to the meaning, meaning making, and connection to the transcendent that religion and spirituality may provide to one's life (George, Ellison & Larson, 2002; Dein, 2004; Büssing, Ostermann & Matthiessen, 2005; Park, 2005; Dein, Cook, Powell & Eagger, 2010; Koenig et al., 2012), in the sense that these domains can influence one's beliefs, goals, emotions, or even the core of human existence (Park, 2005).
In light of the above, the present paper aims to highlight some of the current perspectives on the relevance of religion and spirituality in mental health research and, most importantly, how to incorporate these life aspects in psychological treatment with religious and/or spiritual clients.
Therefore, this theoretical review is organised according to the current trends of research on religiosity, spirituality, and mental health. It first discusses the ambiguities and confusion surrounding the relationship between the concepts of religion, religiosity, and spirituality. Then, briefly, the relationship between religion, spirituality, and mental health is addressed, focusing on the impact the former have on mental health outcomes and on the psychological treatment process. Next, from an integrative and multicultural point of view, some guidelines are provided, along with the ethical issues that the integration of religious and spiritual matters into psychological treatment requires.
RELIGION (RELIGIOSITY) AND SPIRITUALITY: RELATIONSHIP AND CONCEPTUALISATION
When recognizing the different dimensions of "religion" and "spirituality", the understanding and definition of these phenomena are by nature complex, and often the two terms are used interchangeably in scientific research and clinical practice (Pargament, 1999; Zinnbauer, Pargament & Scott, 1999; Büssing et al., 2005; Koenig, King & Carson, 2012). This is essentially due to a great number of different perspectives, starting points, and cultural contexts, as these terms can be seen as synonymous, overlapping, or completely different concepts (Hill et al., 2000; Tummala-Narra, 2009).
The ambiguity is so profound that, for some authors, religion should be considered a broader and more general construct than spirituality (Pargament, 1999; Hill et al., 2000), whereas others see religion as a component falling under the "umbrella" of spirituality (Miller, 1999; Dein, 2004). As mentioned before, it is commonly accepted, though, that both religion and spirituality emphasise the depth of meaning and purpose in life and connection to the sacred (Dein, 2004; Büssing et al., 2005; Dein et al., 2010; Koenig et al., 2012); some people may be "religious and spiritual", "religious, but not spiritual", or yet again "spiritual, but not religious", without dismissing those who are "neither religious nor spiritual" (Worthington, Kurusu, McCollough & Sandage, 1996; Koenig, George, Titus & Meador, 2004; Richards & Bergin, 2005; Saucier & Skrzypińska, 2006; Büssing, 2010).
As a matter of fact, religion and spirituality do overlap as concepts and in some regards might also be similar (Richards & Bergin, 2005; Fisher, 2011; Hadzic, 2011; Koenig et al., 2012). However, one must also acknowledge that they are not identical (Zinnbauer et al., 1999), and for that reason conceptualisations that focus on, or are narrowed to, certain aspects of religion or spirituality might not be appropriate (Saucier & Skrzypińska, 2006; Nelson, 2009).
Obviously, this ambiguity and confusion has a great impact on the credibility and understanding of research and practice in this area (Koenig, 2009), leading for instance to: limited or otherwise too broad definitions of religion and spirituality, which results in a loss of their idiosyncratic characteristics (Zinnbauer, et al., 1999;Hill et al., 2000); different forms of polarisation of religion and spirituality, the former being institutional/negative/harmful and the latter individualistic/positive/beneficial (Pargament, 1999;Zinnbauer et al., 1999;Hill & Pargament, 2003;Pargament, 2007;Smith, 2007;Mutter & Neves, 2008); and pathologization of religious and spiritual beliefs/practices in diverse or minority contexts (Frame & Williams, 1996;Fallot, 1998;Lukoff & Turner, 1998;Tummala-Narra, 2009;Adams, 2012).
Concerning this tendency for religious and spiritual pathologization, it is crucial to emphasise the weight that a Western conceptualisation has put on this process of isolating, analysing, and defining both religion and spirituality (Smith, 2007). Indeed, Western conceptions of religion and spirituality have a tremendous impact on defining acceptable forms of approaching these concepts, with a particular interest in the realm of mental health, and undoubtedly considerable progress has been made in this field.

However, a complete discussion of this conceptual history, as important as it is, is beyond the purpose of this article (for extended discussions please see Fallot, 1998; Zinnbauer et al., 1999; Hill et al., 2000; Hill & Pargament, 2003; Smith, 2007; Koenig, 2009). Thus, this paper presents some of the most significant and recent inputs in this field, highlighting that special care must be taken to properly conceptualise these two multidimensional concepts.
Religion (Religiosity)
One of the leading definitions of religion is provided by Argyle and Beit-Hallahmi (1975, as cited in Argyle and Beit-Hallahmi, 2014). These authors define religion as "a system of beliefs in a divine or superhuman power, and practices of worship or other rituals directed towards such a power" (p. 6). This definition by nature emphasises the substance of religion, focusing primarily on institutionalised beliefs, emotions, practices, and relationships of individuals that are clearly related to the institutionalised sacred (Fallot, 1998; Pargament, 1999; Zinnbauer et al., 1999; Richards & Bergin, 2005; Lindridge, 2007). Pargament (1997, as cited in Pargament, Magyar-Russell & Murray-Swank, 2005) also defined religion as "a search for significance in ways related to the sacred" (p. 32), highlighting that the sacred here should be understood in a broader sense, rather than being limited only to the traditional concepts of higher powers, divinity, or God (Zinnbauer et al., 1999).
More recently, Koenig and colleagues suggested a more comprehensive definition of religion in an attempt to bridge, for instance, the gap between western and eastern traditions. Therefore, for these authors religion represents a "multidimensional construct that includes beliefs, behaviours, rituals, and ceremonies that may be held or practiced in private or public settings, but are in some way derived from established traditions that developed over time within a community" (Koenig et al., 2012, p.45).
Moreover, religion is connected to other dimensions, such as the search for personal goals (i.e. meaning and purpose in life and good physical health), and also serves a social function (i.e. a sense of belonging and closeness to others) (Zinnbauer et al., 1999; Johansen, 2010). Accordingly, religious life, religiosity, or religiousness involves participation in a set of beliefs, rituals, and activities such as attending religious and church services, scripture reading, prayer, and meditation, among others (Fallot, 1998; Haynes, 2009).
Spirituality
Compared with religious studies, the interest in and research on spirituality are much more recent, yet the findings are numerous and diverse. In the health context, across different fields, studies have been conducted in an attempt to define and distinguish spirituality from religion. Nonetheless, researchers might still be using personal definitions of spirituality, typically based on their own understanding of this dimension (Hadzic, 2011).
Also, spirituality is progressively being used to refer to the individual's subjective aspect of religious experience (Hill & Pargament, 2003). Research and society in general have thus been witnessing a growing movement away from the institutionalisation of beliefs (religion) toward their individualisation, and more and more individuals identify themselves as being "spiritual, but not religious" (Zinnbauer et al., 1999; Miller, 2003; Saucier & Skrzypińska, 2006; Winslow & Wehtje-Winslow, 2007; Baetz & Toews, 2009; Büssing, 2010), i.e. those who are "committed to the spiritual dimension of life, but not be identified with any specific religion" (Miller, 2003:150).
RELIGIOSITY, SPIRITUALITY AND MENTAL HEALTH: ROLE AND IMPACT
As mentioned above, religious and spiritual issues have come a long way from being considered completely outside the mental health domain to where they stand today. In the meantime, these two dimensions were pathologized and considered to have a strong, negative effect on mental health and psychosocial functioning; they were then depathologized and considered to have a slight but positive impact on mental health; and today they are believed to be key dimensions in many people's lives, with practical and important repercussions on therapeutic outcomes (for extended knowledge on the relationship between religion, spirituality and mental health, see Lukoff & Turner, 1998; Johansen, 2010).

Some developments in the mental health field have contributed to this changed interest in religion and spirituality; they are reviewed here so that the current standpoint on the relationship between these dimensions can be understood and supported.

Research regarding religion, spirituality, and health (either physical or mental) is currently much more encouraged and present in the literature, particularly (though not exclusively) in fields such as medicine, neuroscience, epidemiology, and psychology.
Starting from a developmental perspective, there is little doubt that religion and spirituality play an important role in development across the human life span (Levenson, Aldwin & D'Mello, 2005). Innumerable models and theories have therefore been developed in an attempt to explain how, across the human life span, one sees oneself in relation to what is considered divine or transcendent (Dowling & Scarlett, 2005). Most of the work in this field has concerned the importance religion and spirituality have for adaptation in later life; lately, however, interest also covers earlier life, including childhood, adolescence, and middle adulthood (Levenson et al., 2005).
Concerning the relationship between religion, spirituality, and health, most of the work involves studies on mental health, and again the majority shows a strong, positive relationship between these dimensions. Specifically, it seems that religious and spiritual people tend to present better indicators of physical and psychological well-being, in that they present lower rates of physical illnesses and psychological disorders, such as depression, suicide, anxiety, substance abuse, and marital problems (Koenig, 2000, 2001; Cohen & Koenig, 2004; Büssing et al., 2009; Johansen, 2010; Koenig, 2012).
There also seems to be a positive association between the onset and/or worsening of psychopathological symptoms and the increased importance of religion and spirituality in the lives of many patients/clients (Baetz & Toews, 2009; Gockel, 2011). Other studies indicate that, in psychological distress and suffering, religious clients tend to recover faster and with better outcomes when mental health professionals seek to integrate their clients' religious beliefs and practices (Fallot, 1998; Aukst-Margetić & Margetić, 2005; Curlin et al., 2007; Baetz & Toews, 2009). This might be in response to clients' wishes that their religion or spirituality be included in psychological treatment (Miller, 1999; Knox, Catlin, Casper & Schlosser, 2005; Martinez, Smith & Barlow, 2007). However, it is important to highlight that not all studies have found a positive relationship between religion, spirituality, and mental health, and these should also be taken into account. Some researchers have indicated, for instance, that negative/harmful emotions such as dissatisfaction or anger towards God and/or a congregation and a sense of guilt may arise, or that a strict religious background or membership correlates with impaired mental health (Fallot, 1998; Cohen & Koenig, 2004; Lindridge, 2007; Baetz & Toews, 2009; Hadzic, 2011; Curry & Roach, 2012). It also seems true that, when understood, used, or manipulated in an unhealthy way, some religious and spiritual beliefs and practices may result in worse mental health and neurotic behaviour (Cohen & Koenig, 2004; Curry & Roach, 2012). For instance, among patients with obsessive-compulsive disorders (or even individuals in nonclinical samples), it is not uncommon to find beliefs about or fear of sin/God, and consequently the use of daily practices such as church attendance or praying numerous times per day to try to neutralise or decrease levels of distress (Abramowitz et al., 2002; Cohen & Koenig, 2004; Williams, Lau & Grisham, 2013).
Accordingly, the research and professional challenge here may require not only the recognition of the positive and negative impact that religion and spirituality might have on mental health, but most importantly the understanding of how this impact occurs; how it can be used as a resource; how it can be challenged; and how it can be integrated accurately into therapeutic settings, thus benefiting the client, the therapeutic relationship and the therapeutic process as a whole.
RELIGION, SPIRITUALITY AND MENTAL HEALTH: AN INTEGRATIVE VIEW
It is important to highlight that this paper does not serve as a preparation or a training manual for mental health professionals working with religious people or religious matters in therapy. Rather, it should be seen as a "wake-up call" or another helper (so to speak), guiding mental health professionals in searching for and defining what to do prior to beginning to work with these issues. Therefore, some guidelines found in the literature for an accurate integration of religious and spiritual issues in therapy will be provided hereinafter, followed by the ethical issues involved in achieving this accurately.
From an integrative point of view, psychotherapy and religion/spirituality seem to share similar purposes, despite using different logics, methods, and strategies. Fundamentally, both worlds try to emphasise the importance of self-knowledge (resources, responsibilities and difficulties); promote the acquisition of strategies to deal with guilt and shame; guide the search for solutions to personal conflicts; and seek to answer questions of purpose and meaning in life, among others (Corey, 2001; West, 2004). Nonetheless, bringing these two worlds together appears to be a complex and ethically challenging process (Martinez et al., 2007; Smith, Bartz & Scott Richards, 2007), starting with the clarification and definition of the boundaries that distinguish a religiously and spiritually sensitive psychotherapeutic intervention from spiritual care or counselling (Miller & Thoresen, 1999).
As mentioned earlier, international academia and professionals have been striving to demonstrate how important religion/spirituality can be in the lives of many people, which has led to an understanding of its effects on health, emphasising both the advantages and disadvantages for mental health status. Nowadays, the focus has shifted towards understanding what role religion and spirituality play when integrated into therapeutic settings. It is therefore not surprising to witness a rising tide of hypotheses, theories, research, and studies in recent years concerning the development and integration of religious and spiritual perspectives and interventions into mainstream psychological practice (Richards & Bergin, 2004).
In Portugal, however, these dimensions are still poorly examined in the related scientific fields, intensifying the gap between research and practice. Few studies have been conducted in the mental health field researching religion and spirituality, and to date no training programme (graduate or postgraduate) guides mental health professionals on how to integrate these issues into clinical practice. An important step, though, was taken by the Order of Portuguese Psychologists (OPP, 2011), which presented religion as one of the dimensions to be taken into consideration when "practice is aimed at minority populations, psychologists seek to obtain professional and scientific knowledge in order to intervene abiding by the ethics and efficiently, adapting their intervention to factors associated with sex, age, sexual orientation, gender identity, ethnicity, cultural origins, nationality, religion, language, socioeconomic level, capacity, or others" [section 5.6. cultural minorities].
Following this path, a small qualitative study was conducted by Freire and Moleiro (2011), which found that participants (members of religious minority groups) perceived religiosity/spirituality as a dominant factor in their lives. Even though they expressed openness and positive representations of the psychologists' work, this was not considered to be a first help-seeking option, since all participants agreed the primary help would be either within their family or their religious community (leaders and other members).
These steps raise a few questions, related firstly to the fact that Portuguese society is predominantly religious and Catholic, with religious minority groups representing only 4% of the population. Furthermore, as presented above, one must reckon with the importance and impact religion and spirituality have on people's lives and on therapeutic processes (for a discussion see Koenig, 2012; Vieten, Scammell, Pilato, Ammondson, Pargament & Lukoff, 2013). What is certainly not known so far is what role religion and spirituality play in the life and work of Portuguese mental health professionals, or again the importance and impact religious and spiritual issues have when integrated into therapy.
In contrast to the Portuguese context, many studies have been conducted worldwide. As a matter of fact, international contributions range from approaches in which religion and spirituality can be integrated (or not) to a continuum of spiritual care, as advanced by Saunders, Miller and Bright (2010). These authors conceived the integration of a patient's spiritual or religious beliefs and practices (SRBP; Saunders et al., 2010) as ranging from: spiritually avoidant care, where the mental health professional avoids issues related to a patient's SRBP, even if the patient indicates a need or desire to discuss them; to spiritually conscious therapy, a respectful and sensitive way of determining the impact that religious and spiritual matters have on the patient and the patient's problems; whereas spiritually integrated care focuses on patients' SRBP without explicitly seeking the maintenance or transformation of those; and finally spiritually directive psychotherapy, where the goal of therapy is to help a patient resolve psychological problems either by maintaining or by transforming SRBP (Saunders et al., 2010; Figure 1). It should be highlighted that the latter three approaches differ not only in content but also in the level of competence needed and the ethical concerns inherent to each type of care. Examples of the latter three approaches range from those grounded in and tailored to religious foundations, i.e. Buddhist (Nauriyal, Drummond & Lal, 2006), Christian (Jones & Butman, 1991), Islamic (Dwairy, 2006); targeting specific religious or ethnic groups, i.e. Jews (Rosmarin, Pargament, Pirutinsky & Mahoney, 2010), Mormons (Martinez et al., 2007; Lyon, 2013), Latinos (Cervantes, 2010), African-Americans (Frame & Williams, 1996; Constantine, Lewis, Conner & Sanchez, 2000), Asians (Hall, Hong, Zane & Meyer, 2011; Leong & Kalibatseva, 2011); targeting specific psychological disorders, i.e. depression (Agishtein et al., 2013; Pearce & Koenig, 2013), alcoholism and drug abuse (Hodge, 2011b; Lietz & Hodge, 2013), eating disorders (Kristeller & Hallett, 1999; Marsden, Karagianni & Morgan, 2007); founded on specific psychotherapeutic approaches, i.e. spiritually modified cognitive therapy (Hodge, 2006), religiously oriented cognitive behaviour therapy (Robertson, Smith, Ray & Jones, 2009), religious cognitive-emotional therapy (Rajaei, 2010), mindfulness-based cognitive therapy (Fresco, Flynn, Mennin & Haigh, 2011), Christian cognitive behavioural therapy (Pearce & Koenig, 2013); developed from new forms of therapy such as spiritually or religiously integrated, oriented, or accommodative psychotherapy (McCullough, 1999; Pargament, 2007; Sperry, 2012; Ripley et al., 2014); or again coming from the multicultural counselling field (Hage, Hopson, Siegel, Payton & DeFanti, 2006; Savage & Armstrong, 2010; Hall et al., 2011); among many other contributions.
Following the multicultural perspective, and for the purposes of this paper, interventions aiming specifically to provide spiritual counselling or direction are not covered. This includes all interventions aiming to train religious and spiritual professionals (e.g. chaplains or religious leaders) in psychology, psychotherapy, or counselling, or to train mental health professionals to work exclusively with religious and spiritual issues [without discarding or questioning their value]. Thus, a multicultural perspective on the integration of religious and spiritual issues into clinical practice is discussed below.
Multicultural Perspective
Although religion and spirituality are only explicitly outlined under the non-discrimination, boundaries, and assessment sections of the APA 2, APA 3 and ACA 4 codes of ethics, a remarkable step was taken in 2003, when a set of 6 extensive guidelines on multicultural education, training, research, practice, and organizational change for psychologists was presented (APA 1, 2003). These guidelines were intended to guide psychologists' work "in the midst of dramatic historic socio-political changes in U.S. society" (APA 1, 2003, p.377) and therefore to help the professional society embrace multiculturalism and diversity in psychological settings.
However, it is relevant to highlight that this absence of spiritual and religious references can also be found within European codes of ethics, such as the Croatian (2004). Another ground-breaking step was the work of the "Spirituality and Psychiatry Special Interest Group" (SPSIG), a group within The Royal College of Psychiatrists (United Kingdom), which recently published the "Recommendations for psychiatrists on spirituality and religion" (Cook, 2013). These recommendations are included in the discussion segment later on.
According to the new edition of the ACA's code of ethics (ACA, 2014, p.20), multicultural or diversity counselling "recognizes diversity and embraces approaches that support the worth, dignity, potential, and uniqueness of individuals within their historical, cultural, economic, political, and psychosocial contexts". From this point of view, a [multi]culturally competent mental health professional must, first of all, be aware of his/her own cultural background, as well as of the client's cultural background. Most importantly, this professional must acquire specific knowledge and skills to attend accurately and effectively to the client's needs and expectations (Sue, Arredondo & McDavis, 1992; Arredondo et al., 1996; Hage et al., 2006; Metzger, Nadkarni & Cornish, 2010; Vieten et al., 2013; ACA, 2014).
However, and once again, the Association for Multicultural Counseling and Development (AMCD), involved in the task force that developed the guidelines on multicultural issues, initially focused only on the 5 major cultural/ethnic groups of the United States, namely the African/Black, Asian, Caucasian/European, Hispanic/Latino and Native American or indigenous groups (Sue et al., 1992; Arredondo et al., 1996).
Nonetheless, in later works, and within a much broader definition, understanding, and view of the person, other dimensions were introduced in order to include age, sex/gender, sexual orientation, and spiritual/religious identification, among others (APA 1, 2003; Hage et al., 2006; Metzger et al., 2010; Vieten et al., 2013). This inclusion allowed the emergence of new trends in the practice of psychology, such as psychological practice with older adults (APA 1, 2009; 2013 1); with girls and women (APA 1, 2007); with lesbian, gay and bisexual clients (APA 1, 2011); and the endorsement by ACA of the competencies for addressing spiritual and religious issues in counselling as multicultural issues in clinical practice (Association for Spiritual, Ethical and Religious Values in Counseling [ASERVIC], 2009).
It is important to highlight, however, that this status was not earned only under the multiculturalism agenda. As a matter of fact, for the past four decades the APA 1's 36th Division, the Society for the Psychology of Religion and Spirituality, has been promoting discussions aiming to understand the significance religion and spirituality have in people's lives and in the field of psychology. The Division's latest achievement was the publication of the APA 1's Handbook of Psychology, Religion, and Spirituality (2013), a two-volume handbook introducing the most comprehensive analysis of the current state of the psychology of religion and spirituality 5.
SPIRITUAL AND RELIGIOUS COMPETENCIES: GUIDELINES FOR INTEGRATION
First things first: one must acknowledge that it is not necessary (or even possible) for mental health professionals to be specialists in the full range of religious and spiritual perspectives represented in a society. As a matter of fact, even though professionals' personal religious and spiritual views can serve as important components of expertise (meaning these views can influence their assessment of their patients' spiritual and religious issues, as well as the decision as to whether the use of religious and spiritual interventions is relevant), these are not sufficient or even necessary conditions for competence (Plante, 2007; Gonsiorek, Richards, Pargament & McMinn, 2009; Gockel, 2011).
As behavioural scientists, mental health professionals must achieve expertise through learning and training; therefore, promoting (multi)culturally sensitive competencies may represent an effective way to achieve an accurate integration (Miller & Thoresen, 1999), in addition, of course, to adequate education and ongoing training.
Therefore, it is important to highlight that this process should always be conducted in a clinically and ethically competent manner, with no exceptions. A psychologist, for instance, should always "provide services, teach, and conduct research with populations and in areas only within the boundaries of their competence, based on their education, training, supervised experience, consultation, study, or professional experience" (APA 1, 2010:5). This means that mental health professionals are not only encouraged to become competent when working with religious and spiritual clients and issues, but are also compelled to act in accordance with the ethical guidelines in the field.
The development of religious and spiritual competencies might, above all, strengthen the therapeutic relationship between mental health professionals and their patients; as well as allowing these professionals to be in a better position to distinguish the religious and spiritual issues (healthy and unhealthy beliefs, practices and behaviours) from psychopathology (Knox et al., 2005;Hage et al., 2006;Johansen 2010;Savage & Armstrong, 2010;Vieten et al., 2013).
With the increasing evidence that mental health professionals needed knowledge and skills to become competent in the area of spirituality and religion in therapy, ASERVIC, a division of the ACA, convened a summit on spirituality aiming to discuss ideas on how to incorporate religious and spiritual issues into the psychological treatment process (Miller, 1999). As a result, a set of 9 competencies was proposed to the Council for Accreditation of Counseling and Related Educational Programs (CACREP), which were later revised (Cashwell & Watts, 2010) and endorsed by the ACA. This new version of ASERVIC's guidelines for addressing spiritual and religious issues in counselling now comprises 14 competencies, divided into 6 categories. These competencies were first validated using factor analysis (Robertson, 2010), more recently revisited by Reiner and Dobmeier (2014) 6, and revised by Dailey, Robertson & Gill (2015).
ASERVIC's steps towards the integration of spiritual and religious issues into clinical practice appear to be an important driving force. Consequently, they led (directly or indirectly) to new forms of competencies and integration, as outlined by Savage and Armstrong (2010). These authors presented a multicultural model (attitudes/values; knowledge; skills) based on ASERVIC's list of spiritual and religious competencies for psychotherapists. Their chapter is so comprehensive (and recommendable) that it incorporates not only a rich amount of literature review and case vignettes illustrating "real" cases in therapy, but also (and most importantly) an extensive resource of practical guidelines supporting mental health practitioners. For instance, they provide important recommendations on how to assess, diagnose, and conceptualise a case through multicultural and religious/spiritual perspectives, as well as suggestions and resources to develop spiritual and religious skills through self-assessment and training activities.
More recently, Vieten and colleagues (2013) published the "Spiritual and Religious Competencies for Psychologists", a set of basic spiritual and religious competencies based on attitudes, knowledge, and skills, in an attempt to overcome the lack of empirically validated guidelines or ultimately to be used in policy changes (Vieten et al., 2013). These competencies might therefore be another important resource guiding mental health professionals in "determining how and when to actively include religious or spiritual interventions into psychotherapy for those clients who request it and requires proficiency, rather than basic competence" (Vieten et al., 2013, p.138). However, what one must acknowledge as ground-breaking is the fact that these competencies were designed not only to help mental health professionals provide care to those in need (e.g. religious and/or spiritual clients), but also to prepare them to attend to clients lacking religious or spiritual involvement 7.
For instance, in their paper, Steen and colleagues (2006) presented four topics addressed in the ACA's code of ethics as a basis to explore the ethical challenges associated with the integration of religious and spiritual matters into counselling. They highlighted the need for a mental health professional to be, first of all, aware of his/her own beliefs about religion and spirituality, questioning how those beliefs might affect their work and ensuring that they do not affect clients (Steen et al., 2006; Cook, 2013). For instance, proselytism (an attempt to convert a client to engage in or leave a religious or spiritual faith/community) is one practice involving ethical issues that need to be considered.
This issue was recognised as particularly difficult to deal with in cases where a mental health professional is either actively involved in a particular religious tradition or holds anti-religious and anti-spiritual beliefs, risking falling into some ethical pitfalls (Plante, 2007). For instance, one might fail to recognise the potential harm of specific religious and spiritual beliefs and practices, or overestimate it by pathologising those beliefs and practices (Vieten et al., 2013).
Additionally, Plante (2007) proposed the RRICC model (which stands for Respect, Responsibility, Integrity, Competence, and Concern), composed of five main virtues considered to be common to the different professional codes of ethics. The importance of a mental health professional being proactive in overcoming the lack of adequate training in graduate and postgraduate programmes is one of the virtues highlighted. Plante also cites Richards and Bergin's recommendations, such as the need for mental health professionals to stay informed about the research in this area; attend workshops and seminars; seek appropriate supervision and consultation; and personally learn about the religious and spiritual traditions of their clients. These recommendations thus encourage mental health professionals not to avoid working with religious and spiritual issues because of a lack of adequate training, nor to do so inappropriately, since many resources are available to start becoming competent in this area.
However, this leads to another important issue concerning service provision only within the boundaries of one's competence, as the APA 1 (2010) established. When confronted with a religious and spiritual client, or with a client's wish to discuss religious and spiritual issues and problems, a mental health professional should not proceed with the work when feeling uncomfortable or unprepared, or when holding negative feelings towards the client's religiosity and spirituality, at the risk of being more harmful than helpful to the therapeutic relationship and the client. Nevertheless, termination and/or referral to another professional are not supported by the ACA (2014) when the reason is simply conflicting views, since mental health professionals are expected to be respectful and non-discriminatory. Therefore, recommendations are made in the following respects: seek collaboration with a religious adviser, i.e. a religious leader, clergy, or chaplain; seek consultation/supervision with a more competent mental health professional; or, ultimately, refer and/or terminate the therapeutic relationship (Miller, 1999; Steen et al., 2006; ASERVIC, 2009; Cook, 2013; Vieten et al., 2013; ACA, 2014).
CONCLUSION AND IMPLICATIONS FOR CLINICAL PRACTICE
International academics and professionals have been working to advance the research on mental health, religion, and spirituality. Portugal, however, might not hold equivalent standards concerning the research and proper integration of religious and spiritual issues into clinical practice. Much is still unknown. Nonetheless, some research focusing on multicultural aspects is being developed; the OPP, in 2011, included religious and spiritual issues as a cultural-minority characteristic that needs to be considered; and, overall, Portugal as a state is advancing in terms of legislation. Furthermore, following Law nº 253/2009 on spiritual and religious care in hospitals and other establishments of the National Health Service, a practical manual was developed, the "Manual de Assistência Espiritual e Religiosa Hospitalar" (manual for spiritual and religious care in hospital settings), and, as the Directorate-General of Health presents: "With this tool, health professionals have an advantage to develop an indispensable therapeutic complement to care with patients. Moreover, everyone realize the therapeutic dimension of spirituality. The religious and spiritual support is essential to the healing and caring of a patient." (Quotation retrieved from http://www.dgs.pt/?cr=21645) This represents a ground-breaking step. However, this step was taken by the "Grupo de Trabalho Religiões e Saúde" (Religions and Health Working Group) in an attempt to make Portuguese chaplaincy more "multicultural"; and although it concerns (mental) health professionals, as it promotes collaboration with religious advisers, it is not specifically directed at the integration of religious and spiritual issues into mental healthcare. For instance, it does not offer recommendations on how to use the information provided, and again it appears to rely "only" on the assumption of respect and non-discrimination. Also, other questions are yet to be answered: are health (or mental health) professionals aware of these techniques and opportunities for collaboration, and of their use? What impact have these had on professional work, therapeutic relationships, and clients? Is there a need to go further? Here we would say, inevitably: of course.
To sum up, this paper strived to demonstrate not only the need to acknowledge the importance of religion and spirituality in people's lives, but also how important these can be when properly integrated and worked through in clinical practice. By showing how much has been done internationally, this paper intended to provide mental health professionals an opportunity to consciously "choose" to engage in the integration of religious and spiritual issues into therapy, going as far as each one thinks might be suitable for his/her practice. This might be by engaging in approaches that explicitly integrate religious and spiritual issues, or by legitimately eschewing them while providing spiritually and culturally sensitive care (Vieten et al., 2013). What is definitely not adequate to the standard required in mental healthcare is to "deny" proper care by refusing to let these issues be part of a client's therapeutic process, or to merely refer clients to other professionals or domains claiming: "this is no longer my jurisdiction". | 2018-10-18T23:55:36.804Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "c3c3af3f4b128637a48b6db5625e02cf9eb98087",
"oa_license": "CCBYNC",
"oa_url": "http://scielo.pt/pdf/psi/v29n2/v29n2a01.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7c4dbce0ee033708038e8eba6580284e8b7ba2ba",
"s2fieldsofstudy": [
"Psychology",
"Philosophy",
"Medicine"
],
"extfieldsofstudy": [
"Psychology"
]
} |
169919131 | pes2o/s2orc | v3-fos-license | Motivations for tourism adoption by vineyards worldwide: A literature review
Wine tourism has been increasingly implemented by wineries in several countries around the world. This article aims to identify the motivations for wine tourism adoption by wineries around the world through a narrative bibliographic review. The paper includes an analysis of articles related to the subject taken from the Web of Science database, identifying the objectives, methods, and conclusions reached by the authors who researched wine tourism. The bibliographic survey, based on the keywords "wine" and "tourism", resulted in 20 articles after refinement by reading titles and abstracts. The results show that most wineries in the studied sample adopt wine tourism as a strategy to achieve marketing-related benefits. Some wineries are also motivated by competitive advantage, economic development, and the increasing demand for such services. It is concluded that wineries are increasingly seeking to adapt to current consumer demands, based on the perception of the importance of the service experience offered to the consumer, mainly in the sale of the product.
Introduction
The wine sector is being studied from different perspectives in the academic world; however, from the tourism perspective, interest in further studies on this subject only awakened as of the year 2000 [1]. Every year, millions of tourists travel to Europe and to other wine regions such as Australia, New Zealand, the United States, South Africa, and Latin America [2].
Tourism is an activity that has stood out in recent decades because it is a product that can only be consumed on the spot, and it has been considered necessary for the local development of the regions in which it occurs [3]. Segmented tourism emerges alongside the development of new proposals and destinations, among which wine tourism (enotourism) is found. Wine tourism is characterized by the displacement of people motivated by the interest in getting to know wine-producing regions [4].
In European countries, considered the "old world" of wine, the relationship between tourism and the beverage is already considered a tradition. These countries perceived that the association between these two chains showed potential for economic and social development as well as for wine heritage [5], and added value that enables new consumer experiences. Thus, this study proposes to answer the following question: why do winemakers adopt wine tourism? This question arises from the fact that winemakers are increasingly investing in tourism, and it is essential to know their motivations for this adoption.
Theoretical background
Wine tourism had its first research phase in the 1990s, when the first conference on the subject was held in Australia (1998). This first conference focused on justifying and better understanding the dimensions of the emerging subfield [6].
Due to competitiveness, some products such as still and sparkling wine became associated with specific territories, with regional brands and certification of origin. Some regions organized themselves into productive clusters and, based on this structure, specialized in particular products [7]. The concept of wine tourism, even today, is not uniform, owing to the many approaches to defining it. When based on the marketing approach, most definitions centre on motivations and experiences related to wine potential and tourists [6]. Wine tourism can also be considered as visits to vineyards, wineries, wine festivals, or grape shows in which the primary motivation for tourists is related to wine tasting and the attributes of a wine region [8].
Taking other elements into consideration, wine tourism can be defined as: [....] segment of the tourist phenomenon, which involves the movement of people, motivated by the organoleptic properties and by the whole context of wine tasting and production, as well as the appreciation of traditions, culture, gastronomy, landscapes and the typical characteristics of the regions producing grapes and wines. It is a phenomenon endowed with subjectivity, whose main substance is the encounter with those who produce grapes and wines [9].
Still on the existing theories within wine tourism, the different perceptions of wine tourists' motivations are noteworthy. On the one hand, there are those who argue that the primary motivation is the wine itself: its tasting, celebrations, and grape growing [8]. On the other hand, Johnson (1988) apud [8] infers that the particular landscape of the vineyards and the search for leisure through wine, in recreation not directly involving consumption of the drink, also define the motivation of the wine tourist.
For a destination to be attractive to tourists, it must have the support and infrastructure to receive them and provide the expected services. Wine is considered the central product of both the wine industry and the wine tourism industry. Some services are considered essential for increasing the benefits perceived by tourists, such as tastings, courses and lectures, customer service, wine clubs, and wine tours, as well as the attributes of the wine and the landscape [10].
In addition to the services considered basic, enotourism also offers complements such as recreation and relaxation, entertainment and events, as well as hospitality and other typical tourism services (such as lodging and transportation) [10].
In this context, it is possible to perceive that the growth of tourism is associated, to a large extent, with the combination of wine products with other tourist attractions and with their diffusion [1]. Moreover, regarding the world scenario, [11] states that the number of segments in "old world" countries, such as France, and in "new world" countries, such as South Africa, was made possible by the combination of gastronomic tourism, ecotourism, and rural tourism activities in wine-producing regions.
As of the 1950s, with technological and scientific evolution, some European countries began to perceive the dimension of tourism and to take advantage of their cultural and infrastructural characteristics in the search for greater competitiveness in the market. Tourism, therefore, has become one of the pillars of the international scenario [9].
The development of wine tourism, therefore, requires not only the infrastructure of grape growing but also additional services related to wine and to other, sometimes less related, activities. The enotourist looks for an experience that goes beyond the core offered by the wine industry; enotouristic consumption expects to feel and perceive the local culture through other products, customs, and gastronomy. Wine tourists are also looking for attractive destinations, in terms of landscape and climate, as well as to obtain information [6]. These perceived items demonstrate the magnitude of wine tourism, the complexity of its services, and the impact of its changes, since it involves different sectors and activities to generate an experience that the tourist will consider worthwhile.
If, on the one hand, the perception of benefits has received most attention, on the other hand, since 1990 the negative impacts caused by tourism have also begun to be studied, including by surveying the inhabitants of the regions visited [12].
Therefore, the development of wine tourism was studied taking into consideration the companies involved in the business, the tourists, the inhabitants of the tourism regions and other elements involved as control and regulation agents of the sector, as well as incentive foundations, among others.
Method
The objective of this study is to identify the motivations for the adoption of wine tourism by winemakers. To achieve this objective, a bibliographic review was carried out. The method consisted of analyzing objectives, methods, studied segments and conclusions of papers published in the Web of Science database. This method allowed us to analyze what has been produced in the world about wine tourism.
Initially, 391 texts on the subject were found using the words "wine" and "tourism." To refine the selection, the titles and abstracts of the papers found were read. This screening identified 20 articles relevant to this research. An Excel file was used to record the articles for analysis, and each selected article was then read in full.
The analysis of the data was carried out from the database generated in Excel, resulting in tables and charts for the presentation of the sample.
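The screening and tabulation step described above can also be reproduced with a short script. The sketch below is only illustrative: the file name and column names are hypothetical (they are not taken from the paper), and it assumes the manual title/abstract screening decision has been recorded as a column of the exported record list.

```python
import pandas as pd

# Hypothetical file and column names; real Web of Science export fields differ by format.
records = pd.read_csv("wos_export_wine_tourism.csv")              # the 391 hits for "wine" AND "tourism"
selected = records[records["include_after_screening"] == "yes"]   # manual title/abstract decision

# Articles per year and per source, as summarised in the paper's Table 2.
per_year_source = (selected.groupby(["year", "source"])
                           .size()
                           .reset_index(name="articles")
                           .sort_values("year"))
print(len(selected), "articles retained")                          # expected: 20
print(per_year_source.to_string(index=False))
```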
Results and analyses
In order to meet the proposed objective, this study gathered articles published in the Web of Science database. The data were published until December 2017, regarding the theme of wine tourism.
Twenty articles were selected, which indicated some motivation on the part of winemakers to adopt the wine tourism strategy. The selected articles are listed in Table 1.
Then, we wanted to know more about the selected sample. Therefore, a table was elaborated according to each selected year, as well as the various sources and the number of articles published by year ( Table 2).
Wine tourism has grown continuously over the last 50 years, becoming one of the fastest-growing economic sectors in the world [13]. There is still, however, a debate about its benefits for winemakers and the environment (Hall (1996) apud [14]).
The definition of wine tourism most cited in the articles is that of Hall (1996) apud [14]: Wine tourism can be defined as visits to vineyards, wineries, wine festivals and wine shows where a tasting of grape wines and experiencing the attributes of a grape wine region are the main motivating factors for visitors.
On the other hand, the definition of [15] complements the previous one with other services and experiences, making it broader and more complete: Wine tourism is, therefore, the offer of goods and services associated not only with wineries and wine cellars, but also with all the businesses involved with wine and tourist activities, including the means of accommodation, transport, tours, tourist guides and drivers, gift vouchers and agreements, among other services. From this definition, it is possible to see the full involvement of wine tourism with tourism enterprises in general.
Previous studies have shown the characteristics of the consumers of tourist services, their choices of destination and the motivations that bring these tourists to certain places. It is, however, also essential to identify the motivations of the winemakers for adopting the wine tourism strategy, since the implementation of this service demands some investment. Considering Table 3, it is possible to identify the motivations apparent in the articles analyzed.
The winemakers presented various motivations for the adoption of wine tourism; in the sample of articles studied, however, most articles presented some motivation related to marketing. [16] present wine tourism as a form of consumer experience, bringing direct sales and a closer relationship with the consumer. Wine tourism is increasingly associated with marketing, which is an important strategy for the development of the wine industry; even in countries where wine production is a tradition, wine tourism is a strategy to strengthen the relationship with the consumer [16]. [17] present wine tourism as an opportunity linked to distribution, arguing that it is mainly an opportunity for small producers: wine tourism eliminates the need for intermediaries, increasing margins, and facilitates marketing, since small businesses typically face greater difficulty in finding their way into the market. Furthermore, they draw attention to the infrastructure and the need for higher investments, which may be an initial barrier to entry into the wine tourism business.
Wineries located in regions where there is an active organization, with routes and associations, also present motivations for the adoption of wine tourism: [...] the Wine Routes are established to build and/or consolidate a reputation for quality in a wine region [...] [18]. Thus, reputation can be one of the motivations for adopting a wine tourism strategy, since the winemakers who choose to participate in this process must agree with it. For [15], the attraction of tourism related to winemaking can be linked to the improvement of the image of the wines offered by a company. It is therefore an opportunity to show the tourist the quality of a company's products and to tie this quality image to the region to which it belongs: Thus, the image of the products offered during the visit is perceived through a meaning intrinsic to the visited region [15]. Another motivation apparent in the sample of articles was competitive advantage. [19] comment on the addition of wine tourism to the already consolidated gastronomic tourism and stress the importance of considering the growing agricultural tourism. [20] affirm that some key factors are capable of promoting and broadening the market, yielding competitive advantages for wine tourism: landscape, cultural heritage, rural lifestyle, richness in typical, high-quality agricultural products, and product and territorial brands.
[21] added to the debate the importance of improvements in services, in order to guarantee better business performance and increase competitiveness.
Moreover, the growing demand for cultural tourism was one of the most important drivers of the adoption of wine tourism according to [22]. Treating wine and gastronomic tourism as primary rather than secondary activities is one of the ideas pointed out by the authors.
The qualification of the offer was another motivation found in the sample studied. [23] affirm that some winemaking companies do not study the architecture, history and culture of their region and property, and therefore cannot present the most qualified information and experience to tourists.
[24] add to the discussion by arguing that the image of wine or food is not, by itself, enough to lead a tourist to choose a particular tour operator; linking this product image to the culture of the region could be a solution.
According to [25], wine-related tourism also brings out economic, social and environmental aspects, as it is an activity carried out through the culture of wine and the experience that this service provides. Wine tourism is a form of pleasure built around wine and its culture, reflecting the wine industry's use of the environment and its impact on the social and economic development of regions [25]. Therefore, for the author, wine tourism is directly related to sustainability and sustainable development.
Local development is a motivation presented by [26] for the adoption of wine tourism by some winemakers. Since it is a service that takes place in a particular location and involves the consumption of products from the region itself, the authors believe that wine tourism generates wealth. Wine tourism implies the enjoyment of local products and is always established as a broader experiential tourism. Landscapes, environments, architectural constructions (traditional or more avant-garde) and local gastronomy are resources that play a fundamental role in tourists' appreciation [26].
Economic development, according to [27], is an important motivation for exploring wine tourism. The generation of employment and income traced by this activity is important for the regions where tourism takes place. Also, wine tourism generates experience and makes it possible for tourists to get to know a region and appreciate its products.
Another motivation was the participation of clusters. The strategic advantages provided by being part of a cluster related to wine tourism are so clear that many wineries decide to take part in it. In [17], the authors sought to identify innovations in the wine tourism sector based on geographical clustering.
Finally, the last motivation found in the sample was low production cost. [28] presents wine tourism as a business opportunity in regions where the cost of wine production and of the available labor force is low and whose landscapes attract tourists. According to this author, financial support and partnerships for investment in infrastructure are essential.
The articles classified in the category other/unspecified are those that, although part of the sample, do not present aspects that could be identified as motivations for the winemakers to adopt the wine tourism strategy.
Conclusions
The objective of this study was to identify the motivations for the adoption of wine tourism by winemakers. A bibliographic study was carried out analyzing 20 articles regarding wine tourism to identify these motivations.
From this analysis, it was possible to identify the reasons that led the wineries to add tourism services related to wine. Among them are marketing, competitive advantage, the perception of growing demand from cultural tourism and the strengthening of tourism, qualification of supply, strengthening of the cultural identity of the product, sustainable development, local and economic development, cluster participation, and increasing the consumption of local wine.
The articles analyzed contain information on winemakers from different regions of the world, but they serve as an initial tool to understand the motivations for the pursuit of tourism as a strategy. The motivation that appeared most frequently in the sample studied was marketing. The wineries seek to increase their sales channels, improve their customer relationships and better position their brand in the mind of the client who goes through the experience within the winery. More and more wineries are adding services to the sale of wine, bringing tourists in to participate and get to know how the wine is produced and the region where production takes place. In so doing, the customer becomes more involved with a specific brand.
Since the service experience to the consumer is of great importance to the sale of the product, winemakers have begun to look for ways to meet their consumption requirements. The wine companies are inserting the consumers in the process and offering more benefits so that they choose their region as a tourist option. In addition, not only do winemakers benefit, but the region's culture is valued, generating income and employment.
Despite presenting the motivations that led winemakers to adopt wine tourism, the analyzed articles did not present concrete data to support these claims. Thus, a point to be highlighted for future studies is the measurement of the return on wine tourism investment. Many articles comment on the high infrastructure costs of offering wine tourism services, while not reporting the return in terms of sales, brand image, and economic development. | 2019-05-30T23:46:53.019Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "5b7e4347c55ce44b1cb79f42e6fc4210f87ad393",
"oa_license": "CCBY",
"oa_url": "https://www.bio-conferences.org/articles/bioconf/pdf/2019/01/bioconf-oiv2018_03005.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "fa1f667354b187a081e5b771c4063ef26d7876a3",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
244709175 | pes2o/s2orc | v3-fos-license | The gradient discretisation method for the chemical reactions of biochemical systems
We consider a biochemical model that consists of a system of partial differential equations based on reaction terms and subject to non-homogeneous Dirichlet boundary conditions. The model is discretised using the gradient discretisation method (GDM), which is a framework covering a large class of conforming and nonconforming schemes. Under classical regularity assumptions on the exact solutions, the GDM enables us to establish the existence of the model solutions in a weak sense, and strong convergence of the approximate solution and its approximate gradient. A numerical test employing a finite volume method is presented to demonstrate the behaviour of the solutions of the model.
Introduction
In this paper, we are interested in studying reaction-diffusion equations:
$$\partial_t \bar u(\boldsymbol{x},t) - \mu_1\,\mathrm{div}(\nabla \bar u(\boldsymbol{x},t)) = F(\bar u,\bar v), \quad (\boldsymbol{x},t) \in \Omega \times (0,T), \qquad (1.1a)$$
$$\partial_t \bar v(\boldsymbol{x},t) - \mu_2\,\mathrm{div}(\nabla \bar v(\boldsymbol{x},t)) = G(\bar u,\bar v), \quad (\boldsymbol{x},t) \in \Omega \times (0,T), \qquad (1.1b)$$
where $\mu_1$ and $\mu_2$ are the diffusion coefficients corresponding to the chemical concentrations $\bar u$ and $\bar v$, respectively. The functions F and G describe the governing kinetics of the chemical reactions, and several biochemical phenomena have been expressed in the literature through the choice of these reaction terms. For example, in 2014, Yung-Rong Lee and Sy-Sang Lia [16] proposed a reaction-diffusion model that simulated the relationship between concentrations of oxygen and lactic acid, in order to model oxidative phosphorylation and glycolysis reactions in tissues. They concluded that a reaction-diffusion model can generate and maintain the ideal micro-environment for stem cells.
Another example is the Gray-Scott model, a very well-known reaction-diffusion system. The model describes the chemical reaction between two substances, an activator $\bar v$ and a substrate $\bar u$, both of which diffuse over time, and so represents what is called cubic autocatalysis [15].
The Brusselator model is another example of such chemical reaction systems; it is used to describe the mechanism of chemical reaction-diffusion with nonlinear oscillations [17]. The model occurs in a large number of physical problems, such as the formation of ozone by atomic oxygen and enzymatic reactions, and it arises in laser and plasma physics from multiple coupling between modes [4,19].
The gradient discretisation method (GDM) is a generic framework to design numerical schemes together with their convergence analysis for different models which are based on partial differential equations. It covers a variety of numerical schemes such as finite volumes, finite elements, discontinuous Galerkin, etc. We refer the reader to [1-3, 8, 10-12, 14] and the monograph [9] for a complete presentation. The main purpose of this paper is the proof of convergence, in the GDM setting, of the approximate solutions to the weak solution of a system of reaction diffusion equations with non-homogeneous Dirichlet boundary conditions. The convergence is established without non-physical regularity assumptions on the solutions since it is based on discrete compactness techniques detailed in [7].
The paper is organised as follows. Section 2 introduces the continuous model and its weak formulation. Section 3 describes the gradient discretisation method for the model and states four required properties. Section 4 states the theorem corresponding to the convergence results. In Section 5, we include numerical test by employing a finite volume scheme, namely the Hybrid Mimetic Mixed (HMM) method, to study and analyse the behaviour of the solutions of the Brusselator model as an example of the biochemical systems. The resultant relative errors with respect to the mesh size are also studied.
Continuous model
We consider the biochemical system of partial differential equations (2.1)-(2.6). Our analysis focuses on the weak formulation of this reaction-diffusion model. Let us assume the following properties of the data of the model.
Assumptions 2.1. The assumptions on the data in Problem (2.1)-(2.6) are the following:
• g and h are traces of functions in $L^2(0,T;H^1(\Omega))$ whose time derivatives are in $L^2(0,T;H^{-1}(\Omega))$,
• the functions $F, G : \mathbb{R}^2 \to \mathbb{R}$ are Lipschitz continuous with Lipschitz constants $L_F$ and $L_G$, respectively.
Discrete Problem
The analysis of numerical schemes for the approximation of solutions to our model is performed using the GDM. The first step of this analysis is the construction of a set of discrete spaces and operators, which is called a gradient discretisation:
• the set of discrete unknowns $X_D = X_{D,0} \oplus X_{D,\partial\Omega}$ is the direct sum of two finite-dimensional spaces over $\mathbb{R}$, corresponding respectively to the interior unknowns and to the boundary unknowns,
• the linear mapping $I_{D,\partial} : H^{1/2}(\partial\Omega) \to X_{D,\partial\Omega}$ is an interpolation operator for the trace,
• the function reconstruction $\Pi_D : X_D \to L^2(\Omega)$ is linear,
• the gradient reconstruction $\nabla_D : X_D \to L^2(\Omega)^d$ is linear and must be defined so that $\|\cdot\|_D := \|\nabla_D \cdot\|_{L^2(\Omega)^d}$ defines a norm on $X_{D,0}$,
• $J_D : L^\infty(\Omega) \to X_D$ is a linear and continuous interpolation operator for the initial conditions.
Let us introduce some notation to define the space-time reconstructions $\Pi_D\varphi$.
Setting the gradient discretisation defined previously in the place of the continuous space and operators in the weak formulation of the model leads to a numerical scheme, called a gradient scheme.
Definition 3.2 (Gradient scheme). The gradient scheme for the continuous problem (2.7) is to find families of pair (u (n) , v (n) ) n=0,...,N ∈ ( , and for all n = 0, ..., N − 1, u (n+1) and v (n+1) satisfy the following equalities: In order to establish the stability and convergence of the above gradient scheme, sequences of gradient discretisations (D m ) m∈N described in Definition 3 are required to satisfy four properties; coercivity, consistency, limit-conformity and compactness.
Definition 3.3 (Coercivity). Let D be a gradient discretisation and let C D be defined by Definition 3.4 (Consistency). If D is a gradient discretisation, define the function Definition 3.5 (Limit-conformity). If D is a gradient discretisation, and the space Definition 3.6 (Compactness). A sequence of gradient discretisation (D m ) m∈N in the sense of Definition 3.1 is compact if for any sequence (ϕ m ) m∈N ∈ X Dm,0 , such
Convergence results
Our convergence results are stated in the following theorem.
Theorem 4.1 (Convergence of the gradient scheme). Assume (2.1) and let $(D_m)_{m\in\mathbb{N}}$ be a sequence of gradient discretisations that is coercive, consistent, limit-conforming and compact. For each m, let $(u_m, v_m)$ be a solution to the gradient scheme (3.1) with $D = D_m$. Then there exists a weak solution $(\bar u,\bar v)$ of (2.7) and a subsequence of gradient discretisations, still denoted by $(D_m)_{m\in\mathbb{N}}$, such that, as $m \to \infty$, (1) $\Pi_{D_m} u_m$ converges strongly to $\bar u$ in $L^\infty(0,T;L^2(\Omega))$. Proof. The proof relies on compactness arguments as in [9] and is divided into four stages.
Apply the inequality $(a-b)a \ge \tfrac{1}{2}(|a|^2 - |b|^2)$, valid for all $a, b \in \mathbb{R}$, to the first terms in the above equalities, sum over $n = 0,\dots,m-1$ for some $m = 0,\dots,N$, and apply the Cauchy-Schwarz inequality. From the Lipschitz continuity assumptions on F and G, one has, letting $L := \max(L_F, L_G)$ and $C_0 := \max(F(0), G(0))$, a bound on the reaction terms. Then, using Young's inequality on the right-hand side of these inequalities, we conclude an estimate in which $C_1$ depends on $C_D$, $\|\nabla_D g_D\|_{L^2(\Omega)^d}$ and $\|\nabla_D h_D\|_{L^2(\Omega)^d}$, which are bounded. The desired estimates follow by combining the above inequalities and taking the supremum over $m = 0,\dots,N$.
Step 4: Owing to these estimates and the strong convergence of $\Pi_{D_m} g_{D_m}$, $\Pi_{D_m} h_{D_m}$, $\nabla_{D_m} g_{D_m}$ and $\nabla_{D_m} h_{D_m}$, the remainder of the proof is similar to that of [9, Theorem 3.2].
Numerical Results
To measure the efficiency of the gradient scheme (3.1) for the continuous problem (2.1)-(2.6), we consider a particular choice of gradient discretisation known as the Hybrid Mimetic Mixed (HMM) method, which is a kind of finite volume scheme and can be written in three different formats: the hybrid finite volume method [13], the (mixed-hybrid) mimetic finite difference method [5], and the mixed finite volume method [18]. For the sake of completeness we briefly recall the definition of this gradient discretisation. Let $\mathcal{T} = (\mathcal{M}, \mathcal{F}, \mathcal{P}, \mathcal{V})$ be the polytopal mesh of the spatial domain $\Omega$ used in the previous section and described in [9, Definition 7.2]. The elements of the gradient discretisation are:
• the discrete spaces, with $v_K = 0$ for all $K \in \mathcal{M}$ and $v_\sigma = 0$ for all $\sigma \in \mathcal{F}_{\mathrm{int}}$,
• the nonconforming piecewise affine reconstruction $\Pi_D$, defined for all $\varphi \in X_D$, all $K \in \mathcal{M}$ and a.e. $\boldsymbol{x} \in K$,
• the reconstructed gradient, which is piecewise constant on the cells (broken gradient) and is defined, for all $\varphi \in X_D$, $K \in \mathcal{M}$ and $\sigma \in \mathcal{F}_K$, from a cell-wise constant gradient $\nabla_K\varphi$ and a stabilisation term $R_K(\varphi)$, in which $x_\sigma$ is the centre of mass of $\sigma$, $x_K$ is the gravity centre of cell $K$, $d_{K,\sigma}$ is the orthogonal distance between $x_K$ and $\sigma \in \mathcal{F}_K$, $n_{K,\sigma}$ is the unit vector normal to $\sigma$ outward to $K$, and $D_{K,\sigma}$ is the convex hull of $\sigma \cup \{x_K\}$.
The HMM scheme is the gradient scheme (3.1) written with the gradient discretisation constructed above. As a test, we consider the Brusselator reaction-diffusion model (2.1)-(2.6) with non-homogeneous Dirichlet boundary conditions over the domain $\Omega = [0,1]^2$. The reaction functions in the Brusselator system are defined in terms of a positive constant a and a parameter b that can be varied to produce a range of different patterns. With $\boldsymbol{x} = (x,y) \in \Omega$, the exact solution in this case is given by $\bar v(\boldsymbol{x},t) = \exp(x+y+0.5t)$ (5.2b), with the parameters chosen as a = 0, b = 1, $\mu_1 = \mu_2 = 0.25$. The initial and Dirichlet boundary conditions are extracted from the analytical solution (5.2). The simulation is performed on a sequence of triangular meshes and is run up to T = 1. The chosen meshes are of size h = 0.125, 0.0625, 0.03125 and 0.015625, respectively, with the time step fixed at 0.0001. Table 1 shows the relative errors on $\bar u$ and $\bar v$, namely $\|\bar u(\cdot,t^{(n)}) - \Pi_D u^{(n)}\|_{L^2(\Omega)} / \|\bar u(\cdot,t^{(n)})\|_{L^2(\Omega)}$ and $\|\bar v(\cdot,t^{(n)}) - \Pi_D v^{(n)}\|_{L^2(\Omega)} / \|\bar v(\cdot,t^{(n)})\|_{L^2(\Omega)}$, together with the corresponding rates of convergence with respect to the mesh size h. The resulting errors on the solutions $\bar u$ and $\bar v$ are proportional to the mesh size h, indicating that the HMM scheme behaves very well.
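As a rough cross-check of this test case, the sketch below integrates the Brusselator system with a plain explicit finite-difference scheme on a uniform grid. It is a simple stand-in, not the HMM gradient scheme used in the paper; the reaction terms use the standard Brusselator kinetics $F(u,v) = a - (b+1)u + u^2 v$ and $G(u,v) = bu - u^2 v$, which the extracted text does not reproduce, and the companion exact solution $\bar u = \exp(-x-y-0.5t)$ is our assumption (only $\bar v$ is legible above). Boundary and initial data are taken from these expressions.

```python
import numpy as np

a, b = 0.0, 1.0
mu1 = mu2 = 0.25

def u_exact(x, y, t):  # assumed companion to the stated v_exact; a common benchmark pair
    return np.exp(-x - y - 0.5 * t)

def v_exact(x, y, t):
    return np.exp(x + y + 0.5 * t)

h = 0.0625                            # one of the mesh sizes used in the paper
n = int(round(1.0 / h)) + 1           # grid points per direction on [0, 1]^2
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
dt, T = 1e-4, 1.0

u, v = u_exact(X, Y, 0.0), v_exact(X, Y, 0.0)   # initial data from the exact solution

def lap(w):                            # 5-point Laplacian on interior nodes
    return (w[:-2, 1:-1] + w[2:, 1:-1] + w[1:-1, :-2] + w[1:-1, 2:]
            - 4.0 * w[1:-1, 1:-1]) / h**2

t = 0.0
for _ in range(int(round(T / dt))):
    ui, vi = u[1:-1, 1:-1], v[1:-1, 1:-1]
    Fu = a - (b + 1.0) * ui + ui**2 * vi          # assumed Brusselator kinetics
    Gv = b * ui - ui**2 * vi
    u_new, v_new = u.copy(), v.copy()
    u_new[1:-1, 1:-1] = ui + dt * (mu1 * lap(u) + Fu)
    v_new[1:-1, 1:-1] = vi + dt * (mu2 * lap(v) + Gv)
    t += dt
    # non-homogeneous Dirichlet data taken from the exact solution at the new time
    for W, ex in ((u_new, u_exact), (v_new, v_exact)):
        W[0, :], W[-1, :] = ex(X[0, :], Y[0, :], t), ex(X[-1, :], Y[-1, :], t)
        W[:, 0], W[:, -1] = ex(X[:, 0], Y[:, 0], t), ex(X[:, -1], Y[:, -1], t)
    u, v = u_new, v_new

err_u = np.linalg.norm(u - u_exact(X, Y, T)) / np.linalg.norm(u_exact(X, Y, T))
err_v = np.linalg.norm(v - v_exact(X, Y, T)) / np.linalg.norm(v_exact(X, Y, T))
print(f"relative L2 errors at T=1: u {err_u:.3e}, v {err_v:.3e}")
```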
Moreover, the $L^2$ relative errors on the gradients of the solutions with respect to the mesh size h are shown in log-log scale in Figure 5.1a for $\nabla\bar u$ and in Figure 5.1b for $\nabla\bar v$. A line of slope one is added in both figures as a reference. We observe that the relative errors on $\nabla\bar u$ and $\nabla\bar v$ scale linearly with h, giving a rate of convergence of one, which is compatible with the behaviour expected of low-order methods such as the HMM method. | 2021-11-29T02:15:52.084Z | 2021-11-25T00:00:00.000 | {
"year": 2021,
"sha1": "04ae015502cc5a717bdb2e0f3271daa076aac726",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "04ae015502cc5a717bdb2e0f3271daa076aac726",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
253402562 | pes2o/s2orc | v3-fos-license | Population-Based Meta-Heuristic Algorithms for Integrated Batch Manufacturing and Delivery Scheduling Problem
: This paper addresses an integrated scheduling problem of batch manufacturing and delivery processes with a single batch machine and direct-shipping trucks. In the manufacturing process, some jobs in the same family are simultaneously processed as a production batch in a single machine. The batch production time depends only on the family type assigned to the production batch and it is dynamically adjusted by batch deterioration and rate-modifying activities. Each job after the batch manufacturing is reassigned to delivery batches. In the delivery process, each delivery batch is directly shipped to the corresponding customer. The delivery time of delivery batches is determined by the distance between the manufacturing site and customer location. The total volume of jobs in each production or delivery batch must not exceed the machine or truck capacity. The objective function is to minimize the total tardiness of jobs delivered to customers with different due dates. To solve the problem, a mixed-integer linear programming model to find the optimal solution for small problem instances is formulated and meta-heuristic algorithms to find effective solutions for large problem instances are presented. Sensitivity analyses are conducted to find the effect of problem parameters on the manufacturing and delivery time.
Introduction
Recently, many studies have been conducted on individual manufacturing and delivery problems, both of which are an important part of supply chain management (SCM).The methodologies for an integrated scheduling problem (ISP) generally provide better performance to improve the efficiency of the entire supply chain than individual manufacturing and delivery problems [1].The study on ISPs is difficult even if ISPs provide better performance because of the complexity of the supply chain and the conflict of stakeholders in the supply chain.Nevertheless, ISPs are required for many sectors of industry such as ceramics, food, port cargo handling, and freight logistics [2].In this study, we confine our study to ISPs regarding the manufacturing and delivery process.We apply the batch loading and scheduling problem (BLSP) in the manufacturing process [3] and the direct-shipping problem in the delivery process [1,4].
In the manufacturing process, jobs can be processed simultaneously on a batch processing machine, and a set of jobs that are processed simultaneously is called a production batch.The volumes of jobs are different.The total volume in a production batch must not exceed the machine capacity.Jobs with different families must not be assigned to the same production batch.The batch production time depends only on the family type assigned to the production batch.Furthermore, we consider deterioration and rate-modifying activities.In collaborative works between operators and machines, such as machining, assembling, and maintenance, the batch production time can increase due to operator fatigue or machine failure, where the increased portion of this batch production time is called deterioration.The recovery process from the deteriorated state to the original state is called rate-modifying activity (RMA) [5].In this study, the batch production time increases in proportion to the interval between the recent RMA and the start time of the batch because we assume that the deterioration effect occurs linearly.
In the delivery process, each job after batch manufacturing is reassigned to delivery batches.The delivery batches are directly shipped by fixed numbers of homogeneous trucks.In this study, we consider that the delivery batch is independent of the production batch.The total volume in a delivery batch must not exceed the truck capacity.Jobs from different customers must not be assigned to the same delivery batch.The truck can deliver only one delivery batch at a time.The truck leaving the factory returns immediately after shipping the delivery batch to the customer.The delivery time including return time depends only on the customer of that particular delivery batch.The objective function is minimizing the total tardiness of jobs delivered to customers with different due dates.
Figure 1 describes a Gantt chart example for the presented ISP. The numbers of jobs, families, and customers are 5, 2, and 2, respectively. Jobs 1 and 2 belong to Family 1 and Jobs 3, 4, and 5 belong to Family 2. Jobs 1, 3, and 5 are requested by Customer 1, and Jobs 2 and 4 are requested by Customer 2. The production times for the families and the delivery times for the customers are (50, 100) and (229, 161), respectively. The due dates and volumes of the jobs are (264, 235, 401, 477, 459) and (5, 10, 7, 14, 10), respectively. The machine capacity, truck capacity, deterioration rate, and RMA processing time are 20, 20, 0.3, and 20, respectively. The jobs are assigned to production batches while keeping the constraints on family compatibility and machine capacity. The production batches are sequenced with RMAs inserted between them. The production time of Batch 2 increases in proportion to the interval between the start times of Batches 1 and 2 due to deterioration. The original production time of Batch 2 is 100, the interval between the start times of Batches 1 and 2 is 50, and the deterioration rate is 0.3. Thus, the production time of Batch 2 is 115 (= 100 + 50 × 0.3). Assuming the RMA is performed before Batch 3 is processed, the deterioration is restored for Batch 3 and its production time is not increased. The manufacturing completion times of the jobs are (50, 50, 165, 285, 165). Jobs that have been processed are assigned to delivery batches while keeping the constraints on customer compatibility and truck capacity. Truck 1 finishes transporting Batch 2 at time 279, but the manufacturing completion time of Job 4 is 285. Therefore, the waiting time between Batches 2 and 3 in Truck 1 is 6 (= 285 - 279). The completion times of the jobs at the corresponding customers are (279, 211, 440, 446, 440). Comparing these with the due dates of each job, the total tardiness over all jobs is 54 (= 15 + 0 + 39 + 0 + 0).
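The arithmetic of this example can be checked in a few lines. The sketch below reproduces only the deteriorated production time of Batch 2 and the final tardiness computation, with every number taken from the text above.

```python
# Deterioration of production Batch 2: original time 100, started 50 time units after
# the start of the horizon (no earlier RMA), deterioration rate 0.3.
batch2_time = 100 + 50 * 0.3
print(batch2_time)                              # 115.0

# Delivery completion times and due dates of Jobs 1..5, as listed in the example.
completed = [279, 211, 440, 446, 440]
due       = [264, 235, 401, 477, 459]
tardiness = [max(c - d, 0) for c, d in zip(completed, due)]
print(tardiness, sum(tardiness))                # [15, 0, 39, 0, 0] 54
```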
Literature Review
In this section, we survey studies on ISPs, including batch processing.For ISPs including deterioration or RMA, we focus on their scheme.
For ISPs with a direct-shipping method, Liu [6] dealt with a two-stage delivery problem.The first stage of delivery is to deliver jobs from the warehouse to the batching machine by crane.The second stage of delivery is to deliver the processed jobs to the customer by only one vehicle.He proposed genetic algorithms to minimize the sum of makespan and the total setup cost.Jia et al. [2] studied a problem with parallel batch processing machines with different capacities.They proposed several heuristic algorithms for minimizing the total weighted delivery time of jobs.Selvarajah and Steiner [7] assumed that only items with the same customer and product belong to one batch.They presented a polynomial algorithm for minimizing the sum of total inventory holding cost and the batch delivery cost.Gao et al. [8] studied a problem with limited vehicle capacity.The jobs are batched without breaking the vehicle capacity constraints before being processed.They presented polynomial-time algorithms for two special cases with the same production time and delivery time of order, respectively.Furthermore, they provided a heuristic to solve a general problem.Cheng et al. [9], Cheng et al. [10], and Jia et al. [11] assumed that the vehicle capacity is an integer multiple of the machine capacity.Cheng et al. [9] and Cheng et al. [10] assumed that the batches are packaged in the same size of boxes or pallets and propose each ( ) time algorithm for identical and arbitrary job sizes.Jia et al. [11] dealt with a problem with parallel batch machines.They present two hybrid meta-heuristic algorithms based on ant colony optimization and a deterministic heuristic for minimizing total weighted delivery time.In addition, they proposed a lower bound for evaluating the presented algorithms.Li et al. [12] studied a problem with unbounded parallel-batch and job families.They defined the family as the customer who requested the job.They assumed that jobs with the same family have identical sizes in a vehicle, and jobs with different families are not delivered together.They showed that the problem is NP-hard and proposed a heuristic algorithm for minimizing completion time.Li et al. [13] studied a problem with both machine and vehicle capacity.Jobs have different sizes and the total volume of jobs in each batch does not exceed the machine capacity.Likewise, the total volume of jobs in the delivery batch does not exceed the vehicle capacity.They proposed a polynomial algorithm for identical job sizes and heuristics for different job sizes to maximize the total profit.Zhang et al. [14] dealt with a problem including the orderpicking process.The orders are batched without breaking the capacity constraint of picking devices.They proposed an on-line algorithm for minimizing the makespan and total delivery cost.Nogueira et al. [15] and Feng and Xu [16] studied ISPs with parallel batching machines.Nogueira [15] assumed that job size and production time are generic.They presented a mathematical formulation model and several heuristic algorithms to maximize the total profits.Feng and Xu [16] developed a 0-1 mixed-integer programming (MIP) model.Jia et al. [17] further considered parallel non-identical batch machines.He et al. [18] proposed an enhanced branch-and-price algorithm for integrated 3D printing with JIT delivery systems.Li et al. [19] developed a MIP formulation and proposed a column generation-based approach for an ISP with dual delivery modes.
For ISPs with vehicle routing problems (VRP), Karaoğlan and Kesen [20] dealt with the problem of distributing products with a limited shelf life to customers in a vehicle.They proposed a branch-and-cut (B&C) algorithm to minimize lead time.Low et al. [21] and Low et al. [22] studied the problem of delivering the product to the customer after processing it in the distribution center.They provided adaptive genetic algorithms (AGAs) to minimize total cost, including delivery cost, vehicle cost, and penalty cost.Li [23] considered the bi-objective problem minimizing both customer waiting time and vehicle delivery cost.
For ISPs with deterioration or RMA, Kong et al. [24] considered the integrated problem of CCHR and delivery scheduling in steel production.They assumed that the rolling time is linearly proportional to the starting time of slabs.Liu et al. [25] dealt with the integrated problem with parallel batching machines and deteriorating jobs.The production time of a job increases non-linearly concerning the starting time, and the production time of a batch is assumed to be the maximum value of jobs belonging to that batch.Yin et al. [26] studied batch delivery scheduling on a single machine with RMA.They assumed that processing a job after RMA would reduce the original production time by modifying rate times.
Table 1 shows the classification for studies on ISPs with batch processing.The studies are categorized according to compatibility in a production batch, vehicle number, deterioration, RMA, and shipping method.
To the best of our knowledge, an ISP simultaneously considering family compatibility, batch deterioration with multiple RMAs, and a direct-shipping method has received very limited attention, although several ISP scheduling problems with batch manufacturing and delivery processes have often been dealt with (see Table 1). A sequencing variable equal to 1 indicates that one delivery batch immediately precedes another delivery batch in a truck, and 0 otherwise. The MILP formulation using the above notation is as follows, with the continuous variables (including τ) non-negative and the assignment and sequencing variables binary (0 or 1). Constraint (2) denotes a restriction wherein each job must be assigned to one of the production batches. Constraint (3) confirms that the total volume of jobs in each production batch must not exceed the machine capacity. Constraints (4) and (5) guarantee that non-empty production batches are assigned to one bucket. A bucket is defined as a set of batches processed between RMAs [27]. Constraints (6)-(8) ensure that production batches assigned to the same bucket are processed once in a specific sequence.
A first-position variable equal to 1 means that production batch k is in the first position in its bucket.
Constraint (9) determines the precedence relation of production batches within the same bucket and calculates the interval between their starting time and the completion time of the most recent RMA. Constraint (10) calculates the completion time of buckets. Constraint (11) calculates the starting time of production batches. Constraint (12) guarantees that each job is assigned to one delivery batch. Constraint (13) confirms that the total volume of jobs in each delivery batch must not exceed the truck capacity. Constraint (14) ensures that jobs in the same delivery batch are shipped to the same customer. Constraint (15) guarantees that the shipping starting time of a delivery batch is larger than the completion time of all jobs in that delivery batch. The completion time of each job is defined as the completion time of the production batch to which that job is assigned. Constraint (16) denotes a restriction wherein each delivery batch is shipped to at most one customer. Constraint (17) enforces a customer-delivery batch relationship through job-customer and job-delivery batch relationships. Constraint (18) determines the precedence relation of delivery batches within a truck and calculates the shipping starting time of each delivery batch. Constraint (19) denotes a restriction wherein each delivery batch must be assigned to one truck. Constraints (20)-(22) guarantee that delivery batches assigned to the same truck are shipped once in a specific sequence.
A first-position variable equal to 1 means that delivery batch m is in the first position in its truck. Constraint (23) calculates the tardiness of jobs.
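Because the symbol names of the formulation are not legible in the extracted text, the toy model below only illustrates the flavour of constraints (2) and (3) together with the family-compatibility requirement (job-to-production-batch assignment under the machine capacity) in PuLP. The variable names are our own, the objective is a placeholder, and the data are those of the Figure 1 example; this is not the paper's full MILP.

```python
import pulp

# Toy data from the Gantt-chart example: job volumes and families.
volume   = {1: 5, 2: 10, 3: 7, 4: 14, 5: 10}
family   = {1: 1, 2: 1, 3: 2, 4: 2, 5: 2}
batches  = range(1, 6)                       # at most one batch per job
capacity = 20

prob = pulp.LpProblem("production_batching", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(j, k) for j in volume for k in batches], cat="Binary")
y = pulp.LpVariable.dicts("y", [(f, k) for f in set(family.values()) for k in batches], cat="Binary")

for j in volume:                                                     # cf. constraint (2)
    prob += pulp.lpSum(x[j, k] for k in batches) == 1
for k in batches:                                                    # cf. constraint (3)
    prob += pulp.lpSum(volume[j] * x[j, k] for j in volume) <= capacity
    prob += pulp.lpSum(y[f, k] for f in set(family.values())) <= 1   # one family per batch
for j in volume:                                                     # family compatibility
    for k in batches:
        prob += x[j, k] <= y[family[j], k]

# Placeholder objective: open as few family-batch pairs as possible.
prob += pulp.lpSum(y[f, k] for f in set(family.values()) for k in batches)
prob.solve(pulp.PULP_CBC_CMD(msg=False))
for k in batches:
    jobs_k = [j for j in volume if x[j, k].value() == 1]
    if jobs_k:
        print("batch", k, jobs_k)
```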
Meta-Heuristic Algorithms
An ISP is generally an NP-hard problem, and since the proposed problem is an ISP with batch processing, it is NP-hard. Therefore, other efficient algorithms that can solve large problem instances quickly are required instead of the proposed MILP model. In many scheduling papers, such problems are effectively and efficiently solved through meta-heuristic algorithms [28-30]. For this reason, three meta-heuristic algorithms, namely particle swarm optimization (PSO), the imperialist competitive algorithm (ICA), and the genetic algorithm (GA), are presented. The three meta-heuristic algorithms share the same decoding process.
Solution Representation and Decoding Method
The solution is divided into two parts: manufacturing and delivery. Thus, there are two one-dimensional arrays; one represents batching and scheduling for the manufacturing process, and the other represents truck assignment and scheduling for the delivery process. Figures 2 and 3 show an illustrative example of the decoding process for an encoded manufacturing and delivery solution using the meta-heuristic algorithms proposed in Sections 4.2-4.4. In all the presented meta-heuristic algorithms, the two one-dimensional encoded arrays are formed independently.
In Figure 2, Figure 2a is converted to Figure 2b by the random-key method [31].The main idea of random keys is that real numbers in the range [0, 1] represent the sequence of integers.In Figure 2a, the smallest number, 0.07, is in the 10th position.So, the first number in Figure 2b is 10.In Figure 2a, the smallest number after 0.07, 0.15, exists in the first position.So, the second number in Figure 2b is 1.In the same way, Figure 2a is converted to Figure 2b.Each element in Figure 2b represents a job or RMA.Suppose that the number of jobs is n.Then, the maximum number of RMAs is n − 1, assigned to the position between the jobs.Thus, 2n − 1 elements are required for Figure 2c.Since the number of jobs is 8 in Figure 2b, the number of elements becomes 15 (= 8 + 7).Odd numbers are converted to (original number + 1)/2, indicating the job index.All even numbers are converted to RMAs. Figure 2c contains information about the job index and RMA, which is converted into the manufacturing solution in several steps.Suppose that Jobs 1, 2, 3, and 4 belong to Family 1, and Jobs 5, 6, 7, and 8 belong to Family 2. The volume of jobs is (10,9,3,7,8,6,15,11).The machine capacity is 20.The orders of jobs in Families 1 and 2 are (1,3,2,4) and (7,6,5,8) from Figure 2c, respectively.For each family, jobs belonging to the family are assigned to batches in the corresponding order of Figure 2c and satisfy the machine capacity constraint.According to Figure 2d, Figure 2c is converted to Figure 2e.The position of batches in Figure 2e is the same as the position of each job index located at the front of Figure 2c among jobs belonging to the batch.Finally, the first and last RMAs are removed and consecutive RMAs are considered as one.
Figure 2. An example of the decoding procedure for manufacturing.
In Figure 3, Figure 3a is converted to a job array using the random-key method.Each element in Figure 3b represents a job.Since the number of jobs is eight, the number of elements in Figure 3b is eight.Jobs are assigned to delivery batches in the order of Figure 3b while simultaneously satisfying the truck capacity constraint and the customer compatibility constraint.If jobs requested from different customers between two jobs requested from the same customer exist in the job array, they must not be assigned to the same delivery batch.Suppose that Jobs 1, 3, 5, and 7 are requested from Customer 1, and Jobs 2, 4, 6, and 8 are requested from Customer 2. The volume of each job is (8,7,5,13,6,6,9,10).The truck capacity is 20.The manufacturing completion time of each job is (200, 350, 300, 65, 220, 70, 55, 160).The delivery time for the customer is (100, 150).The orders of jobs for Customers 1 and 2 are (7,1,5,3) and (6,4,8,2) from Figure 3b, respectively.Jobs 7 and 1 must not be assigned to the same batch.Although the total volume of Jobs 7 and 1 does not exceed the truck capacity, there are Jobs 6 and 4 between them.According to Figure 3c, Figure 3b is converted to Figure 3d.The manufacturing completion time of delivery batches is equal to the maximum value of manufacturing completion times in each job assigned in the delivery batch.Thus, the manufacturing completion time of delivery batches is (55, 70, 200, 160, 300, 350).Batches are assigned to a truck with the smallest value of differences between the manufacturing completion time of delivery batches and the available time of trucks according to their order in Figure 3d.If multiple trucks are assigned to a batch, the delivery batch is arbitrarily assigned to one of these trucks.An encoded solution of three meta-heuristic algorithms is introduced using the decoding process presented in Figures 2 and 3. PSO, ICA, and GA are presented in Sections 4.2-4.4,respectively.
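A compact way to see the random-key idea and the capacity-respecting batching it drives is the sketch below. It covers only the job-to-production-batch step, read as a sequential fill within each family, and uses the family and volume data of Figure 2; the RMA placement and the delivery-side decoding are not reproduced here.

```python
import numpy as np

keys  = np.random.rand(8)                      # one random key in [0, 1] per job
order = list(np.argsort(keys) + 1)             # smallest key first -> job permutation (jobs 1..8)

family   = {1: 1, 2: 1, 3: 1, 4: 1, 5: 2, 6: 2, 7: 2, 8: 2}
volume   = {1: 10, 2: 9, 3: 3, 4: 7, 5: 8, 6: 6, 7: 15, 8: 11}
capacity = 20

batches = []                                   # list of (family, [jobs]) filled per family
for f in sorted(set(family.values())):
    current, used = [], 0
    for j in [j for j in order if family[j] == f]:
        if used + volume[j] > capacity:        # machine capacity reached: open a new batch
            batches.append((f, current))
            current, used = [], 0
        current.append(j)
        used += volume[j]
    if current:
        batches.append((f, current))
print(order, batches)
```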
Particle Swarm Optimization (PSO)
The position and velocity consist of two one-dimensional arrays representing the manufacturing and delivery process, respectively. The position and velocity are independently initialized from the uniform distribution of real numbers between 0 and 1, U(0,1). After that, the best solution of each particle and the global best solution are updated. The velocity and position of each particle are then updated using Equations (27) and (28), based on the particle best and the global best, respectively.
The PSO procedure is shown in Algorithm 1, which takes the number of iterations (Iter), the population size, and the two acceleration weights as input, randomly generates the initial positions and velocities from U(0,1), and then repeats the update and evaluation loop over all particles until the iteration limit is reached.
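Equations (27) and (28) are not reproduced in this excerpt, so the sketch below uses the textbook PSO update (an inertia term plus two acceleration terms towards the particle best and the global best). The acceleration weights 0.1 and 0.8 echo the calibrated values reported later, while the inertia weight is our assumption.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=0.1, c2=0.8, rng=np.random):
    """One velocity/position update for a particle encoded as random keys in [0, 1]^d."""
    r1, r2 = rng.rand(*x.shape), rng.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 1.0)      # keep the position decodable by the random-key method
    return x, v
```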
Imperialist Competitive Algorithm (ICA)
The countries consist of two one-dimensional arrays representing the manufacturing and delivery process, respectively.The countries are independently initialized following the distribution (0,1).Afterward, powerful countries become imperialist.Any country that is not imperialist becomes a colony.Colonies are assigned to imperialist countries, and many colonies are assigned to powerful imperialist countries.To measure the power of imperialist countries, the normalized objective function value of nth imperialist country is calculated as follows: where and are objective function values for nth imperialist country and maximum objective function values for all the imperialist countries, respectively.The power of nth imperialist country is defined as follows: The initial number of colonies of the nth imperialist country ( ) is calculated as follows: Colonies ( ) move toward the direction of their imperialist ( ).The degree of approach is determined by and a random number from the distribution (0,1).
Each element of a country probabilistically resets its value to a draw from U(0,1). This probability is called the revolution rate and is set during parameter calibration. After performing the moving and revolution processes, the imperialist country of each empire is updated: among all the countries, including the existing imperialist countries, the country with the smallest objective function value becomes the new imperialist country. Afterward, the weakest colony in the weakest empire is taken away by other empires. This is called imperialistic competition. The total power of an empire is the measure used in imperialistic competition, and it is determined by the imperialist and colony power of each empire. The total objective function value of the nth empire is calculated with a weight ξ for the colony power applied to the objective function values of the colonies belonging to empire n. Based on this, the normalized total objective function value of the nth empire is calculated relative to the maximum total objective function value over all empires, and the possession probability of each empire is computed from these normalized values. The ICA procedure is shown in Algorithm 2.
Genetic Algorithm (GA)
The chromosomes consist of two one-dimensional arrays representing the manufacturing and delivery process, respectively. The chromosomes are independently initialized by U(0,1). The one-cut-point crossover and uniform mutation are used as genetic operators; crossover and mutation also proceed independently for the two chromosomes. The uniform mutation operator replaces the numeric value of a gene with a random number that follows U(0,1). The roulette wheel selection is used as the selection method. The fitness function for chromosome i used in the roulette wheel method is defined in terms of the objective function value of the ith chromosome and the maximum objective function value over all chromosomes. The objective function value of the ith chromosome is calculated from the aggregate solution of the ith manufacturing and delivery chromosomes. The GA procedure is shown in Algorithm 3.
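The fitness transform used by the roulette wheel is only partially legible above. The sketch below uses the common choice for a minimisation objective, namely the maximum objective value minus the chromosome's own value, which matches the wording of the truncated sentence but should be read as our assumption.

```python
import numpy as np

def roulette_select(objective_values, n_parents, rng=np.random):
    """Roulette-wheel selection for minimisation: lower objective -> higher selection probability."""
    obj = np.asarray(objective_values, dtype=float)
    fitness = obj.max() - obj                      # assumed reading of the truncated formula
    if fitness.sum() == 0.0:                       # all chromosomes tie
        probs = np.full(len(obj), 1.0 / len(obj))
    else:
        probs = fitness / fitness.sum()
    return rng.choice(len(obj), size=n_parents, p=probs)
```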
Computational Results
Problem instances for evaluating the performance of the proposed meta-heuristic algorithms are divided into large and small problem instances.In the experiment of small problem instances, the performances of PSO, ICA, and GA are validated by comparing them with the performance of the MILP model.The MILP is solved by CPLEX solver 12.7 using IBM ILOG CPLEX Optimization Studio.In the experiment of large problem instances, the performance of GA is relatively measured by comparing with performances of PSO and ICA.All meta-heuristic algorithms are implemented in C# and all computational experiments are performed by PCs with 3.60 GHz Intel Core i7-7700 CPUs.
Calibration of the Algorithm Parameters
Calibrating meta-heuristic algorithm parameters can significantly affect the performance of the algorithms. The Taguchi method was used to find the best parameter combinations for PSO, ICA, and GA. The algorithm parameters are set at five levels. Tables 2 and 3 show the values for each level and an orthogonal array of the GA parameters, respectively. The significance level is set to 15%, and parameters with a p-value less than 15% are judged to be significant; the significant parameters are set to levels (5) and (5), respectively. Parameters for which the difference in the S/N ratio between levels is not significant are selected at the level with the smallest RDI. Figure 4b shows the mean RDI plot for each GA parameter; the remaining parameters are set to levels (5) and (3), respectively. Parameter calibration for PSO and ICA is executed in the same way as for GA. For PSO, the best values of the four parameters are 1000, 100, 0.1, and 0.8, respectively. For the ICA, the best values of the six parameters are 1000, 60, 3, 0.05, 2.5, and 0.25, respectively.
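The S/N ratio formula itself is not printed in the extracted text; the usual smaller-the-better form, which fits an RDI-type response, is sketched below and should be read as an assumption.

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi smaller-the-better signal-to-noise ratio: -10 * log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

print(sn_smaller_the_better([0.12, 0.08, 0.10]))   # e.g. RDI values of three replications
```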
Setting of the Problem Parameters
Two problem instance groups are generated based on the number of | |, | |, | |, and the expected ratio of tardy jobs (δ) that determine the complexity of problems.The planning horizons (PH) of small and large problem instances are one day (=8 h = 480 min) and five days (=8 × 5 h = 2400 min), respectively.The expected lead time of the last job ( [ ]) for each instance should be approximately equal to PH.The expected lead time of the last job is calculated as: where and are the deterioration coefficient and RMA frequency coefficient, respectively.
The first term is the expected total batch production time including the deterioration.The second term is the expected value of total RMA processing time.The third term is the expected delivery time of the last job.Through preliminary experiments, and are set to 1.25 and 0.4.The generating conditions of each instance are summarized in Table 5.
Experimental Results in the Small Problem Instances
For small problem instances, to validate the performances of PSO, ICA, and GA, these are compared to the optimal solution. The performance of the meta-heuristic algorithms is represented by the objective function value and the CPU time. All meta-heuristic algorithms are tested with 30 replications for each instance. Table 6 shows the performance of the MILP model, PSO, ICA, and GA for instances with δ = 0.6. If the MILP model is not able to find the optimal solution within 2 h, the CPU time and the objective function value are expressed as 7200.00+ and NA, respectively.
The sample means of the objective function values for PSO, ICA, and GA are 710.64, 711.87, and 710.22, respectively. The sample means of the CPU times for PSO, ICA, and GA are 0.29, 0.13, and 0.30, respectively. PSO, ICA, and GA all found near-optimal solutions.
Experimental Results in the Large Problem Instances
For large problem instances, the performance of PSO, ICA, and GA is measured by comparison with each other.The performance of algorithms is represented by the RDI for large problem instances and the CPU time.
All the algorithms are tested with 30 replications for each instance. Table 7 shows the performance of GA, ICA, and PSO for these instances. At δ = 0.6, the sample means of the RDIs for PSO, ICA, and GA over the instances are 0.91, 0.42, and 0.08, respectively, and the sample means of the CPU times for PSO, ICA, and GA are 95.10, 56.42, and 95.54, respectively. At δ = 0.3, the sample means of the RDIs for PSO, ICA, and GA are 0.93, 0.58, and 0.08, respectively, and the sample means of the CPU times for PSO, ICA, and GA are 95.62, 56.50, and 95.70, respectively. The ranking of the meta-heuristic algorithms from the best to the worst performance is GA, ICA, and PSO. PSO and GA, with the same population size, show similar CPU times, and ICA, with a relatively smaller population size than PSO and GA, shows less CPU time. We executed an additional experiment for ICA with extended CPU time; however, no significant improvement in the RDI of ICA was observed. Therefore, GA shows the best RDI among PSO, ICA, and GA under similar CPU time in large problem instances.
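The RDI formula is not restated in this excerpt; the definition commonly used in scheduling studies (0 for the best solution found across the compared algorithms, 1 for the worst) is sketched below as our reading.

```python
def rdi(value, best, worst):
    """Relative deviation index: 0 = best known value, 1 = worst known value (assumed definition)."""
    return 0.0 if worst == best else (value - best) / (worst - best)

print(rdi(720.0, 710.0, 760.0))   # 0.2 for a solution 10 units above the best of 710 (worst 760)
```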
For analysis reasons regarding the performance differences for PSO, ICA, and GA, a convergence test is performed.An instance with | | = 300, | | = 20, | | = 20 is used for the test and repeated 10 times.Figure 5 shows the convergence graph for PSO, ICA, and GA.The objective function values of the initial solution are similar for all three algorithms, but PSO and ICA converge faster to a value with a higher objective function than GA.Therefore, GA shows better performance in terms of objective function than PSO and ICA.To verify the significant difference in RDI between algorithms, the Tukey HSD test was performed.Figure 6 shows the mean plots and Tukey HSD intervals (α = 0.05) for all instances in Table 7. Figure 6 shows that the confidence intervals between all the algorithms do not overlap.In the other words, the difference in RDI between PSO, ICA, and GA is statistically significant.
Sensitivity Analysis
To reduce total tardiness for the ISP, scheduling problems of the manufacturing and delivery process are important.It is difficult to find whether manufacturing or delivery scheduling impacts the proposed total tardiness.To find this impact, the total manufacturing completion time ( ) and the total delivery time ( ) are presented as a performance measure.and are defined as follows: where and are the manufacturing completion time and lead time of job j, respectively.
Several problem parameters affect the scheduling of ISPs.Each problem parameter related to the manufacturing and delivery process affects and , respectively.For example, obviously, decreases when increases, or decreases and decreases when | | or increases or ℎ decreases.However, the effects of parameters related to the manufacturing process on and the effects of parameters related to the delivery process on are not obvious.To find these effects, an additional experiment is conducted by only using GA with the best performance shown., and the parameters related to manufacturing affect .This is because if the time on one side decreases, the flexibility of decision making on the other side increases.One of the reasons is that the difference in manufacturing completion time between jobs decreases and various decisions in the delivery process become possible as decreases.
Conclusions
In this research, the ISP with a batching machine, time-dependent batch deterioration, and RMAs is considered. A MILP model was formulated to solve small problem instances. Meta-heuristic algorithms were proposed to solve the large problem instances. The solution structure of the meta-heuristics consists of two one-dimensional arrays for manufacturing and delivery. For small problem instances, we found the optimal solution using the developed MILP model. Additionally, we verified the performance of the meta-heuristic algorithms by showing that they reach near-optimal solutions in comparison with the MILP model on small problem instances. Three meta-heuristic algorithms, GA, ICA, and PSO, were proposed and relatively compared by using the relative deviation index (RDI) on large problem instances. The ranking of the meta-heuristic algorithms from the best to the worst performance was GA, ICA, and PSO. Sensitivity analysis was conducted for GA, which showed the best performance. We found in this analysis that as the time for either manufacturing or delivery was reduced, the time for the other also decreased.
However, this study has several limitations.For example, the real enterprise data considered for solving real industry problems are not used, and the objective function is simply set to total tardiness.The objective function can be modeled as a cost including setup, inventory, and tardiness costs.As for further work to extend this study, the problem could apply VRP to our delivery method.In addition, the problem can be extended to optimization problems resolving the conflict of stakeholders between manufacturing and third-party logistics (3PL).Finally, matheuristic and simheuristic approaches can be considered.
Figure 1. A Gantt chart example of schedules for the presented ISP.
Figure 3. An example of the decoding procedure for delivery.
Figure 4. The mean S/N ratio and RDI plots for each GA parameter. (a) S/N ratio, (b) RDI.
Figure 7 shows the mean plots and Tukey HSD intervals (α = 0.05) for the groups defined by the numbers of jobs, families, and customers and by δ. For these groups, the performance is ranked in the order of GA, ICA, and PSO, and the difference in RDI between PSO, ICA, and GA is statistically significant. In particular, GA provides the best RDI and robustness among all algorithms.
Graphs (a), (b), and (c) in Figure 8 show the change in according to the change in the parameters | |, , and ℎ , respectively.In graphs (a), (b), and (c), decreases when | | or increases or ℎ decreases.Meanwhile, the graphs (d) and (e) in Figure 8 show the change in according to the change in the parameters and .According to the graphs (d) and (e), decreases when increases or decreases.In summary, the parameters related to the delivery affect
Figure 8. Total manufacturing and delivery time under different parameter settings: panels (a)-(c) show the total manufacturing time and panels (d)-(e) show the total delivery time, each under a different problem parameter.
Table 2. The value of each level of GA parameters.
Table 4. ANOVA result for S/N ratio of GA parameters.
Table 7. Computational results for large problem instances.
The convergence graph for PSO, ICA, and GA. | 2022-11-09T16:32:11.837Z | 2022-11-04T00:00:00.000 | {
"year": 2022,
"sha1": "988fa3bb1088a11467307259c3f28b9138dbdedd",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-7390/10/21/4127/pdf?version=1667956080",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "fbf2727bc54809c262ea9c94db4f3089f3a572f7",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
227157830 | pes2o/s2orc | v3-fos-license | Customized optical mapping by CRISPR–Cas9 mediated DNA labeling with multiple sgRNAs
Abstract Whole-genome mapping technologies have been developed as a complementary tool to provide scaffolds for genome assembly and structural variation analysis (1,2). We recently introduced a novel DNA labeling strategy based on a CRISPR–Cas9 genome editing system, which can target any 20bp sequences. The labeling strategy is specifically useful in targeting repetitive sequences, and sequences not accessible to other labeling methods. In this report, we present customized mapping strategies that extend the applications of CRISPR–Cas9 DNA labeling. We first design a CRISPR–Cas9 labeling strategy to interrogate and differentiate the single allele differences in NGG protospacer adjacent motifs (PAM sequence). Combined with sequence motif labeling, we can pinpoint the single-base differences in highly conserved sequences. In the second strategy, we design mapping patterns across a genome by selecting sets of specific single-guide RNAs (sgRNAs) for labeling multiple loci of a genomic region or a whole genome. By developing and optimizing a single tube synthesis of multiple sgRNAs, we demonstrate the utility of CRISPR–Cas9 mapping with 162 sgRNAs targeting the 2Mb Haemophilus influenzae chromosome. These CRISPR–Cas9 mapping approaches could be particularly useful for applications in defining long-distance haplotypes and pinpointing the breakpoints in large structural variants in complex genomes and microbial mixtures.
INTRODUCTION
Restriction mapping has been applied in human genomics for physical mapping of genome fragments based on restriction enzyme cutting and was used extensively during the Human Genome Project to guide genome assembly (3)(4)(5). However, traditional restriction mapping is highly labor-intensive and requires large amounts of sample. More importantly, a traditional restriction map provides a 'fingerprint' of the genomic DNA, rather than an ordered sequence of restriction sites. A solution to the sequence assembly challenge that overcomes the drawbacks of traditional restriction mapping is optical mapping (6). The optical mapping method has been used to construct ordered restriction maps for whole genomes (7)(8)(9) and continues to be very useful in providing scaffolds for shotgun sequence assembly and validating sequence assemblies (10)(11)(12). More recently, a similar optical mapping technique has been introduced by combining sequence-specific labeling, along with consistent linearization of extremely long DNA molecules in nanochannel arrays (1). This provides an accurate, high-throughput, and robust whole-genome mapping technique, and optical mapping has been widely applied in assisting genome assembly, the detection and characterization of complex structural variants, and microbial comparative genomics (13)(14)(15).
The primary genome mapping strategy is based on measuring distances between short (6-8 bp) sequence motifs across the genome (16,17). However, the distribution of motifs is fixed for any given genome, which results in uneven spacing for different genomic regions. Often, there are no appropriate motifs within repetitive genomic regions, which results in large unmappable genomic intervals (14,18). Another challenge resides in detecting and typing specific structural variants for clinical diagnostic applications. Targeted variant-specific labeling is required to obtain accurate breakpoints, but this cannot be achieved by motif mapping alone (19). In microbial comparative analyses, sequences with high similarities are often involved (20). Sequence motif mapping generally results in similar patterns in these regions. In principle, another important application of optical mapping could be long-range inference of haplotype structure, but use of motif-specific labels restricts analysis to polymorphisms that happen to impact genomic motifs.
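To make the spacing-based principle concrete, the hypothetical Python sketch below builds a toy motif map: it locates every occurrence of a short recognition motif in a sequence and reports the inter-label distances, which is essentially the information a motif-based optical map encodes (strand, labeling efficiency and imaging error are ignored). The toy genome string is invented; GCTCTTC is the recognition sequence commonly cited for Nt.BspQI.

```python
import re

def motif_map(sequence, motif):
    """Return the positions of a short recognition motif and the spacings
    between consecutive sites, i.e. the label order and inter-label
    distances that a motif-based optical map encodes (simplified: one
    strand, perfect labeling)."""
    positions = [m.start() for m in re.finditer(motif, sequence)]
    spacings = [b - a for a, b in zip(positions, positions[1:])]
    return positions, spacings

# Toy example with the 7 bp Nt.BspQI-type motif GCTCTTC.
toy_genome = "AAGCTCTTCAATTTGCTCTTCGGGGCTCTTCTT"
print(motif_map(toy_genome, "GCTCTTC"))   # positions [2, 14, 24], spacings [12, 10]
```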
Recently, we introduced a novel labeling strategy based on a CRISPR-Cas9 genome editing system using the nicking Cas9-D10A protein to address the above issues. This labeling strategy can target almost any 20 bp sequence (21). The method is especially powerful in targeting repetitive sequences or other sequences not accessible to methods that rely on the distribution of restriction site motifs in the DNA. Since its introduction, the method has found many applications, including single-molecule telomere length measurements via fluorescent tagging of telomere repeats with guide RNA (gRNA) (22,23), global characterization of repeat-rich human subtelomere regions (24,25), mapping and identifying large-scale structural variants such as at acrocentric chromosomes (14), and identifying antibiotic resistance encoding plasmids present in bacterial isolates (26).
In this report, we present two customized mapping strategies based on CRISPR-Cas9 mediated DNA labeling. We demonstrate the overall effectiveness of the new mapping strategies using Haemophilus influenzae strains as a model system: the standard lab strain Rd KW20 (RR722, NC 000907) and a marked derivative of clinical isolate 86-028NP (RR3131, NC 007416.2) (27,28). In the first strategy, we enable CRISPR-Cas9 labeling to interrogate and differentiate a single allele difference in the NGG protospacer adjacent motif (PAM sequence). The same 20-base locus in the two strains was either labeled or not, depending on whether an alternative allele other than G is present in the PAM. Combined with sequence motif labeling, we can pinpoint single-base differences in highly conserved sequences.
In the second strategy, we use a custom panel of sequence-specific sgRNAs to label multiple loci of a genomic region or a whole genome. Since nearly any 20mer sequence can be targeted (constrained only by the need for a 3′-NGG protospacer adjacent motif, or PAM) (21), one can design mapping patterns across a genome by selecting sets of specific single-guide RNAs (sgRNAs) for features of interest. To make this strategy viable and expand its utility requires: (a) reduced cost of sgRNA synthesis (a sgRNA still costs ∼$100 through commercial sources); (b) optimization of the total number of sgRNAs that can be combined into a single labeling reaction. By addressing the above issues, we first optimized and developed a sgRNA synthesis method based on T7 RNA transcription to generate a mixture of multiple sgRNAs in a single tube reaction (29), reducing synthesis costs by an order of magnitude or greater. We then designed 162 sgRNAs targeting different chromosomal sites in the lab strain Rd and in vitro synthesized all of them in a single tube reaction. This mixture was then used to label two bacterial strains, followed by single-molecule optical mapping. Optical reads were successfully aligned to the custom-designed sgRNA map.
These CRISPR-Cas9 mapping approaches could be particularly powerful in defining long-distance haplotypes and pinpointing breakpoints of large structural variants in complex genomes (30,31), and they may enable microbial comparative analyses (17,32).
High-molecular-weight DNA extraction
Two Haemophilus influenzae strains with complete genome sequences were used: the standard lab strain Rd KW20 (RR722, NC 000907) and a marked derivative of clinical isolate 86-028NP (RR3131, NC 007416.2, carrying novobiocin and nalidixic acid resistance alleles, Nov R and Nal R) (27,33,34). Bacterial culture followed standard protocols; cells were grown to stationary phase (OD 600 nm = 1.2) in supplemented brain-heart infusion (10 µg/ml hemin, 2 µg/ml NAD) shaking at 37°C, and then cells were harvested by centrifugation at 4000 rpm for 5 min before DNA extractions (35,36). Purification of ultra-high MW DNA fragments followed the Bionano Prep Cell Culture DNA Isolation Protocol. Briefly, cells were: (a) resuspended in cell buffer (∼5 × 10 9 CFU/ml); (b) embedded in 2% low-melt agarose (BioRad) plugs to minimize shearing forces; (c) lysed using Bionano cell lysis buffer supplemented with 167 µl Proteinase K (Qiagen), rocking overnight at 50°C; (d) RNase treated by adding 50 µl of RNase A solution (Qiagen) and incubating the plugs for 1 h at 37°C; and (e) washed in TE buffer with intermittent mixing. Finally, DNA was purified from low-melt agarose plugs by drop dialysis. Plugs were melted at 72°C, then incubated with 2 µl Agarase (Thermo Fisher Scientific) for 45 min. Melted plugs were dialyzed into TE buffer using 0.1 µm Millipore membrane filters for 45 min at a ratio of 15 ml buffer per ∼200 µl sample. DNA was allowed to homogenize overnight at room temperature before fluorometric quantification using the Qubit dsDNA BR kit (Thermo Fisher Scientific).

dsDNA synthesis

sgRNA oligos: sgRNAs were encoded on 55 nt DNA oligos with a 5′ T7 promoter sequence (5′-TTCTAATACGACTCACTATAG-3′), followed by the target 20mer sequence, complementary to the target gDNA sequence, and finally an overlap sequence (5′-GTTTTAGAGCTAGA-3′). Individually synthesized sgRNA oligos were then pooled into an equimolar mixture. sgRNA complementary oligo: an 80 nt long oligo was designed with the 3′ end complementary to the overlap sequence and the remainder encoding the Cas9 binding sequence (5′-AAAAGCACCGACTCGGTGCCACTTTTTCAAGTTGATAACGGACTAGCCTTATTTTAACTTGCTATTTCTAGCTCTAAAAC-3′). All oligos were obtained from Integrated DNA Technology. The sgRNA oligo mix was hybridized to the sgRNA complementary oligo (at 10 µM each) in 1× NEBuffer 2 (New England BioLabs, NEB) with 2 mM dNTPs at 90°C for 15 s followed by 43°C for 5 min. To complete dsDNA synthesis, the hybridization mixture was incubated at 37°C for 1 h with 5 U of Klenow Fragment 3′→5′ exo- (NEB). To degrade remaining linear ssDNA, the dsDNA was then treated with Exonuclease I in 1× Exonuclease I reaction buffer (NEB) for 1 h at 37°C. Finally, dsDNA was purified using the QIAquick Nucleotide Removal Kit (Qiagen) and eluted in 30 µl elution buffer. Quality and concentration were assessed using agarose gel electrophoresis and the Synergy H1 Hybrid Multi-Mode Reader (BioTek).
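As a small illustration of the oligo design rule just described, the Python sketch below assembles the 55 nt template oligo from its three parts; the T7 promoter and overlap sequences are copied from the protocol above, and the example spacer is one of the RR722 guides listed later in Table 2. This is only a bookkeeping helper, not part of the published pipeline.

```python
T7_PROMOTER = "TTCTAATACGACTCACTATAG"   # 21 nt, from the protocol above
SCAFFOLD_OVERLAP = "GTTTTAGAGCTAGA"     # 14 nt overlap to the 80 nt scaffold oligo

def sgrna_template_oligo(spacer_20mer):
    """Assemble the 55 nt ssDNA template oligo for one sgRNA:
    T7 promoter + 20 nt spacer + scaffold overlap, as described above."""
    spacer = spacer_20mer.upper()
    assert len(spacer) == 20 and set(spacer) <= set("ACGT"), "need a 20 nt DNA spacer"
    oligo = T7_PROMOTER + spacer + SCAFFOLD_OVERLAP
    assert len(oligo) == 55
    return oligo

# Example spacer: the guide designed against RR722 position 21722 (Table 2).
print(sgrna_template_oligo("GCTTTTTAGGATATCGTCCC"))
```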
sgRNA synthesis
sgRNA was synthesized using the HiScribe T7 High Yield RNA Synthesis Kit (NEB) following the Standard RNA Synthesis protocol. In summary, 1 µg dsDNA was incubated with 1× reaction buffer, 10 mM NTPs and T7 RNA polymerase enzyme mix at 37°C for 2 h, followed by DNase I treatment at 37°C for 15 min to remove dsDNA from the reaction. sgRNA was then purified using RNA Clean & Concentrator Kits (Zymo Research). The concentration of the purified sgRNA was assessed using the Synergy H1 Hybrid Multi-Mode Reader (BioTek).
CRISPR-Cas9 labeling of chromosomal DNA
For DNA nicking using the 48 and 162 sgRNA mixes (Supplementary Tables S1 and S2), 1.25 µM of the synthesized sgRNA was first incubated with 125 nM of Cas9 D10A (NEB) in 1× NEBuffer 3.1 (NEB) at 37°C for 15 min to form a sgRNA-Cas9 complex. 300 ng of the DNA sample was then added to the sgRNA-Cas9 complex mixture and incubated at 37°C for 60 min. For DNA nicking with both Cas9 and Nt.BspQI, 2.5 µM gRNA was first incubated with 63 nM of Cas9 D10A in 1× NEBuffer 3.1 at 37°C for 15 min. After that, 300 ng of DNA and 5 U of Nt.BspQI (NEB) were added to the sample mixture and incubated at 37°C for 2 h. The nicked DNA samples were then labeled using 5 U Taq DNA Polymerase (NEB), 1× ThermoPol buffer (NEB) and 266 nM free nucleotide mix (dATP, dCTP, dGTP (NEB) and Atto-532-dUTP (Jena Bioscience)) at 72°C for 60 min. The labeled sample was then treated with Proteinase K at 56°C for 30 min, and 1 µM IrysPrep stop solution (BioNano Genomics) was added to the reaction.
DNA loading and imaging
Labeled DNA samples were stained and prepared for loading on an Irys Chip (BioNano Genomics) following the manufacturer's instructions. The stained samples were then loaded, linearized and imaged inside the nanochannels following the established protocol. Each Irys Chip contains two nanochannel devices, which can generate >60 Gb of data from long chromosomal DNA fragments (>150 kb). The image analysis was done using BioNano Genomics commercial software (IrysView 2.5) for segmenting and detecting DNA backbone YOYO-1 staining, similar to early optical mapping methods, and localizing the green labels by fitting the point-spread functions.
Data analysis
Single-molecule maps were de novo assembled and aligned to the reference as described in previous work (37). Briefly, the assembler is a custom implementation of the overlap-layout-consensus paradigm with a maximum likelihood model. An overlap graph was generated based on the pairwise comparison of all molecules as input. Redundant and spurious edges were removed. The assembler outputs the longest path in the graph, and consensus maps were derived. Consensus maps are further refined by mapping single-molecule maps to the consensus maps, and label positions are recalculated. Refined consensus maps are extended by mapping single molecules to the ends of the consensus and calculating label positions beyond the initial maps. After merging of overlapping maps, a final set of consensus maps was output and used for subsequent analysis. RefAligner works similarly but compares molecules directly to an in silico nicked reference instead of first forming contigs. These maps were then opened in the IrysView visualization software from BioNano Genomics.
Using CRISPR-Cas9 labeling to interrogate individual bases and tag specific genomic regions of interest
The main strategy for long-range optical mapping is based on measuring the distances between the short sequence motifs recognized by nicking endonucleases (6-8 bp) on single long DNA molecules. The key information is the pattern of distances between motifs. Current labeling strategies can only detect single-base differences at polymorphisms that happen to coincide with nickase motifs, which has limited the potential applications of optical mapping. For example, the H. influenzae strains RR722 and RR3131 share a 100 kb region (819-916 kb of RR722, NC 000907, and 884-981 kb of RR3131, NC 007416) with 99% sequence similarity. The Nt.BspQI sequence motif maps for the two strains are almost identical for this region, except for one extra nick in the RR3131 genome, due to a single-nucleotide (adenine) difference from RR722; thus, the nicking enzyme labels RR3131's allele but not RR722's allele (Figure 1).
We devised a strategy to use multiplexed CRISPR-Cas9 labeling to distinguish single-nucleotide variants affecting 3′-NGG PAM sites, since the editing system has a strong requirement for the PAM immediately following the 20 bp recognition sequences. Genetic variation impacting PAM sites (i.e. if one of the G bases of a PAM in one genome is variant in another) is expected to strongly impact labeling, even if the genomes share the 20 bp recognition sequence. Thus, we predicted that strong differential labeling at gRNA-guided PAM variants could reliably differentiate the single base difference between two genomes over long distances.
To demonstrate single-base resolution of multiplexed CRISPR-Cas9 labels at variation affecting PAM sites, we designed gRNAs targeting three distinct 20mer recognition sequences for which, in each case, one of the two H. influenzae strains lacked a 3′-NGG PAM due to single nucleotide variation (Table 1). Labeling by both Nt.BspQI and CRISPR-Cas9 was performed in a single tube reaction, and the results of optical mapping are shown in Figure 1.
Single-base variation away from either G in the PAM nearly eliminated the corresponding labeling. At 'locus 1' (NTHI0914, hypothetical protein of RR3131, and HI 0755, conserved hypothetical protein of RR722), the two strains share the same 20 bp recognition sequence (5′-AAAAATTGCTGCATCTTCTT-3′) as the gRNA, but RR3131 has a 3′-TGG PAM sequence, while RR722 has a TGA sequence instead. CRISPR-Cas9-mediated optical mapping clearly shows high-efficiency labeling at position 885289 in RR3131 (∼90% labeling), whereas RR722 molecules totally lacked labels (0%) at position 819899 (red arrow at 'locus 1' in Figure 1). Similarly, at 'locus 3' (NTHI0947, 50S ribosomal protein L29 of RR3131), the labeling difference between the two strains can only be explained by the presence of alternative alleles in the two strains, in which RR3131 becomes labeled at 98698 with a perfect AGG PAM sequence; RR722 is not labeled at the syntenic position because of an ACG variant non-PAM sequence. At 'locus 2' (ribB), the sgRNA matches RR722 at 828196 with a CGG PAM sequence, and correspondingly, over 90% of molecules spanning the position were labeled (red arrow at 'locus 2' in Figure 1). In RR3131, no labeling was seen at the best-matching genomic position (893590), but in addition to a non-PAM 3′ end (CTG), the first and third positions were also mismatched. In summary, labeling efficiency was over 90% for gRNAs with an NGG PAM sequence, whereas almost none of the molecules were labeled if there is an alternative allele in the PAM sequence. This is in contrast to the variable labeling efficiencies seen for different mismatches from the 20 nt recognition sequences in the sgRNA experiments below. These results suggest that a customized optical mapping approach using gRNAs to target many of these polymorphisms (or 'PAM SNPs') could be an effective means to define long-distance haplotype structure in human genomes. It could also be applicable in other sample types, particularly mixed microbial specimens. The new DLE labeling strategy (6 bp motif) from Bionano Genomics provides 50% more labeling sites than Nt.BspQI labeling (7 bp motif) in the human genome, which may resolve some haplotype features. However, the density of 1 SNP per megabase in these motifs is still insufficient to construct whole-genome haplotypes, even given a long average DNA read length of 300 kb.
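The allele-discrimination logic can be summarized in a few lines of code. The hedged Python sketch below applies the simple rule used above: a guide is predicted to label an allele only if the 20 bp protospacer matches and the adjacent PAM is NGG. It is a rule of thumb rather than a quantitative Cas9 activity model; the locus 1 sequences are taken from the text.

```python
def pam_snp_label_prediction(allele_a, allele_b, spacer):
    """Predict differential CRISPR-Cas9 labeling at a PAM SNP.

    Each allele string is the 23 bp locus: 20 bp protospacer + 3 bp PAM.
    A guide matching `spacer` is predicted to label an allele only if the
    protospacer matches and the PAM is NGG; single-base PAM variants
    (e.g. TGA, ACG, CTG in the loci above) abolish labeling in this
    simplified model.
    """
    def labeled(allele):
        protospacer, pam = allele[:20], allele[20:23]
        return protospacer == spacer and pam[1:3] == "GG"
    return {"allele_a": labeled(allele_a), "allele_b": labeled(allele_b)}

# 'Locus 1': both strains share the 20 bp recognition sequence, but only
# RR3131 carries a 3'-TGG PAM; RR722 has TGA and is predicted unlabeled.
spacer = "AAAAATTGCTGCATCTTCTT"
print(pam_snp_label_prediction(spacer + "TGG", spacer + "TGA", spacer))
```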
We performed an in silico analysis of whole genomes from the 1000 Genomes Project (38,39) to determine the potential number and distribution of heterozygous PAM SNPs in the human genome. Out of 161 million NGG sites in hg38, there are, on average, 220 000 heterozygous PAM SNPs in a single diploid human genome. In addition, there are on average 40 000 heterozygous indels (>4 bp) within potential CRISPR-Cas9 recognition sequences (20 bp + NGG); >2 bp heterozygous indels within the 20 bp gRNA recognition sequence preferentially target the matching allele. Together, the genomic density of these sites is ideal for generating long-distance haplotypes using CRISPR-Cas9 labeling of PAM sites with single molecules longer than 100 kb, as in these experiments.
Multiplexed sgRNA preparation in a single tube reaction
We adapted the previously described method to synthesize multiple sgRNAs in a single tube reaction (29). Figure 2 shows the synthesis scheme and workflow. The key difference between our approach and the available commercial kit (EnGen® sgRNA Synthesis Kit, S. pyogenes, from NEB) is that we have a separate step to generate the dsDNA before the RNA transcription reaction. The mixture of multiple sgRNA oligos and the sgRNA complementary oligo was first mixed at a 1:1 ratio in reaction buffer. After Klenow exo- extension to generate dsDNA, the reaction was treated with Exonuclease I to remove extra ssDNA. The purity and size of the dsDNA were further confirmed with gel electrophoresis before purification with a PCR cleanup column. We typically obtain 5 µg dsDNA at 0.2 µg/µl concentration. After sgRNA synthesis using T7 RNA polymerase, the sample was treated with DNase I to remove dsDNA and purified with an RNA cleanup column. We normally obtain 40 µg sgRNA at 2 µg/µl concentration. This is enough to run ∼230 CRISPR-Cas9 labeling reactions with 300 ng target DNA sample each time. The purity and correct size of the dsDNA are critical to the synthesis of multiple sgRNAs. We successfully synthesized 162 sgRNAs in a single tube reaction.
Multiplexed sgRNA optical mapping
In the second customized mapping strategy, we customized the mapping patterns across a genome by selecting sets of specific single-guide RNAs (sgRNAs) for features of interest. This is particularly useful in designing different patterns to differentiate similar genomes or conserved sequences between strains or haplotypes. In designing the patterns, it is critical to avoid evenly distributed sgRNAs, because only long molecules across the entire pattern can be uniquely aligned. To test this, we first designed two custom optical mapping patterns using the different H. influenzae bacterial strains, lab strain Rd KW20 (RR722), and a marked derivative of clinical isolate 86-028NP (RR3131) as the model systems.
48 sgRNAs were designed to target a 300 kb region of RR722 (0-350 kb of NC 000907), which shares high sequence similarity with RR3131 strain (0-315 kb NC 007416). Each sgRNA was designed to have a single perfect match of 20 bases upstream of PAM NGGs based on the Rd reference genome (Supplementary Table S1). These 48 sgRNAs are evenly distributed across the 300 kb region of RR722 (RR722 reference map in Figure 3A). Dark lines on the blue bar indicate predicted sgRNA locations. Out of 48 sgRNAs, 33 sgRNAs also have a single perfect match of 20 bases upstream of a PAM NGG on the RR3131 strain. However, the predicted targeting locations of these 33 sgRNAs form an unevenly distributed mapping pattern (RR3131 reference map in Figure 3B), indicative of structural variation between the genomes.
Figure 3. (A) Mapping results of RR722 molecules labeled with the set of 48 sgRNAs (Supplementary Table S1). The lines in the blue bar (designed reference map of RR722) represent the locations of the 48 sgRNAs on RR722. The yellow lines below the reference are labels with dark green dots representing where labels matched to the reference and light green dots representing labels not found in the reference. (B) Mapping results of RR3131 molecules labeled with the set of 48 sgRNAs (Supplementary Table S1). The lines in the blue bar (designed reference map of RR3131) represent the locations of the 48 sgRNAs on RR3131. The yellow lines below the reference are labels with dark green dots representing where labels matched to the reference map and light green dots representing labels not found in the reference map. The red arrows indicate the off-target labeling.

We then generated a single mixture of 48 sgRNAs, which was used to label and map targeted regions in both the RR722 and RR3131 genomes. The individual molecules are indicated as yellow lines that are aligned to blue references in Figure 3. The two data sets show similar characteristics with an average molecule length of 255 and 249 kb for
RR722 and RR3131 respectively. But with the same amount of raw data, three times more molecules could be uniquely aligned to the RR3131 strain than the RR722 strain, even though RR3131 has fewer perfectly matched sgRNAs (Figure 3A and B, respectively). This is due to the fact that the shorter molecules will generate ambiguous alignments to the evenly distributed patterns. Longer molecules are needed to map across the whole evenly distributed reference, which results in fewer molecules aligned to RR722 sgRNA map. This clearly shows that an unevenly distributed mapping pattern could result in better mapping.
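The effect of label spacing on alignment ambiguity can be illustrated with a toy calculation. The Python sketch below slides a molecule's inter-label spacing pattern along a reference spacing pattern and counts exact placements (no sizing error or missing labels, unlike real data): an evenly spaced reference admits many placements for a short molecule, whereas an uneven pattern typically admits only one. All numbers are invented for illustration.

```python
def alignment_placements(reference_spacings, molecule_spacings, tol=0.0):
    """Count the positions at which a molecule's inter-label spacing pattern
    fits along a reference label map (error-free toy model: spacings must
    match within `tol`)."""
    n, m = len(reference_spacings), len(molecule_spacings)
    hits = []
    for start in range(n - m + 1):
        window = reference_spacings[start:start + m]
        if all(abs(a - b) <= tol for a, b in zip(window, molecule_spacings)):
            hits.append(start)
    return hits

even = [10] * 12                               # evenly spaced reference labels
uneven = [10, 7, 13, 4, 16, 9, 11, 6, 14, 8, 12, 5]
short_molecule = uneven[3:6]                   # a short molecule covering 3 spacings
print(alignment_placements(even, [10, 10, 10]))   # many ambiguous placements
print(alignment_placements(uneven, short_molecule))  # a single unambiguous placement
```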
Main sources of off-target labeling
CRISPR-Cas9 tagging is prone to off-target labeling. It is important to reduce off-target labeling as much as possible, especially when trying to use custom-target mapping to map sequences with high similarity. We aligned the 48 sgRNAs (20-base recognition sequence) against the RR3131 reference. Fifteen of the above 48 sgRNAs have imperfect matches to the RR3131 genome. Some of them result in off-target labeling in RR3131. In Figure 3B, many single molecules show off-target labels (light green dots) at six different locations, which are present in the RR722 genome but not in RR3131, and are therefore absent from the reference map.
Seven of these 15 sgRNAs show several partial matches (<8 bases) across the 300 kb region, but without a PAM NGG next to the best match, and therefore could not produce labels. These seven sgRNAs are designated as 'N/A' in Supplementary Table S1 and are unlikely to contribute to off-target labeling. Six of the remaining eight sgRNAs matched the RR3131 reference at off-target loci with a PAM motif and a single mismatch in the 20-base recognition sequences. These six were likely contributing to off-target labeling and are designated as 'off-target' in Table 2.

Table 2. Target sequences of the eight probes without a perfect hit in the RR3131 genome:

RR3131 21698 (off target) GCTTTTTAAGATATCGTCCCAGG
RR722 21722 GCTTTTTAGGATATCGTCCCNGG

RR3131 60913 (off target) GCAGTATCCACCCCCACTGCAGG
RR722 59529 GCGGTATCCACCCCCACTGCNGG

RR3131 86656 (off target) GTTACATTACACACAAATTTTGG
RR722 86065 GTTACATTACACACAAACTTNGG

RR3131 151264 (off target) GGAGCGTAAATTCTTAACATTGG
RR722 94393 GGGGCGTAAATTCTTAACATNGG

RR3131 316470 (off target) TGAAGGGATAAATATTGCGATGG
RR722 253327 CGAAGGGATAAATATTGCGANGG

RR3131 334078 (off target) TGGCACTTAAAAGAGGAATGGGG
RR722 270963 TAGCACTTAAAAGAGGAATGNGG

RR3131 281336 (no label) TTGTTTTGCGATATAATACGAGG
RR722 219206 TTGTTTTACGATATAATACGNGG

RR3131 359914 (no label) GCGTAAAGCATTAGATAGCTTGG
RR722 296956 TAATCAAGCATTAGATAGCTNGG

Two rows are shown for each of the eight probes that did not have a perfect hit in the RR3131 genome. The second row is the designed probe, named for its hit location on the RR722 genome. The upper row is the sequence found in the RR3131 strain, named for its location. Bold indicates a PAM sequence motif (NGG). Red indicates a base that does not match the designed probe. The last two probes did not have a label seen consistently in the aligned data.

The final two sgRNAs
of the 15 did not produce a label in RR3131 and are listed as 'No label'. Of the two, the sgRNA at 219206 of RR722 (TTGTTTTACGATATAATACGNGG) also shows a single base mismatch in the RR3131 strain, but did not result in off-target labeling. The sgRNA at 323878 of RR722 (TAATCAAGCATTAGATAGCTNGG) has several mismatches close to the 5′ end and also did not result in off-target labeling. All six sgRNAs that caused high-frequency off-target labeling had a single mismatch to the target sequences of RR3131. Five of six had the single mismatch close to the 5′ end, distal from the PAM sequences, except the sgRNA at 86065 of RR722 (GTTACATTACACACAAACTTNGG), with the single mismatch at the 3rd base upstream of the PAM. For example, the sgRNA at 21722 of RR722 (GCTTTTTAGGATATCGTCCCNGG) is designed to target the RR722 genome at coordinate 21722, but it also matches a syntenic position in RR3131 (at coordinate 21698) with a single mismatch (G/A) at the ninth base from the 5′ end. The off-target labeling of the RR3131 chromosome around 21698 was likely caused by this sgRNA. For the same reason, the sgRNA at 59529 of RR722 (GCGGTATCCACCCCCACTGCNGG) likely generated the off-target labeling on RR3131 around 60913, with a single mismatch at the third base. Notably, the off-target labeling on RR3131 is more efficient with the sgRNA designed for RR722 at the 59529 locus than with the sgRNA of RR722 at the 21722 locus, which may reflect that its mismatch is closer to the 5′ end.
Overall, these results are consistent with the observation that the last 8-10 seed bases of the sgRNA upstream of the PAM are more important for reducing off-target labeling (40)(41)(42)(43), and that multiple mismatches also reduce off-target labeling.
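A hedged code sketch of this mismatch bookkeeping is given below: for a candidate site that already carries an NGG PAM, it reports how many mismatches there are and whether any fall in the PAM-proximal seed. It intentionally stops short of predicting labeling, since the data above include an exception (the 86065 guide) where a seed mismatch still produced off-target labels; the 8-base seed length is the value used in the filtering pipeline described in the next section.

```python
def classify_candidate_site(guide_20mer, site_20mer, seed_len=8):
    """Summarize mismatches between a guide and a candidate site that already
    carries an NGG PAM.  Following the observations above, a single mismatch
    near the 5' end (outside the PAM-proximal seed) frequently still allows
    off-target labeling, whereas seed mismatches or multiple mismatches
    usually, though not always, suppress it.  Index 19 is adjacent to the PAM.
    """
    mismatches = [i for i, (g, s) in enumerate(zip(guide_20mer, site_20mer)) if g != s]
    seed_start = 20 - seed_len
    return {
        "n_mismatches": len(mismatches),
        "mismatch_positions": mismatches,
        "seed_mismatches": [i for i in mismatches if i >= seed_start],
    }

# RR722-21722 guide vs its single-mismatch RR3131 site: one mismatch at index 8
# (ninth base from the 5' end, outside the seed), consistent with the observed
# off-target labeling at that locus.
print(classify_candidate_site("GCTTTTTAGGATATCGTCCC", "GCTTTTTAAGATATCGTCCC"))
```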
Customized optical mapping of a whole bacterial genome
Based on our off-target labeling results and the reports that the eight seeding bases immediately upstream of the PAM sequence (NGG) have higher discrimination (40,41), we optimized the design pipeline to select a set of sgRNAs spanning the full RR722 genome in a series of four stepwise filters: (a) We first collected all possible sgRNAs with a single perfect match to the RR722 reference (all 20mers followed by a 3′ PAM NGG that occur only once in RR722); 40 870 such possible sgRNAs were available. (b) From those, we kept only guides whose 8-base seeding sequences proximal to the PAM also have a single perfect hit to the reference. If an 8-base seed had multiple perfect hits to the reference, it was discarded, since these had a high chance of contributing to off-target labeling. The remaining sgRNAs (15 339) all had a single perfect hit of 20 bases and a single perfect hit of the 8-base seeding sequence. (c) Since all 8-base seeding sequences have multiple hits with a single mismatch, we then applied a third filter to minimize the number of hits in the 8-base seeding sequences with single mismatches to RR722. This resulted in 1507 gRNAs with <5 singly mismatched hits in all 8-base seeding sequences. (d) From this dataset, we further tried to minimize off-target nicks by keeping the sgRNAs with one more mismatch in the first 12 bases from the 5′ end (415 remain). The sgRNA design flow chart is summarized in Figure 4. The final set of sgRNAs has only one perfect hit across the RR722 reference sequence in their 20-base recognition sequences, and fewer than 5 hits with a mismatch in the 8-base PAM-proximal seeding sequence plus another mismatch in the first 12 bases from the 5′ end. After the four filters to minimize off-target labeling, a final manual adjustment was made to avoid evenly distributed mapping patterns. This resulted in a final set of 162 gRNAs (Supplementary Table S2) with an average density of 9 predicted labels per 100 kb on RR722. The labeling density is similar to the Nt.BspQI labeling density used in commercial optical mapping kits (1).
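The first three filters lend themselves to a compact script. The Python sketch below mirrors steps (a)-(c) with brute-force exact and single-mismatch counting on the forward strand only; the real pipeline also considers the reverse strand, applies the additional 5'-end mismatch filter of step (d), and ends with the manual spacing adjustment, none of which are reproduced here.

```python
from collections import Counter

def design_sgrna_panel(genome):
    """Simplified sketch of filters (a)-(c) for selecting unique sgRNAs
    from a genome sequence string (forward strand only)."""
    def kmers(k):
        return (genome[i:i + k] for i in range(len(genome) - k + 1))

    # (a) all 20mers followed by an NGG PAM that occur exactly once in the genome
    counts20 = Counter(kmers(20))
    candidates = [genome[i:i + 20] for i in range(len(genome) - 22)
                  if genome[i + 21:i + 23] == "GG" and counts20[genome[i:i + 20]] == 1]

    # (b) keep guides whose 8 bp PAM-proximal seed is also unique in the genome
    counts8 = Counter(kmers(8))
    candidates = [g for g in candidates if counts8[g[12:20]] == 1]

    # (c) keep guides whose seed has fewer than 5 single-mismatch hits
    def single_mismatch_hits(seed):
        return sum(1 for w in kmers(8)
                   if sum(a != b for a, b in zip(w, seed)) == 1)
    candidates = [g for g in candidates if single_mismatch_hits(g[12:20]) < 5]

    # step (d) and the manual spacing adjustment are not reproduced in this sketch
    return candidates
```

Run on a real genome string, this returns the candidate guide list prior to the 5'-end mismatch filter and the spacing adjustment.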
This set of 162 sgRNAs was synthesized in a single-tube reaction and used to label RR722 chromosomal DNA. The resulting samples were run on the optical mapping setup described in the methods section. We collected a total of 0.5 Gb of data with an average molecule length of 244 kb. Figure 5 shows a subset of single molecules (yellow lines) with good alignments to this custom-nicked reference (blue bars) with 100× overall coverage. As expected, no high-frequency off-target labels (>30%) were observed for this set of 162 sgRNAs. We then aligned the same set of 162 sgRNAs to the RR3131 reference sequence; only 90 of the 162 perfect hits remained, and these form the RR3131 reference map shown in Figure 5B. When we aligned the labeled RR722 molecules to the RR3131 reference map, only 8 molecules aligned. These are shorter molecules around 100 kb that are aligned to two highly conserved regions, 884-981 kb of RR3131 (819-916 kb of RR722, NC 000907 and 884-981 kb of RR3131, NC 007416.02) and 1211-1254 kb of RR3131 (1177-1220 kb of RR722, NC 000907 and 1211-1254 kb of RR3131, NC 007416), respectively. If we apply the normal filter of molecules longer than 150 kb, as shown in Figure 5A, none of the molecules aligns to the RR3131 sgRNA map. This clearly demonstrates that the custom-designed sgRNAs can uniquely identify the genomic structure of the two strains.
CONCLUSION
Long-read sequencing technologies like Oxford nanopore and Pacific Biosciences can routinely reach an average of 10-30 kb read lengths and can provide assembled haplotype structures with N50 exceeding a Megabase and at single-base resolution (44). Recent advancements with DNA preparation and Oxford nanopore can have 50% of read lengths exceed 100 kb (45) and rare sequences can be more than a Megabase (45,46); in combination with Pacific Biosciences Sequel2 HiFi reads, this can generate fully contiguous human chromosome sequences (47), albeit still at a high cost. In general, obtaining accurate long sequence reads is dramatically more informative than only measuring spacings between short sequence motifs on long DNA fragments. Long-read sequencing will be the future. However, optical mapping provides a useful tool to infer extremely long-range haplotype information at a low cost, especially with the new ability to design custom labeling patterns. The average fragment length of optical reads can be ∼300 kb, and the long tail of extremely long fragments is correspondingly much larger (36). In turn, both mapping molecules to reference sequences and assembly-based inference can span much longer haplotype structures, especially in the regions containing long complex repetitive element arrays, and requiring less overall yield. This capability allows the optical mapping to detect large/complex SVs (>10 kb), which sequencing technologies would potentially miss (13). Optical mapping also provides an important independent validation tool for identifying misassemblies (47).
Here, we show for the first time that individual alleles can be differentiated at arbitrary loci by genome-scale optical mapping using CRISPR-Cas9 fluorescent labeling. Appropriately designed probe sets could provide an effective means to define long-distance haplotype structure in target regions of complex genomes like that of humans or to distinguish among strains and haplotypes in mixed microbial samples.
Traditionally, genome-scale optical mapping is based on measuring distances between short (6-8 bp) sequence motifs across the genome, which were labeled either via restriction enzyme cutting or via fluorescent tagging with a nickase or methyltransferase. However, the distribution of motifs is fixed for any given genome. Here we also showed for the first time that one can customize the mapping patterns by designing a custom set of multiple sgRNAs to fluorescently tag any 20 bp sequences with the CRISPR-Cas9 genome editing system. This will greatly expand the applications of genome mapping in targeting specific features of interest, clinically relevant structural variants, repetitive regions, and other regions inaccessible to sequence motif labeling. Moreover, one added benefit is that our multiple sgRNAs provide more sequence information than motif mapping, since they define multiple 20mers instead of the same 6-8mer. This will greatly increase the accuracy of pinpointing the breakpoints of structural variants and other specific genomic features. We have performed in silico mapping of repetitive elements such as Alu and LINE-1 in the human genome assembly, and we estimate that one sgRNA from Alu and one from LINE-1 will result in ∼90% coverage of the human genome. This coverage is similar to that of existing optical mapping schemes with Nt.BspQI and DLE labeling offered by Bionano Genomics, although understanding on- and off-target cutting by these gRNAs will be complicated. Thus, rational probe design might allow a small number of probes to interrogate long-range haplotype and repeat structures in complex genomes.
Figure 5. Mapping results of molecules labeled with the set of 162 sgRNAs (Supplementary Table S2). (A) The lines in the blue bar (designed reference map of RR722) represent the locations of the 162 sgRNAs on RR722. The yellow lines below the reference are labels with dark green dots representing where labels matched to the reference and light green dots representing labels not found in the reference. (B) Alignment results to RR3131.

Since optical mapping does not rely on reading single base information, the cost of obtaining ultra-long optical reads is likely to remain low, although at the expense of full sequence information. Targeted enrichment of a single 0.3-1 Mb region for long-read sequencing remains challenging and costly. Our custom-designed optical maps require no target enrichment to define long-distance haplotype structure across target regions while maintaining a low cost at about $500 per diploid human genome. In targeted optical mapping, the cost can be further reduced by combining sets of sgRNAs designed to haplotype different regions. Careful design of sgRNA probes to target repetitive elements can further expand coverage and reduce probe costs. Thus, this expanded flexibility of genome-scale optical mapping shows it can continue to fill an important role for dissecting complex genomes and genomic variation, as long-read sequencing technologies continue to advance.
The custom-designed genomic labeling strategies described here could find wide applications for analyzing complex genomes like humans', including determining long-range haplotype structure, higher precision breakpoint calling for complex structural variants, and improved resolution of complex repeat arrays. These strategies may also find applications in microbial comparative or community analyses, since one can design gRNAs to identify characteristic markers on large genomic fragments of different microorganisms (e.g. pathogenic species) and virulence genes (e.g. antibiotic resistance genes and alleles) (15).
DATA AVAILABILITY
All additional data is available in the supplementary section.
SUPPLEMENTARY DATA
Supplementary Data are available at NAR Online. | 2020-10-29T09:02:48.195Z | 2020-11-24T00:00:00.000 | {
"year": 2020,
"sha1": "8a697bd67a19fc7d92ddc2af3f4cf5db6fb62fe6",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/nar/article-pdf/49/2/e8/36084788/gkaa1088.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "29bede6e1feb4ea845967ae67defe9cda096417e",
"s2fieldsofstudy": [
"Engineering",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
134774113 | pes2o/s2orc | v3-fos-license | Jurassic–Cretaceous radiolarian-bearing strata from the Gresten Klippen Zone and the St. Veit Klippen Zone (Wienerwald, Eastern Alps, Austria): Implications for stratigraphy and paleogeography
Abstract Two sections of the klippen zones in the Wienerwald area have been investigated for their stratigraphy: (1) The Gern section of the Main Klippen Zone, a part of the Gresten Klippen Zone, and (2) the St. Veit Klippen Zone in the Lainz Tunnel and the neighboring outcrops in western Vienna. New biostratigraphic data are based on radiolaria from siliceous intervals and a few findings of calcareous nannofossils from marlstones. In the Gresten Klippen Zone, radiolarian assemblages from limestones of the Gern locality indicate a middle Oxfordian to early Kimmeridgian age of the Scheibbsbach Formation. Radiolarian and nannofossil data from the St. Veit Klippen Zone in the Lainz railway tunnel locality, as well as correlated outcrops from the Lainzer Tiergarten and the Gemeindeberg in the southwest of Vienna, indicate the presence of mainly Bajocian to lower Oxfordian red radiolarites and cherts (Rotenberg Formation). Siliceous, grey limestones and cherts of the Fasselgraben Formation range from the upper Oxfordian–Kimmeridgian to the Valanginian–Barremian. The Main Klippen Zone was derived from the European margin to the north, and this zone is regarded as a Helvetic paleogeographic unit. The St. Veit Klippen Zone in the Lainz Tunnel section contains no ophiolitic material and shows a tectonic contact with the surrounding Rhenodanubian nappe system, which indicates no primary sedimentary contact of the St. Veit Klippen Zone with the Flysch units, as well as demonstrating the presence of two structurally separated Alpine tectonic units. Thus, a direct correlation with the Ybbsitz Zone is not supported, and an original paleogeographic position in the transition from the Penninic Ocean to the Austroalpine continental fragment is proposed.
In this paper, we present new stratigraphic data from both the MKZ and the SVK, based on radiolarian and a few nannofossil data. The paleontological research based on radiolarian assemblages allowed more precise chronostratigraphic assignments. These data serve, on the one hand, for correlations with other klippen parts within the Eastern Alps and the Western Carpathians; on the other hand, we use these data for refining paleogeographic and tectonic models for the study area.
Geological setting
The two investigated klippen zones are part of the orogenic fold-and-thrust belt in the Wienerwald area, which forms the easternmost part of the Alps near Vienna (Fig. 1). Klippen zone rocks are poorly exposed and, in structurally complex positions, intermingled with successions of the Rhenodanubian nappe system (Schnabel, 1997, 2002; Egger and Wessely, 2014; Egger, in Egger and Ćorić, 2017). Several phases of thrusting, especially late orogenic out-of-sequence thrusting (Trautwein et al., 2001; Mattern and Wang, 2008) and strike-slip movements (Peresson and Decker, 1996), resulted in the structurally complex situation of the Wienerwald area. Therefore, original paleogeographic positions, provenance and settings of the involved tectonic units, from klippen to flysch, are still strongly debated, and several, partly contradictory models and paleogeographic reconstructions do exist (e.g. Faupl and Wagreich, 1992; Schnabel, 1992, 2002; Faupl, 1996; Trautwein et al., 2001; Mattern and Wang, 2008; Egger and Wessely, 2014).
Gresten Klippen Zone: MKZ
The MKZ in the area of research (Figs. 1 and 2) represents a narrow, strongly tectonised zone within the SE part of the Rhenodanubian nappe system of the Wienerwald area, mainly between the Greifenstein and Laab nappes. It is regarded as a tectonic slice ('Schuppe') of the Ultrahelvetic Zone (Tollmann, 1985; Schnabel, 1992; Faupl and Wagreich, 2000), which was originally situated along the passive continental margin to the north of the Penninic (Rhenodanubian Flysch) ocean basin (Faupl and Wagreich, 2000), termed Alpine Tethys by, for instance, Stampfli et al. (2002) and Handy et al. (2010).
In the studied part, the MKZ is built up mainly by variegated coloured (red and grey) Upper Cretaceous marlstone and claystone (Buntmergelserie), as well as by rocks exposed in the well-known abandoned Gern quarry (Gottschling, 1966; Schnabel, 1993), from which Tithonian-Lower Cretaceous whitish limestones and traces of Upper Cretaceous variegated marls have already been described. These strata are overthrusted onto the Greifenstein Formation (lower Eocene) of the Greifenstein Nappe towards the north and are tectonically covered by the overthrusting Kaumberg Formation (Cenomanian-lower Campanian) of the Laab Nappe, both representing flysch-type successions of the Rhenodanubian nappe system. A more complete succession of the MKZ is exposed nearby in several separated outcrops between Stollberg and Bernreith, to the west of Gern (Gottschling, 1966). The oldest part of this succession is represented by the Lower Jurassic Gresten Formation, sandstones with thin coal intercalations found only in the western part near Bernreith, as well as grey and brown sandy marls and marly limestones lacking precise stratigraphic data, probably representing Lower-to-Middle Jurassic strata. More frequent are massive and platy, mainly red and green, limestones with radiolaria, alternating with thinner-bedded greenish and reddish calcareous marls. Their deposition took place from the Bathonian to the Tithonian. The Upper Jurassic (Oxfordian or Kimmeridgian) is represented by a block of dark grey cherts, found also south of the Gern quarry. A 2-m-thick fragment of green and red radiolarites, which was found only in one place, is also attributed to the Upper Jurassic (Oxfordian-lower Tithonian). Generally, younger strata are represented by light grey and spotted Tithonian Aptychus limestones (Stollberg Beds). In the uppermost part of these limestones, intercalations of more indurated blocks of Lower Jurassic-to-Lower Cretaceous rocks occur (Küpper, 1962; Gottschling, 1966; Prey, 1979; Prey, in Schnabel, 1993). Cžjžek (1842) was the first to describe the fine-grained carbonates from the area of Stollhofen, named 'Stollberg Beds' by Hauer and Richthofen (1859) and 'Blassenstein Formation' by Trauth (1950). The Gern locality south of Laaben was reported to contain blocks of white Tithonian-Valanginian limestones with Aptychus, described by Prey (in Schnabel, 1993) and Egger and Wessely (2014).
During mapping for the Austrian Geological Survey, a new exposure of variegated limestones and marls of the MKZ was found in the year 2006 (Ślączka, 2007). The outcrop ( Fig. 3) is situated along a small southern tributary of the Gernbach stream (49° 05′ 04″ N, 15° 50′ 50″ E) near Decker, 1987Decker, , 1990Schnabel, 1992Schnabel, , 2002 and the St. Veit Klippen Zone (based on Janoschek et al., 1956;Prey, 1975;Pfersmann, 2013). the Kahlenberg and Hütteldorf formations), with a tectonic, strongly sheared contact between SVK and mainly the Hütteldorf Formation. These flysch formations form part of the Kahlenberg Nappe sensu Prey (1979) and Schnabel (2002). According to Egger (2013) and Egger and Wessely (2014), this unit above the SVK is attributed to the Satzberg slice, which, together with the Kahlenberg slice, forms part of the Greifenstein Nappe. In addition, to the southeast of the SVK outcrop belt, a thin unit of the same flysch sediments covers the SVK and separates this unit from the bordering Northern Calcareous Alps to the south (Fig. 1). Based on the Lainz Tunnel structural data, the SVK forms a complex anticline covered structurally by the Upper Cretaceous flysch units of the Satzberg slice (Pfersmann, 2013).
Radiolaria
The following radiolarian assemblages were analysed both in thin sections and after extraction of skeletons from the indurated rock samples: (1) 10 samples from the Gern succession of the MKZ and (2) 14 samples from the Lainz Tunnel as well as from outcrops of the SVK at the Lainzer Tiergarten (Saulackenmais, Wildpretwiese; e.g. Trauth, 1930, 1950) and neighboring outcrops (Gemeindeberg, Antonshöhe; e.g. Prey, 1975, 1991). Standard extraction methods included treatment with diluted hydrochloric acid (HCl) and low-concentration hydrofluoric acid (HF 4%). The residuum was washed, with a final sieve size of 63 µm. Taxa were identified using both transmission light microscopy and scanning electron microscopy. Taxonomic concepts applied during this study mainly follow those in Baumgartner et al. (1995), as well as De Wever et al. (2001) and Bąk et al. (2018a).
Calcareous nannofossils
Smear slides of calcareous nannofossils were prepared using sediment powder and distilled water; air-dried samples were fixed on glass slides. The samples were examined qualitatively for biostratigraphic markers using a polarised-light microscope (1000× magnification). We used Perch-Nielsen (1985) and Bown and Cooper (1998) for nannofossil taxonomy.
Lithology
The succession of the klippen (Figs. 2 and 4), which was well visible after a flooding event in 2006 in a creek from Thomasberg (Fig. 5), starts with a layer of grey and reddish pelitic radiolarian limestone 30 cm in thickness (sample 1S; Fig. 4A and Fig. 6A). It is followed by a 1-m succession of thin- and medium-bedded greenish-grey and red radiolaritic limestones, partly laminated. Lower Cretaceous ('Neocomian') black marls appear up to a thickness of 20 m. The uppermost part of the Lower Cretaceous (Albian 'Gault') is represented by green limestones and green spotted marly shales, light red dotted marls (upper Albian, Gottschling, 1966), as well as by red and greenish marlstones of Cenomanian age (Wagreich, 2008). Upper Cretaceous deposits are mainly represented by variegated marlstone ('Buntmergelserie'). Another Upper Cretaceous lithofacies is represented by thick-bedded grey marly limestones, exposed to the east of Gern, in a right tributary of the Laaben River near Wöllersdorf, south of the village Laaben. In the upper part of the same creek, green-grey marly shales and brown mudstones of the Eocene, including Bartonian, are exposed (Ślączka, 2012).
St. Veit Klippen Zone (SVK)
The first information on limestones of probable Jurassic age in the area of St. Veit in the (now) southwestern districts of Vienna was reported by Cžjžek (1849, 'St. Veiter Klippenhügel'). Subsequently, pioneering works were published by Griesbach (1868, 1869) and Hochstetter (1897), who defined the overall stratigraphic succession. Neumayr (1886) and Uhlig (1890), based on their knowledge of the Carpathians, were the first authors who directly correlated the SVK with the PKB ('pieninische Klippenzone') based on the tectono-geomorphological style and the similarities of some rocks. Trauth (1930, 1950) was the first author who gave more detailed stratigraphic information on the rocks, followed by Janoschek et al. (1956), Prey (1975, 1979), Schnabel (1997, 2002) and Pfersmann (2013).
The original paleogeographic position of the SVK is still debated, and interpretations may include a Helvetic (= European), Penninic (= Penninic Ocean in a wider sense) or Austroalpine derivation (e.g. Trauth, 1950; Prey, 1975, 1979; Schnabel, 1992, 2002; Faupl and Wagreich, 1992, 2000). Most recent compilations correlate the SVK with the Ybbsitz (Klippen) Zone (Decker, 1990; Schnabel, 1979, 1992; Egger and Wessely, 2014), an oceanic/ophiolitic unit interpreted as a (southern Penninic) part of the primary base of the Rhenodanubian nappe system (e.g. Schnabel, 1992, 2002). Our study is based on SVK rocks sampled during the construction work of the Lainz Tunnel (Lainzer Tunnel, years 2007; Pfersmann, 2013), including some outcrop samples from the Lainzer Tiergarten and the nearby outcrops at Gemeindeberg, Rotenberg and Antonshöhe in Vienna (Figs. 1 and 2). The route of the Lainz Tunnel, built by ÖBB Infrastruktur GmbH, passes through the southwestern outskirts of Vienna, below the protected park area of the Lainzer Tiergarten, to link the southern and western rail routes with Vienna Central Railway Station, crossing Rhenodanubian units (Pfersmann and Wagreich, 2009), the SVK and Neogene strata.
Figure 5: Outcrop photographs of the lithofacies within the Main Klippen Zone exposed along the southern Gernbach tributary from Thomasberg. All photographs are oriented to display upsection towards the right. (A) Medium- and thin-bedded, platy, planar, greenish radiolaritic limestone with red lenses and lamination within the higher part of the beds (Oxfordian, lowermost part of the profile; sample 1S). (B) Thin- and medium-bedded greenish-grey limestones and red lenses with intercalation of medium-bedded whitish limestones (middle part of profile, sample 3S; scale is 1 m). (C) Medium- and thick-bedded light grey and red limestones (middle part of the profile). (D) Tectonic contact between thin-bedded, greenish radiolaritic limestone (on the left side of the picture) and the higher part of the profile, which begins with pebbly mudstone (on the right part of the picture) and light-grey, marly limestones with clasts up to 20 cm in diameter and sporadic Aptychus. Lower surface is erosive (sample 5A, scale is 1 m). (E) Red, yellow and whitish marly limestones, as well as thin- and medium-bedded marls with intercalations of thin, red and marly shales in the upper part of the profile. (F) Red and green laminated pelitic marly limestones and marls. On the left side, slump structures and elongated whitish clasts are present, which decrease to the right (samples 8-11, scale is 1 m).

Higher up, limestones with thin intercalations of marlstones follow. The sequence continues with 2 m of predominantly red, pelitic radiolaritic limestones of thickness up to 50 cm, as well as a 7-m-thick interval of pelitic, light grey and greenish radiolarian limestones that are sporadically dotted and locally laminated (samples 4S and 5S) and a 0.5-m layer of red, medium-bedded, pelitic limestones with green lenses (sample 2S; Fig. 4B). Further up, a 1-m-thick complex of greenish-grey, medium-bedded pelitic limestones with sporadic red lenses is exposed, covered by a 20-cm-thick layer of white pelitic limestone and 1 m of light grey and white, medium-bedded, pelitic sandy limestones, as well as layers of red marly limestones with sporadic lenses of whitish limestones (Fig. 4E) and greenish marlstones. These nodular limestones often display slumping structures increasing upwards in frequency (left side of Fig. 4F). Next, there is a 7-m-thick complex of red and green marly limestones and marls with sporadic layers of grey limestones (right side of Fig. 4F). After a covered interval of a few meters, green spotted marls and marly limestones with radiolaria similar to the previous ones are exposed. Twenty
Further up in the section, another succession starts with pebbly mudstone that displays an erosional lower boundary. The mudstone contains sporadic Aptychus sp. and passes upwards into bioclastic limestones (Fig. 4D), thin pelitic limestones (Fig. 4E), bioclastic which are commonly present in higher paleolatitudes (e.g. Pessagno and Blome 1986;Kiessling 1999) and which are a characteristic component of radiolarian microfauna in the epicontinental seas that bordered the Tethys to the north (e.g. Górka and Bąk 2000). The studied assemblages rather represent the Northern Tethyan province (according to the paleogeographic model of Pessagno et al., 1984) based on the lack of pantanellids and 'Ristola-type' parvicingulids. Among the 10 samples that yielded identifiable radiolarian specimens, five samples (2S, 3S, 4S, 10AS and 11AS) can be correlated with the Unitary Association Zones (UAZs) defined by Baumgartner et al. (1995). In these samples, the representatives of the genera Parvicingula, Podobursa, Sethocapsa, Stichocapsa and Williriedellum are most common ( Table 1). The radiolarian assemblage in samples 2S and 3S can be assigned to UAZ9 of Baumgartner et al. (1995) (middle-upper Oxfordian), based on the co-occurrence of Orbiculiforma mclaughlini Pessagno and Parvicingula boesii (Parona). The species O. mclaughlini has a very restricted stratigraphical range and has its final occurrence during UAZ9. This event coincides with the lowermost occurrence of P. boesii, which took place within UAZ9 in Tethyan settings (e.g. Baumgartner et al., 1995;Bąk et al., 2018a).
The radiolarian assemblage in sample 4S is the best preserved and thus the most diversified among all samples studied. The stratigraphical range of this sample can be assigned to UAZ10 (lower Kimmeridgian) based on the co-occurrence of Obesacapsula cetia (Foreman), Tetraditryma corralitosensis s.l. (Pessagno) and Tritrabs casmaliaensis (Pessagno). Obesacapsula cetia starts in UAZ10, while T. corralitosensis and T. casmaliaensis have their uppermost occurrence in this zone.
The poorly preserved radiolarian assemblage in samples 10AS and 11AS shows high abundances; however, a few identifiable specimens allow correlating both samples still with UAZ10. Tetraditryma corralitosensis and T. casmaliaensis are not present herein, but the correlation is possible based on the co-occurrence of O. cetia (Foreman) with Higumastra inflata Baumgartner, a species that has its lowermost occurrence in the lower Kimmeridgian.
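The correlation logic used throughout this section amounts to intersecting the known UAZ ranges of co-occurring species. The Python sketch below illustrates this for sample 4S; only the constraints stated in the text (O. cetia first appearing in UAZ10, the other two species last appearing in UAZ10) are meaningful, while the remaining range endpoints in the dictionary are illustrative placeholders rather than published values.

```python
# Unitary Association Zones spanned by each species as (first, last), following
# the zonation of Baumgartner et al. (1995); endpoints not stated in the text
# are placeholders for illustration only.
SPECIES_RANGES = {
    "Obesacapsula cetia": (10, 13),            # first occurrence in UAZ10 (stated)
    "Tetraditryma corralitosensis": (7, 10),   # last occurrence in UAZ10 (stated)
    "Tritrabs casmaliaensis": (6, 10),         # last occurrence in UAZ10 (stated)
}

def assign_uaz(taxa, ranges=SPECIES_RANGES):
    """Chronostratigraphic assignment by intersecting the UAZ ranges of
    co-occurring species: the sample age is the overlap of all ranges."""
    lows, highs = zip(*(ranges[t] for t in taxa))
    low, high = max(lows), min(highs)
    if low > high:
        return None   # incompatible ranges: possible reworking or misidentification
    return (low, high)

# Sample 4S: the co-occurrence pins the assemblage to UAZ10 (lower Kimmeridgian).
print(assign_uaz(["Obesacapsula cetia",
                  "Tetraditryma corralitosensis",
                  "Tritrabs casmaliaensis"]))
```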
Taxa identified in samples 1S, 5S, 6S, 13AS and 300S are very scarce, and thus these samples can be only widely correlated with the UAZs of Baumgartner et al. (1995). Sample 1S yielded very common radiolarians. However, only three specimens could be identified. This sample can be correlated with UAZ10-13 based on the co-occurrence of Podobursa spinosa (Ožvoldová) and O. cetia and, thus, be correlated with the lower Kimmeridgian to the Tithonian. Sample 5S contains seven identifiable species. Based on the co-occurrence of O. cetia, Pseudoeucyrtis reticularis Matsuoka and Yao, Williriedellum carpathicum Dumitrică and Williriedellum cristallinum Dumitrică, this sample can be assigned to the lower Kimmeridgian-Tithonian. The radiolarian assemblages in samples 6S meters upstream, there is a road bridge; small outcrops 50 m above the bridge contain grey micaceous, laminated, calcareous sandstones intercalated with grey marly mudstones and shales, probably representing Upper Cretaceous Buntmergelserie of the MKZ. Ten meters further upstream, typical Kaumberg Formation (Schnabel, 1992) is exposed, represented by green shales and thin-and medium-bedded greenish and grey sandstones. This interval already belongs to the Laab Nappe of the Rhenodanubian nappe system; the contact with the MKZ is not exposed.
The relation of the described succession to the light grey limestones and spotted limestones of Tithonian-Lower Cretaceous exposed in the old quarry at Gern (Egger and Wessely, 2014) is uncertain; probably these rocks are positioned above the spotted red and green marly limestones (Fig. 3). On top of those Tithonian-Lower Cretaceous limestones, strongly disturbed red shales and marls of Maastrichtian-Paleocene age (Ślączka, 2007), refolded with green and black shales, are exposed. Unfortunately, strong deformations did not allow the reconstruction of a reliable relation between all these shales of the Buntmergelserie, but the base seems to be unconformably above the limestones. However, Prey (in Plöchinger and Prey, 1993) supposed a sedimentary contact between the limestones and the Buntmergelserie.
Radiolarian assemblage
The radiolarian fauna at Gern comprises a major component of the siliceous limestones studied. Radiolarian species are present in 10 thin sections and 10 extracted rock samples. Isolated specimens are common, but, in general, very poorly preserved, with small numbers of identifiable taxa. Some specimens underwent intensive recrystallization and/or substitution by calcite. All recognized specimens are classified into 27 species including two taxa with open nomenclature. Taxa identified are listed in Table 1, and selected species are presented in Figure 7. Nassellarians are the main components of the radiolarian assemblage (about 70% of all specimens). Most of the recognizable specimens belong to the families Sethocapsidae, Syringocapsidae and Williriedellidae. Spumellarians represent mostly three-and four-armed morphotypes from the families Emiluviidae, Hagiastridae and Patulibracchiidae.
The radiolarian assemblage represents a lowlatitude Tethyan fauna (e.g. Baumgartner et al., 1995) well recognized from Middle Jurassic-Lower Cretaceous deposits in many regions of the western part of the Tethys (e.g. in the Western Carpathians: Ožvoldová and Frantová, 1997;Michalik et al., 2008;Bąk et al. 2017;Ultrahelvetic Zone: Ślączka et al, 2009). The assemblage is characterized by the total absence of orbiculiformids, structure. Tectonic blocks of indurated hard klippencore rocks, such as limestones, sandstones and cherts, show sizes ranging from centimeters to several tens of meters. The matrix consists of strongly deformed and sheared fine-grained pelitic rocks, such as Jurassic and Lower Cretaceous shales and marls; however, Upper Cretaceous pelitic rocks of Rhenodanubian nappe system origin (Hütteldorf Formation, Kahlenberg Formation) are also tectonically mixed into the matrix. No primary sedimentary contact of the Rhenodanubian nappe system with the SVK could be found.
The composite SVK succession recorded and correlated with the reported outcrops (Janoschek et al., 1956;Prey, 1975) includes the following stratigraphic units: (1) coarse quartz-rich sandstones (Keuper of probably Norian age); (2) fossiliferous grey limestones (Rhaetian, Kössen Formation); (3) sandy-silty grey marlstones and limestones with crinoids (Lower/Middle Jurassic, partly Hohenauer Wiese and 300S range from the middle Callovian to the Tithonian. In both samples, the correlation is based on the cooccurrence of W. cristallinum and Parvicingula mashitaensis Mizutani. The radiolarian assemblage in sample 13AS is assigned to UAZ9-10 (middle Oxfordian to lower Kimmeridgian) based on the co-occurrence of Podobursa quadriaculeata (Steiger) -which first appeared in UAZ9 -and species that have their highest appearance within UAZ10 such as H. inflata, Paronaella kotura Baumgartner and T. casmaliaensis.
Lithology
Rocks attributed to the SVK were found in a 1097-m-long section within the Lainz Tunnel. The rocks exposed in the tunnel comprise a 'klippen-type' block-in-matrix
The chronostratigraphic position of the radiolarian assemblages is again discussed in terms of the UAZ zones (Baumgartner et al., 1995). The ranges of the radiolarian species are juxtaposed in Tables 2 and 3. Red radiolarites and cherts of sample RS/1 from the tunnel shaft Veitingergasse yielded a radiolarian assemblage with Paronaella skowkonaensis Carter, Parvicingula dhimenaensis dhimenaensis Baumgartner, Parvicingula schoolhonsensis gr. Pessagno and Whalen, Quinquecapsularia megasphaerica Dumitrică and Baumgartner, Hiscocapsa funatoensis (Aita) and Yamatoum spinosum Takemura. Based on the co-occurrence of these specimens, the deposits could be assigned to the UAZ3 interval of Baumgartner et al. (1995), attributed to the lower/middle Bajocian.
Calcareous nannofossils
Only one sample of grey, sandy-silty, marly limestones from the SVK of the Lainz Tunnel (LT33/2666m, poor preservation, one nannofossil found in every two fields of view) yielded a biostratigraphically significant nannofossil assemblage, which can be assigned to the Jurassic (possible range: upper Toarcian-middle Oxfordian): Lotharingius cf. sigillatus (upper Pliensbachian?/lower Toarcian-middle Oxfordian), Discorhabdus sp. aff. D. striatus (Toarcian-upper Oxfordian) and Watznaueria sp. (first occurrence in upper Toarcian). This nannofossil assemblage indicates probably roughly a Middle Jurassic age. Two other samples from the grey marly limestones are also assigned tentatively to the Jurassic due to the occurrence of Discorhabdus sp. (LT33/2731m), Schizosphaerella sp.
near Scheibbs and Gresten (Decker, 1987; Widder, 1988). The upper part of the profile, represented by variegated coloured marly limestones with sporadic intercalation of thin bioclastic limestones, contains radiolarian assemblages (samples 5, 6 and 30), which suggest a general Kimmeridgian to Tithonian age. However, the occurrence of radiolarian assemblages from the upper Oxfordian-lower Kimmeridgian in the overlying units (samples 10 and 11) also indicates that the upper part is not younger than lower Kimmeridgian. This interval corresponds to the appearance of sporadic whitish bioclastic limestones, implying a correlation with the carbonate breccias of the Konradsheim Formation (mainly Oxfordian-Tithonian-Berriasian; Decker, 1987), as well as the transition to the Tithonian light grey limestones representing the Blassenstein Formation (Decker, 1987; Egger and Wessely, 2014) as exposed in the Gern quarry.
Gresten/Main Klippen Zone
Biostratigraphic investigations of the limestone package in the Gernbach stream profile indicate mainly chronostratigraphic positions of the lower part of the Upper Jurassic. The composition of the radiolarian assemblages (samples 2 and 3) shows that the lower and middle parts of the succession of radiolarian limestones with intercalations of thin radiolarites represent the middle-upper Oxfordian, whereas the upper part (sample 4) ranges already into the lower Kimmeridgian (Fig. 3). Probably, the dark cherts described by Gottschling (1966) south of Gern also belong to the same interval. This part of the succession can be correlated with the Scheibbsbach Formation (sandstone turbidites with siliceous limestones and marlstones) from the type area of the Gresten Klippen Zone
Table 3: Radiolaria assemblages from the SVK of a side shaft of the Lainz Tunnel at Veitingergasse near Gemeindeberg (samples RS), the Gemeindeberg outcrop (GEM), the Lainzer Tiergarten (outcrop samples MM08/65 and MM08/70) and Antonshöhe (MM08/28).
Lower-to-Middle Jurassic sandy-silty and marly grey limestones are both known from outcrops in the Lainzer Tiergarten as well as from the Lainz Tunnel section. A Middle Jurassic age could be ascertained herein by calcareous nannofossils, and tentatively, this interval is correlated with the Hohenauer Wiese Formation (Schnabel, 1997, 2002). However, the Bajocian/Bathonian position, as indicated by ammonites from the grey limestones of the type locality (the Hohenauer Wiese outcrop in the Lainzer Tiergarten), is in conflict with the early-to-middle Bajocian radiolarian age indicated for parts of the siliceous rocks from the Lainz Tunnel. These red radiolarites, cherts and shales, lithologically attributed to the Rotenberg Formation (Trauth, 1950; Decker, 1987, 1990), partly display a lower-to-middle Bajocian age based on radiolaria; this may indicate the presence of blocks of more than one facies type, both siliceous (deeper water) and carbonate (shallower water depths) types, within the SVK. This is a common feature of Klippen units, which is attributed to a tectonic melange derived from several geological units.
Above this interval, Callovian to Oxfordian siliceous limestones and cherts occur, followed by Upper Jurassic to Lower Cretaceous up to Barremian/Aptian mainly grey siliceous limestones with minor cherts, attributed to the Fasselgraben Formation (Trauth, 1950;Decker 1987Decker , 1990, a siliceous equivalent of the Blassenstein Formation of the MKZ. In contrast to former interpretations of the Triassic-Lower Cretaceous rocks of the SVK as the sedimentary substrate of the Upper Cretaceous flysch deposits of the Kahlenberg Nappe sensu Prey (1975) and Schnabel (2002), we interpret the Lainz Tunnel section as evidence for the presence of two independent structural, tectonically separated units: (1) the SVK, comprising a succession of Triassic-Lower Cretaceous rocks; and (2) Upper Cretaceous rocks of the overlying Rhenodanubian nappe system (Kahlenberg Nappe or Satzberg slice of Greifenstein Nappe). The relation of the Rhenodanubian nappe system to the SVK is clearly a tectonic superposition according to data from the Lainz Tunnel (Pfersmann, 2013). No evidence for a sedimentary contact between these two units could be detected in the tunnel section. Furthermore, the sampled rocks do not indicate any unknown intervening units, as speculated by Prey (1985: 'Klippen-Hüllflysch' or 'Buntmergelserie'). In contrast, a remarkable stratigraphic gap is ascertained between the youngest age of the Fasselgraben Formation of probably Barremian age (upper boundary ca. 125 Ma) to the oldest age of the surrounding formations of the Rhenodanubian nappe system (Hütteldorf Formation, probably Upper Albian to Cenomanian, lower boundary ca. 107 Ma; Wagreich, Wessely, 2014), and they show similarities to the Scheibbs Facies (Schnabel, 1979) of the Gresten Klippen Zone and the MKZ. However, based on the lack of pure radiolarites and on the merely sporadic occurrence of cherts, slightly shallower water depths are interpreted for these siliceous limestones. The appearance of re-deposited sediments in the higher part of the section implies a period of tectonic instability, which triggered slump movements along the slope of the sedimentary basin. The composition of the radiolarian assemblages within these limestones suggests that the depositional area was a part of the Northern Tethyan province. Black and variegated shales, which are in sharp contact with the underlying limestone, may correspond to the Cenomanian-Turonian sediments of the Buntmergelserie and to the red marlstone containing Upper Cretaceous foraminifera.
The succession described is composed of separate exposed blocks from a few up to tens of meters in diameter, which derive from the northern passive continental margin of the Penninic basin. These blocks may either be interpreted as tectonically detached slices or as re-deposited components in debris flow deposits. However, an alternative hypothesis that is becoming increasingly widespread is also worth mentioning: similar sedimentary bodies with scattered blocks are interpreted to represent a part of large re-deposited sedimentary units such as olistostromes (Festa et al., 2010), also in the northern Carpathians (e.g. Cieszkowski et al., 2009;Golonka et al., 2017). Naturally, in the poorly exposed Gern locality, there is not enough data available to solve this problem.
Another suggestion is the lithological and stratigraphic correlation of parts of the MKZ with the Hluk Formation (Stranik et al., 1995;Svabenická et al., 1997), from the inner part of the Magura Nappe (Bile Karpaty Unit). Consequently, following this interpretation, the MKZ has to be regarded as the remnant of sediments of a submarine swell within the Rhenodanubian Basin. However, one ought to remember that sandstones of the Gresten Formation, typical for the Alpine foreland and sourced from the European side outside the Alpine orogenic belt (Faupl, 1975;Nehyba and Opletal, 2016), are also present in the MKZ, indicating an external primary position for both the Gresten Klippen Zone and the MKZ and argue against an original position south of the Rhenodanubian Basin and the Penninic Ocean.
St. Veit Klippen Zone
Based on the stratigraphic results from the Lainz Tunnel and correlated outcrops, the SVK in the Lainz Tunnel displays the following stratigraphic units as derived from blocks within a sheared matrix: (1) Upper Triassic sandstones (Keuper; not biostratigraphically dated) and limestones ('Kössen Formation'); (2) Lower-Middle Jurassic grey sandy marlstones and limestones dated by ammonites (Trauth, 1930;Schnabel, 1997;Pfersmann, 2013) and calcareous nannofossils as Sinemurian to Bajocian/Bathonian; (3) Middle-Upper 2008). This suggests a stratigraphic gap of nearly 20 Ma. Taking this into account, a continuous succession of the SVK into the Rhenodanubian nappe system becomes strongly doubtful, and thus, an attribution of the SVK as a primary basement of parts of the Rhenodanubian Flysch units including the Kahlenberg Nappe (in the sense of Prey, 1979, andSchnabel, 2002) is highly uncertain.
In paleogeographic reconstructions and tectonic classifications (Schuster et al., 2014;Egger and Wessely, 2014), the SVK is correlated with the Ybbsitz (Klippen) Zone, a (South) Penninic ophiolite/deep-water unit similar to the Arosa Zone in the western part of the Eastern Alps. The complete lack of ophiolitic rocks, especially serpentinites as remnants of ultrabasic rocks, in the Lainz Tunnel (as compared to the Ybbsitz Zone), as well as the presence of Upper Triassic continental rocks of Keuper facies strongly argue against a primary position within the Rhenodanubian Basin as part of the Penninic Ocean (or Alpine Tethys). Consequently, a transitional position at the southern continental margin adjacent to the Penninic Ocean, i.e. a northern Austroalpine microplate position (corresponding in parts to the Lower Austroalpine facies belt of Tollmann, 1976), is inferred. The correlation of the SVK with the Drietoma Unit (Hók et al., 2009) of the peri-klippen part of the PKB (for details, see Wagreich et al., 2012;Pfersmann, 2013) further supports this hypothesis.
However, today's structural position of the Rhenodanubian nappe system of Penninic derivation on top of the SVK with a proposed marginal Austroalpine derivation has to be explained by large-scale out-of-sequence thrusting during Alpine orogenic wedge deformation, a feature that is presumed in several models of Alpine flysch wedge deformation (e.g. Mattern and Wang, 2008;Beidinger and Decker, 2014).
Lithostratigraphy
These new biostratigraphic data and arguments concerning the derivation of units also have implications for lithostratigraphic nomenclature. Both the Rotenberg Formation (type locality at Roter Berg, Ober St. Veit, southwest of Vienna; Trauth, 1930, 1950) and the Fasselgraben Formation (klippe in the Fasselgraben creek in the Lainzer Tiergarten; Trauth, 1950) have their type localities within the SVK in the southwestern part of Vienna. Nowadays, these names are used for lithostratigraphic units within the Ybbsitz Klippen Zone (Ožvoldová and Faupl, 1993; Schnabel, 2002). Referring to our data, neither the age nor the primary paleogeographic and tectonic positions of the siliceous rocks in the Ybbsitz Klippen Zone and the SVK coincide. Thus, using lithostratigraphic names originally defined in the SVK (Trauth, 1950) for units occurring in the Ybbsitz Klippen Zone is strongly misleading and should be abandoned in the future.
Conclusions
Biostratigraphic data from radiolarians and calcareous nannofossils constrain siliceous and radiolaria-bearing intervals in the MKZ (a part of the Gresten Klippen Zone) and the SVK Zone of the Wienerwald area to the west of Vienna.
The radiolarian limestones of the Gern locality indicate a middle Oxfordian to lower Kimmeridgian chronostratigraphic position and can be correlated with the siliceous, but more turbiditic, Scheibbsbach Formation of the Gresten Klippen Zone. A northern and orogen-external derivation from the European passive margin is probable for this unit, in accordance with the paleogeographic position inferred for the Gresten Klippen Zone.
Radiolarian and nannofossil data for the SVK Zone from the Lainz railway tunnel and correlated outcrops from the Lainzer Tiergarten and other outcrops in the southwest of Vienna indicate an Early Jurassic age for the grey limestones, as well as an early-to-middle Bajocian to early Oxfordian age for the red radiolarites and cherts. This constrains the age of the Rotenberg Formation at the type area in Vienna. Siliceous limestones and cherts are present upwards, from the Upper Jurassic (upper Oxfordian-Kimmeridgian) up to the typical grey pelagic limestones of the Fasselgraben Formation of the Lower Cretaceous, indicating Valanginian-Hauterivian to Barremian/lower Aptian ages mainly.
As no remnants of ophiolitic rocks could be found in the tunnel, and no continuous succession into the tectonically overlying Kahlenberg Nappe (or Satzberg slice) could be determined either, a direct correlation of the SVK Zone with the Ybbsitz Zone is not supported. The SVK and the units of the Rhenodanubian nappe system thus comprise two structurally separated units of the Alpine orogenic wedge. However, a transitional position of the SVK Zone from the Penninic Ocean to the Austroalpine continental fragment at the northern margin of the Austroalpine microplate is inferred, which is supported by the occurrence of lower-middle Bajocian radiolarites, similar to stratigraphic data reported for the Northern Calcareous Alps. Thus, the lithostratigraphic units such as Rotenberg Formation and Fasselgraben Formation, which were originally defined in the SVK Zone, cannot be used for units occurring in the Ybbsitz Klippen Zone.
Acknowledgements: This study was supported by a cooperation project OMV AG-University of Vienna (FA 536006: 'Stratigraphy and lithofacies inventory of the St. Veit Klippen Zone and the Flysch units from the Lainzer Tunnel, Vienna') and by cooperation with ÖBB Infrastruktur GmbH and geological bureaus (bf:gh zt-gmbh Dr. Bechthold, BGG Consult Dr. Peter Waibel ZT-GmbH). We thank A. Lukeneder for cephalopod determinations, as well as Hans Egger and Roman Aubrecht for their critical and detailed reviews.
"year": 2018,
"sha1": "4864b39f8e7bc085608a3fc177ee2bcb97bb6172",
"oa_license": null,
"oa_url": "https://doi.org/10.17738/ajes.2018.0013",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f9ea14748bcedfd7d7eab0f44c41d5814dd57601",
"s2fieldsofstudy": [
"Geography",
"Geology",
"Environmental Science"
],
"extfieldsofstudy": [
"Geology"
]
} |
Increased Intra-Individual Variability as a Marker of Executive Dysfunction in Generalized Anxiety Disorder
Recent studies suggest that individual difference in intra-individual variability (IIV) of reaction times is an important indicator of attentional executive control. However, there are few existing studies on the executive control of high trait-anxious individuals assessed by using reaction time variability. This study assessed whether executive functions are impaired among clinical and non-clinical trait-anxious individuals indicated by IIV. The cross-reliability and discriminative power of three IIV parameters (raw intra-individual standard deviation, SD; reaction time coefficient of variation, RTCV; and mean absolute deviation, MAD) were compared. Twenty-five non-clinical individuals with low trait anxiety (LTA), 31 non-clinical individuals with high trait anxiety (HTA), and 19 clinical patients diagnosed with generalized anxiety disorder (GAD) finished self-reported measures, an emotional spatial-cuing task, and a non-emotional arrow flanker task. In the emotional task, GAD patients had significantly slower response speed, lower accuracy, and greater IIV parameters than the LTA and HTA groups. In the non-emotional task, the GAD group exhibited poorer processing efficiency, greater SD and RTCV, and intact performance effectiveness. RTCV is suggested to be a better marker of executive dysfunction than SD and MAD due to its good discriminative power and reliability as well as less affected by reaction times.
INTRODUCTION
Intra-individual variability (IIV) of reaction times (RTs) refers to the short-term changes or fluctuations in an individual's performance during a task (1). Recent studies suggest that individual difference in IIV is an important indicator of attention and cognitive control (2). It is well-known that individual sensitivity is periodically strengthened and weakened, i.e., attentional fluctuations. Apart from being affected by fatigue and distracting stimuli, attentional fluctuations are more affected by intrinsic neuronal activity, which is considered to be the main factor contributing to IIV (3). As a psychological noise, IIV of RTs reflects individuals' ability of sustained attention. Larger IIV means that task manipulation is characterized by poorer response efficiency, more attention errors, inability to maintain attention to a given task for a long time, or more maladaptive behaviors (4)(5)(6). This instability in cognitive control can show up over time in the changes of RTs across trials (7). It has been found that IIV is superior to mean RT, operational errors, omission errors, and success inhibition ratio (4,8) in measuring executive control function. Researchers have examined the psychometric characteristics of IIV to test its stability and reliability. Ode et al. found that the correlation coefficient of IIV was 0.68 with an intervening time interval of at least 2 weeks (5). In the same year, Saville et al. found that IIV of RTs showed good test-retest and odd-even reliability (9). These studies provide strong evidence that IIV is a relatively stable trait. Furthermore, it is also found that IIV represents a unitary construct, and it exhibits good consistency across not only different cognitive tasks, but also different sensory modalities (10).
The study of IIV was first initiated among individuals with attention deficit hyperactivity disorder (ADHD). In a meta-analytic review based on 319 studies (11), researchers found that children and adults with ADHD exhibit increased IIV relative to non-clinical groups. Furthermore, IIV reflects a stable feature of ADHD and other clinical disorders, which is robust to systematic differences across a wide range of tasks. An age-related decrease in the efficiency of executive control can result in an increase in performance variability in task conditions requiring the recruitment of executive control processes (12). Increased trial-to-trial IIV was highlighted among different types of clinical samples, such as ADHD (4), depressive disorder (13), and post-traumatic stress disorder (PTSD) (14). Swick et al. found that veterans diagnosed with PTSD exhibited greater response variability due to their deficits in sustained attention and top-down cognitive control processes, which, in turn, strengthened the maintenance of PTSD symptoms (7). Individuals with head injuries also demonstrated greater variability than healthy controls (15,16).
At present, increased IIV of RTs is also found among non-clinical examples, such as individuals with greater decline in perceptual speed and ideational fluency (17), cognitive aging (12,18), and poorer physical performance (19). IIV measures were revealed to be closely related to falls and gait in the elderly (20). A study on community-dwelling adults showed that older and more depressed adults exhibited greater IIV, but not for mean RT in the visual search task (21). Based on the perceptual speed study, Papenberg et al. insisted that increased IIV can predict cognitive impairment in the elderly (22). Five empirical studies conducted by Ode et al. among college students revealed that individuals with higher variability exhibited less effective cognitive control and less controlled behaviors and were more prone to negative emotional experiences and depressive symptoms (5). Moreover, IIV is considered to be a unique cognitive function indicator independent of accuracy, and it is closely related to children's externalizing problems (6).
Although the close relationship between IIV and cognitive impairments is widely known, it still remains unknown whether this association exists in high-anxious individuals. Increasing evidence lends support to the notion that anxiety is associated with performance impairments on numerous tasks. According to attentional control theory (23,24), anxiety impairs the efficiency of executive functions, affecting both performance effectiveness (the quality of performance, generally indexed by response accuracy) and processing efficiency, the latter of which can be inferred from differences in reaction time (23). Furthermore, compared with performance effectiveness, processing efficiency may be more susceptible to anxious symptoms (24). Individuals with high trait anxiety may use more top-down control resources to compensate for their reduced processing efficiency to achieve the same level of behavioral performance as non-clinical individuals. Accordingly, this study examined the characteristics of attentional executive control indicated by IIV among individuals with different levels of trait anxiety based on clinical and non-clinical samples.
Currently, there are several measures of IIV, such as the raw intra-individual SD, RTCV, MAD, residualized intra-individual SDs (ISD) using a regression procedure, the µ parameter of the ex-Gaussian distribution, the mean reciprocal RT (RecipMRT), and the µ parameter of the LATER model. However, it still remains inconsistent concerning which measure of IIV is most stable and reliable. Among these parameters, raw SD represents the discrete degree of individual RT, which is easy to calculate and understand. RTCV adjusts the raw intraindividual SD by intra-individual mean RT (20), which avoids the influence of mean RT. MAD is the mean deviation from the mean RT, which can better reflect the actual deviation of individual performance. These three parameters measure the spread of observations around the mean, irrespective of the direction of the deviation (9). Lövdén et al. report that RTCV produced the most stable and robust solutions (17), and Saville et al. find that SD and MAD were superior to other measures (9). Therefore, in this study, we compared these measures (SD, MAD, and RTCV) to explore the most stable and reliable indicator.
Previous studies show that individuals reporting more anxious symptoms performed significantly worse in threat-related tasks than those who had fewer anxious symptoms (25), whereas high-anxious individuals usually exhibited a level of performance comparable to low-anxious individuals in tasks with neutral stimuli or low cognitive load (23,26). Accordingly, to examine whether there are differences in performance for individuals with different levels of trait anxiety, an emotional spatial-cuing task with threat-related stimuli and a non-emotional arrow flanker task with neutral stimuli were conducted in our study. Another focus of this study is whether IIV can distinguish high-anxious individuals from low-anxious ones due to its relative independence of accuracy. We speculate that high-anxious individuals may exhibit greater IIV than the controls in both the emotional and the non-emotional tasks. Taken together, the aims of the present study are, therefore, two-fold. First, the executive control functions indicated by IIV among individuals with different levels of trait anxiety based on clinical and non-clinical samples (patients diagnosed with GAD from hospitals, healthy persons with low and high trait anxiety) were assessed in emotional and non-emotional tasks. Second, the across-task reliabilities and independence from RTs of different IIV parameters (SD, MAD, and RTCV) were compared in cognitive tasks.
Participants and Procedures
This study was approved by the Ethics Committee of Army Medical University, Chongqing, China. Experiments were conducted in three groups: non-clinical persons with LTA (the LTA group), non-clinical persons with HTA (the HTA group), and high trait-anxious patients diagnosed with GAD (the GAD group). The LTA and HTA groups were recruited from the local community by advertisement. Patients with GAD were recruited from the outpatient clinic of Xinqiao Hospital and Daping Hospital in Chongqing, China.
The classification criteria of trait anxiety depended on the results of pre-experiments in 1,539 healthy persons (27). Nonclinical participants with lower trait anxiety scores (≤33) were assigned to the LTA group, and the ones with higher trait anxiety scores (≥40) were assigned to the HTA group. All participants met criteria as follows: (1) able to read and understand the questionnaire; (2) had normal or corrected normal vision; (3) no evidence of substance abuse or dependence in the past 3 months. Besides that, the LTA and HTA individuals had no mental and cognitive disorders or brain injury. Patients with GAD were first-episode outpatients diagnosed by two licensed clinical psychologists. Psychiatric diagnoses were confirmed by using the Chinese version of the Mini-International Neuropsychiatric Inventory (28,29). All GAD individuals had not taken psychiatric drugs in the last 3 months and had no treatment of rTMS or electroconvulsive therapy in the past 6 months. Importantly, given past findings that changes in IIV are associated with depressive symptoms and the high comorbidity of GAD and depression, clinically depressed patients and individuals with history of depression were excluded in the current study. Our study recruited 25 LTA individuals, 31 HTA individuals, and 19 patients with GAD. All participants completed written informed consent after a detailed explanation of aims and procedures of this study. All participants in the current study are Han nationality. Non-clinical participants completed questionnaires in the laboratory or classrooms, and GAD patients completed questionnaires in the hospital. Then, we analyzed the data and selected qualified participants. Individuals who met the inclusion criteria were invited to enter the laboratory and completed experimental tasks on computers within 1 week after qualification screening.
Materials and Tasks
Self-Report Measures
Age, gender, highest level of education (1 = "less than high school", 2 = "completed high school", 3 = "college or bachelor's degree", 4 = "master's or doctoral degree"), and past history of disease were reported. The trait subscale of Spielberger's State-Trait Anxiety Inventory (STAI_T) (30, 31) was used to measure the level of trait anxiety. This subscale consists of 20 items that indicate individuals' tendency to perceive stressful situations as dangerous or threatening. Internal consistency was Cronbach's alpha = 0.938 for the STAI_T in the current study.
Emotional Cognitive Task
The emotional spatial-cuing paradigm (32,33) was adopted in our cognitive tasks. The target stimuli were presented on the left or right side of a fixation point. In half of the trials, a cue precedes the target at the same location ("valid cues"), whereas in the other half of trials, the cue is presented on the opposite side from the target ("invalid cues"). Half of the cues are neutral pictures (for example, a marble sculpture) and the others are negative ones (for example, a snake). There are 12 neutral and 12 negative pictures chosen from the standardized native Chinese Affective Picture System (CAPS) (34). Scores of pleasure, arousal, and dominance for all pictures, rated on a 1-9 scale (1 = "not at all" and 9 = "very high"), can be obtained from the CAPS. In this study, pleasure scores for all negative pictures were <2.5, and pleasure scores for neutral pictures were between 4.5 and 5.5. Meanwhile, arousal scores were matched between neutral and negative pictures. Neutral pictures had significantly higher levels of pleasure [t (22) = 24.17, p < 0.001] and dominance [t (22) = 11.09, p < 0.001] than negative pictures, and no significant difference was found in arousal between neutral and negative pictures [t (22) = 0.21, p = 0.391]. The sequence of events in each trial was set according to the previous literature (32,33). At the beginning of each trial, a fixation cross was presented in the center of the display together with two peripheral boxes, one on the left side and one on the right side. After 500 ms, a negative or neutral cue picture was shown for 400 ms in the left or right peripheral box (50% probability). No response was required to the cue picture. Subsequently, to reduce the expectation effect, after a gap of either 50 or 800 ms the target appeared at the center of one of the two peripheral boxes. Participants were instructed to press the "f" key as soon as they spotted the target on the left side and the "j" key when it appeared on the other side. Stimuli were presented in a pseudo-random order and remained on the screen until the individual responded. The interstimulus interval was 1,000 ms. In total, the experiment had 176 trials, including 16 practice trials and 160 experimental trials.
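To make the trial structure concrete, the sketch below builds a pseudo-randomized trial list with the factors described above (cue valence, cue side, cue validity, and the 50 ms or 800 ms cue-target gap). It is a schematic illustration only: the constant names, the counterbalancing scheme, and the use of Python are assumptions, not taken from the original experiment script.

```python
import random

# Factor levels taken from the task description above; picture files are not modeled here.
CUE_VALENCE = ["neutral", "negative"]   # 12 CAPS pictures per category in the actual task
CUE_SIDE = ["left", "right"]            # cue appears in the left or right peripheral box
VALIDITY = ["valid", "invalid"]         # valid: target at cue location; invalid: opposite box
GAPS_MS = [50, 800]                     # cue-target gap used to reduce expectation effects

FIXATION_MS = 500                       # fixation cross plus peripheral boxes
CUE_MS = 400                            # cue picture duration
ITI_MS = 1000                           # inter-stimulus interval

def build_trials(n_repeats=10, seed=1):
    """Return a pseudo-randomized list of trial dictionaries.

    With 2 valences x 2 cue sides x 2 validities x 2 gaps = 16 cells,
    n_repeats=10 gives the 160 experimental trials mentioned above.
    """
    rng = random.Random(seed)
    trials = []
    for valence in CUE_VALENCE:
        for cue_side in CUE_SIDE:
            for validity in VALIDITY:
                for gap in GAPS_MS:
                    for _ in range(n_repeats):
                        target_side = cue_side if validity == "valid" else \
                            ("right" if cue_side == "left" else "left")
                        trials.append({
                            "cue_valence": valence,
                            "cue_side": cue_side,
                            "validity": validity,
                            "gap_ms": gap,
                            "target_side": target_side,
                            # response mapping from the instructions: 'f' = left, 'j' = right
                            "correct_key": "f" if target_side == "left" else "j",
                            "fixation_ms": FIXATION_MS,
                            "cue_ms": CUE_MS,
                            "iti_ms": ITI_MS,
                        })
    rng.shuffle(trials)  # pseudo-random presentation order
    return trials

if __name__ == "__main__":
    trials = build_trials()
    print(len(trials), "experimental trials")  # 160
    print(trials[0])
```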
Non-emotional Cognitive Task
Another cognitive task in the present study was the nonemotional arrow flanker task (35). Stimuli were presented in white against a black background on a computer screen. In each trial, the target arrow (1.05 × 1.37 • ) was surrounded by two horizontally arranged arrows on right and left sides: < < < < <, > > < > >, > > > > >, or < < > < < (25% probability, respectively). Participants were instructed to respond to the central target arrow by pressing a spatially compatible key on the computer keyboard ("f " or "j") with their left or right index finger, respectively. At the beginning of each trial, a fixation cross displayed on the center of the screen for 500 ms. After that, the fixation cross was replaced by the stimulus. The entire stimulus array remained on the screen until the individual responded. Participants were encouraged to respond to the stimuli as quickly and accurately as possible. A varying interstimulus interval was set between 800 and 1500 ms. In total, the experiment had 217 trials, including 25 practice and 192 experimental trials. All the trials were presented in a pseudo-random order. The emotional and non-emotional cognitive tasks were performed in a standard ABBA sequence among participants.
Data Analysis
The data were statistically analyzed using SPSS software version 20. First, the accuracy was calculated under different experimental conditions. Then, trials with wrong responses were removed from the data. Mean and standard deviation of RTs (MRT, SD, respectively) were computed. Accordingly, MAD and RTCV were also obtained. Specifically, MAD is the mean value of absolute deviation from the mean across all trials. Log unit scores of RT (LSRTs) were also computed across all trials for each individual, and then RTCV was obtained by dividing standard deviation of LSRTs with their mean value (5). SD, MAD, and RTCV for all trials with correct responses in the emotional and non-emotional cognitive tasks were computed separately for each individual.
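A minimal sketch of how the three IIV parameters described above can be computed from one participant's correct-trial RTs; the example RT vector is invented for illustration, and the RTCV definition follows the text (standard deviation of log-transformed RTs divided by their mean).

```python
import numpy as np

def iiv_parameters(rts_ms):
    """Compute raw SD, MAD and RTCV from a vector of correct-trial RTs (in ms)."""
    rts = np.asarray(rts_ms, dtype=float)
    mean_rt = rts.mean()
    sd = rts.std(ddof=1)                          # raw intra-individual SD
    mad = np.abs(rts - mean_rt).mean()            # mean absolute deviation from the mean RT
    log_rts = np.log(rts)                         # log unit scores of RT (LSRTs)
    rtcv = log_rts.std(ddof=1) / log_rts.mean()   # SD of LSRTs divided by their mean
    return {"mean_rt": mean_rt, "SD": sd, "MAD": mad, "RTCV": rtcv}

if __name__ == "__main__":
    # Hypothetical RTs (ms) for the correct trials of one participant.
    example_rts = [412, 538, 471, 625, 390, 502, 447, 580, 433, 515]
    print(iiv_parameters(example_rts))
```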
To compare executive control function among the LTA, HTA, and GAD groups, a series of univariate analyses was conducted with group as the independent variable and mean RT, accuracy rate, and the three IIV parameters as dependent variables, respectively. To compare the across-task reliabilities of the three IIV parameters and their independence from RTs, partial correlation analyses were performed among mean RT, accuracy, SD, MAD, and RTCV in the emotional and non-emotional tasks, after controlling for sociodemographic variables. The significance level was set at p < 0.05 in our study.
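As a sketch of this analysis pipeline (group comparison with age, gender and education as covariates, plus a partial correlation between two IIV measures), assuming a data frame with one row per participant. The column names, the synthetic data, and the statsmodels-based approach are illustrative assumptions; the original analyses were run in SPSS.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def group_comparison(df, dv):
    """Univariate analysis of one dependent variable with group as factor
    and age, gender and education level as covariates (ANCOVA-style)."""
    model = smf.ols(f"{dv} ~ C(group) + age + C(gender) + education", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)

def partial_corr(df, x, y, covariates=("age", "gender_num", "education")):
    """Partial correlation between x and y controlling for covariates,
    obtained by correlating the residuals of two covariate-only regressions."""
    Z = sm.add_constant(df[list(covariates)])
    rx = sm.OLS(df[x], Z).fit().resid
    ry = sm.OLS(df[y], Z).fit().resid
    return np.corrcoef(rx, ry)[0, 1]

if __name__ == "__main__":
    # Small synthetic data frame purely to show the expected layout.
    rng = np.random.default_rng(0)
    n = 75
    df = pd.DataFrame({
        "group": rng.choice(["LTA", "HTA", "GAD"], n),
        "age": rng.integers(18, 46, n),
        "gender": rng.choice(["male", "female"], n),
        "education": rng.integers(1, 5, n),
        "RTCV_T1": rng.normal(0.05, 0.01, n),   # RTCV in the emotional task
        "RTCV_T2": rng.normal(0.05, 0.01, n),   # RTCV in the non-emotional task
    })
    df["gender_num"] = (df["gender"] == "female").astype(int)  # numeric coding for partial_corr
    print(group_comparison(df, "RTCV_T1"))
    print("across-task partial r:", partial_corr(df, "RTCV_T1", "RTCV_T2"))
```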
Demographics and Self-Report Data
The ages of the 75 participants ranged from 18 to 45 years (mean = 24.17, SD = 6.25). Among them, 72% were men, and 28% were women. As listed in Table 1, further analyses indicated that the GAD group reported significantly higher levels of trait anxiety than the LTA and HTA groups (ps < 0.001). In addition, the HTA group exhibited significantly higher levels of trait anxiety than the LTA group (p < 0.001).
Comparisons in Executive Control Function Among Three Study Groups
Controlling for age, gender, and education level, univariate analyses were conducted with group as the independent variable and mean RT, accuracy, MAD, RTCV, and SD as the dependent variables for the emotional and non-emotional tasks, respectively. Descriptions of mean RT, accuracy, MAD, RTCV, and SD in each cognitive task are presented in Table 2. In the emotional cognitive task, results show that there were significant differences among the three groups in RT [F (2,68) = 3.512, p =.035, η p 2 = 0.094] and in correct rate [F (2,68) = 14.418, p < 0.001, η p 2 = 0.295]. Post hoc analysis suggests that the GAD group had significantly slower response speed and lower accuracy than the LTA and HTA groups (ps < 0.001), and there was no difference between the other two groups (ps > 0.05). In the non-emotional cognitive task, a significant difference was found among these three groups in RT [F (2,68) = 4.454, p = 0.0145, η p 2 = 0.116], but not in accuracy [F (2,68) = 0.445, p = 0.643, η p 2 = 0.013]. Post hoc analysis also found that the GAD group had significantly slower response speed than the LTA and HTA groups (ps < 0.001).
Comparisons in Across-Task Reliabilities Among MAD, RTCV, and SD
Controlling for age, gender, and education level, partial correlation analyses were performed among MAD, RTCV, and SD in the emotional and non-emotional cognitive tasks to examine across-task reliabilities of three IIV parameters. As listed in Table 3, SD had the best across-task reliability (r = 0.62, p < 0.01), followed by RTCV (r = 0.53, p < 0.01), and the worst is MAD (r = 0.39, p < 0.01). Partial correlation analyses were also conducted among MAD, RTCV, SD, and mean RT. It was found that RT had stronger correlations with SD (r = 0.84 for the emotional task, r = 0.88 for the non-emotional task) and MAD (r = 0.87 for the emotional task, r = 0.76 for the nonemotional task) than with RTCV (r = 0.51 for the emotional task, r = 0.64 for the non-emotional task), which revealed that RTCV was more independent from RT than MAD and SD. Besides that, controlling for sociodemographic variables, trait anxiety was significantly related to MAD, RTCV, and SD (ps < 0.05) both in the emotional and non-emotional tasks.
DISCUSSION
In the present study, patients with GAD demonstrated longer RTs for both the emotional (presented in the emotional spatial-cuing task) and non-emotional conditions (presented in the non-emotional arrow flanker task) compared with the non-clinical groups, indicating that GAD patients showed impaired processing efficiency as indexed by RT (36). This is consistent with previous findings that anxiety can impair the top-down executive control needed to ignore task-irrelevant information (37,38). In contrast, this phenomenon was not found in the HTA group from the non-clinical population. A recent study by Yu et al. (27) also found significant differences in processing efficiency between the HTA and GAD groups. Therefore, it is necessary to further examine the existing opinion that features
of executive control in subclinical samples can be extended to the corresponding clinical disorder (39). For performance effectiveness (i.e., the quality of performance), patients with GAD exhibited lower correct rates than the other two groups in the emotional rather than the non-emotional task. This likely implied that the influence of trait anxiety on executive function is more pronounced under threat-related conditions. These results also support attentional control theory, suggesting that elevated anxiety may not impact cognitive functions in the absence of threat or substantial cognitive load (23,26). It should be noted that the mean correct rates for all groups in arrow flanker task are more than 99%. Therefore, the inference about intact performance effectiveness for patients with GAD could only be applied to tasks with neutral stimuli and low cognitive load (40). Nevertheless, our findings also lent to strong support for the recent viewpoint that patients with GAD indeed exhibited obvious impaired executive control compared with non-clinical individuals (27).
In this study, analyses of variance revealed that patients with GAD and non-clinical persons could overall be effectively discriminated by MAD, RTCV, and SD in the emotional spatialcuing task, and they could be effectively discriminated by RTCV and SD in the non-emotional arrow flanker task. Our results indicate that increased IIV in GAD is primarily related to inefficient prefrontal neural processing (41,42). This is also consistent with the finding that higher levels of executive control result in lower levels of IIV (43). Discriminative power of MAD is unstable, which may lay in the fact that executive control functions of different anxious groups are more difficult to distinguish under the low cognitive load condition. The applicability of the current results needs further verification in difficult tasks requiring more cognitive resources.
Our study demonstrates, for the first time, a link between behavioral IIV and GAD. These data are in support of the viewpoint proposed by Bellgrove et al. that abnormally increased variability is an important index of disorders of executive/attentional control (44). The results based on patients with GAD in our study are also quite in accordance with the findings across studies that indicate IIV is associated with cognitive impairment in other psychiatric disorders (13,14,36). These findings may help to understand the associations of anxiety and attentional control. Furthermore, our results demonstrate that SD and RTCV have better across-task reliabilities than MAD. Partial correlation analyses among MAD, RTCV, SD, and mean RT suggest that compared to RTCV, SD and MAD are more easily affected by mean RT. Accordingly, RTCV is recommended as an excellent measure of IIV in this study. However, our results are inconsistent with the viewpoint by Saville et al. that , mean absolute deviation for all trials in emotional cognitive task; MAD_T2, mean absolute deviation for all trials in non-emotional cognitive task; RTCV_T1, reaction time coefficient of variation in emotional cognitive task; RTCV_T2, reaction time coefficient of variation in non-emotional cognitive task; SD_T1, standard deviation of reaction times in emotional cognitive task; SD_T2, standard deviation of reaction times in non-emotional cognitive task; RT_T1, mean reaction time for all trials in emotional cognitive task; RT_T2, mean reaction time for all trials in non-emotional cognitive task; ACC_T1, accuracy in emotional cognitive task; ACC_T2, accuracy in non-emotional cognitive task.
SD or MAD is the best choice of parameter for measuring IIV (9). A possible explanation may lie in the fact that age, fluid intelligence, functional changes of frontal brain regions, and other factors have great impacts on cognitive performance (45). Ignoring the influences of these factors may lead to completely different conclusions. From the calculation method of the three parameters, MAD is most vulnerable to the influence of reaction time, followed by SD. Therefore, we still believe that RTCV is more reliable although it needs more evidence. Nevertheless, our research replicates the findings from Lövdén et al. (17), which state that cognitive variability may serve as an early warning of imminent cognitive decline. Haynes and colleagues also assert that measures of RT variability have considerable potential in clinical contexts as they may aid identification and diagnosis of a range of neurobiological disorders (46). Our results provide new evidence for this view. Compared with the LTA and HTA groups, patients with GAD showed increased IIV in the emotional spatial-cuing task and the non-emotional arrow flanker task. These findings afford empirical evidence for existing accounts suggesting that the increased IIV of RT is a stable feature of clinical disorders observed across diverse tasks and methods (11). Developing new tools for assessing executive control is important given the clinical relevance of this psychological construct in the development and maintenance of psychopathology.
It is worth noting that there are some shortcomings in this study. First, more difficult tasks requiring more cognitive resources should be adopted to further test whether non-clinical trait-anxious individuals exhibit abnormally increased IIV. Second, only three parameters (MAD, RTCV, and SD) were compared in the present study; more parameters should be considered to optimize the measurement of IIV. Third, considering that IIV is a relatively stable factor, only the impact of trait anxiety on participants' performance was examined; whether and how state anxiety affects performance was not explored, which is an interesting direction for future studies. Fourth, four participants in the GAD group had less than a high school education, compared with none in the LTA and HTA groups, so there might be sampling bias in this study. Finally, all participants in the current study were of Han nationality, and the small sample size may undermine the significance of group effects. Accordingly, a larger sample is needed to replicate these results in other cultural or ethnic contexts in future studies.
Notwithstanding these limitations, several key implications can be drawn to better understand the characteristics of executive/attentional control in highly anxious individuals. The increased variability observed in patients with GAD in the present study, combined with previous studies on ADHD, PTSD, depressive disorder, and other conditions, indicates that increased variability appears to be a crucial marker of clinical disorders observed across diverse tasks and methods. IIV represented by RTCV is found to be objective, reliable, feasible to operate, and less influenced by RTs. Therefore, it is expected to serve as a supplement to existing clinical assessments for distinguishing individuals with psychiatric diseases. Our study also suggests that IIV might serve as an indicator of treatment effects for psychiatric disorders. In future research, it is necessary to develop standard operating procedures for the measurement of IIV, with detailed guidelines and norms to interpret results in specific groups of people.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee of Army Medical University, Chongqing, China. The participants provided their written informed consent to participate in this study. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
YY and ML designed research. YY, HX, YX, and FL performed experiments and collected data. YY, HX, and YX analyzed data. YY, FL, and ML wrote and revised the paper. All authors contributed to the article and approved the submitted version.
"year": 2022,
"sha1": "5b05bb516ac50c05c7f640bd00f84d8d64c058c3",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyt.2022.532778/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "31b08e297501cb7ece193e8528a43510303685af",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Batch and Fixed-Bed Column Studies on Palladium Recovery from Acidic Solution by Modified MgSiO3
Effective recovery of palladium ions from acidic waste solutions is important due to palladium’s intensive usage as a catalyst for different industrial processes and to the high price paid for its production from natural resources. In this paper, we test the ability of a new adsorbent, MgSiO3 functionalized by impregnation with DL-cysteine (cys), for palladium ion recovery from waste solutions. The Brunauer–Emmett–Teller (BET) surface area analysis, Barrett–Joyner–Halenda (BJH) pore size and volume analysis, scanning electron microscopy (SEM), energy dispersive X-ray (EDX) spectroscopy and Fourier-Transformed Infrared (FTIR) spectroscopy have been performed to characterize this material. Firstly, the maximum adsorption capacity of the new obtained material, MgSiO3-cys, in batch, was studied. To establish the adsorption mechanism, the obtained experimental data were fitted using the Langmuir, Freundlich and Sips adsorption isotherms. Studies on the adsorption of palladium ions on the synthesized material were performed in a dynamic regime, in a fixed-bed column. The Pd(II) recovery mechanism in the dynamic column regime was established based on Bohart–Adams, Yoon–Nelson, Thomas, and Clark models. The obtained equilibrium adsorption capacity was 9.3 (mg g−1) in static regime (batch) and 3 (mg g−1) in dynamic regime (column). The models that best describe the Pd(II) recovery process for batch and column adsorption are Sips and Clark, respectively.
Introduction
The noble metals platinum, palladium and rhodium have a wide application range based on their distinct physical and chemical properties [1][2][3][4][5]. One of the first historical uses of the precious metals was as currency, internationally recognized under ISO 4217. Palladium and its alloys are currently used by the telecommunication and automotive industries (as catalytic converters), the metallurgy and chemical industries, for jewelry manufacturing and in the medical field (dental alloy production) [5][6][7][8][9].
Its growing popularity led, in 2010, to official recognition as the fourth most precious metal, after gold, silver and platinum, a statute that requires the marking of each jewel. White gold contains variable amounts of palladium (up to 20%); furthermore, dental alloys can contain up to 10% palladium [5,[10][11][12][13][14][15].
The purpose of this study was to develop an ecological strategy for Pd(II) recovery using (florisil) MgSiO3 functionalized by impregnation with DL-cysteine (cys) [49]. This new material has -SH, -NH2, and -COOH pendant groups derived from the amino acid DL-cysteine [54,55].
The first objective was to test the new MgSiO3-cys material's ability to recover palladium ions from waste solutions by adsorption. The second goal of this research was to compare the static adsorption process with the dynamic one.
Adsorbent Synthesis and Characterization
Functionalized MgSiO3 using DL-cysteine (DL-cysteine-hydrochloride monohydrate 99.0%, Fluka, Buchs, Switzerland) as extractant was obtained using 0.1 g of DL-cysteine. This amount of extractant was dissolved in 25 mL of deionized water. The obtained solution was mixed with 1 g of support (MgSiO3, 60-100 mesh, Merck, Darmstadt, Germany), corresponding to a ratio support: extractant of 1:0.1 and they were brought into contact for 24 h in stand-by (SIR, solvent impregnated resin-dry method) [49,51]. After that, the obtained material was dried in the oven (Pol-Eko SLW 53 STD, POL-EKO-APARATURA, Wodzisław Śląski, Poland) for 24 h at 323 K. The appearance of the adsorbent material can be seen in Figure 1. The specific surface area, cumulative pore volume, and pore size of the adsorbent material were measured with a Micromeritics ASAP 2020 instrument (Brunauer-Emmett-Teller, BET, surface area analysis and Barrett-Joyner-Halenda, BJH, pore size and volume analysis, at liquid nitrogen temperature, −196 °C) from Micromeritics Instrument, Norcross, GA, USA. The point of zero charge (pHPZC) and density were determined using the solid addition method and pycnometer method, respectively. Furthermore, the adsorbent was analyzed by scanning electron microscopy (SEM) and energy dispersive X-ray (EDX) spectroscopy, using the FEI Quanta FEG 250 instrument (FEI, Eindhoven, The Netherlands), and Fourier-Transformed Infrared (FTIR) spectroscopy using a Bruker Platinum ATR-QL Diamond apparatus (Bruker Optik GmbH, Ettlingen, Germany) in the range of 4000-400 cm −1 .
Batch Adsorption Experiments
The effect of the initial concentration of Pd(II) upon the adsorption capacity of the material was studied using Pd(II) solutions of different concentrations (5, 10, 20, 30, 40, and 50 mg L−1), prepared by the appropriate dilution of a stock solution of palladium (II) chloride (5 wt% in 10 wt% HCl, Sigma-Aldrich, St. Louis, MO, USA). Adsorptions were carried out at pH = 2 for one hour at 298 K in a thermostatic Julabo SW23 water bath shaken at a rotation speed of 200 rpm, after which the equilibrium concentration was determined. The adsorption mechanism was established by modeling the obtained experimental data with three adsorption isotherms in non-linear form (Langmuir, Freundlich and Sips):
Langmuir: $q_e = \frac{q_m K_L C_e}{1 + K_L C_e}$ (1)

Freundlich: $q_e = K_F C_e^{1/n_F}$ (2)

Sips: $q_e = \frac{q_m K_S C_e^{1/n_S}}{1 + K_S C_e^{1/n_S}}$ (3)

where $q_e$ is the equilibrium adsorption capacity (mg g−1), $q_m$ is the maximum adsorption capacity (mg g−1), $K_L$ is the Langmuir constant, $C_e$ is the equilibrium concentration of Pd(II) in solution (mg L−1), $K_F$ is the Freundlich constant, $1/n_F$ is the heterogeneity factor, $K_S$ is the Sips constant, and $1/n_S$ is the Sips model exponent. Three independent replicates were performed for each batch adsorption experiment.
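A hedged sketch of how such batch data can be reduced and fitted: equilibrium capacities are obtained from the usual mass balance q_e = (C_0 − C_e)·V/m (a standard relation, not quoted explicitly in the text), and the non-linear isotherms above are fitted with scipy.optimize.curve_fit. The solution volume, adsorbent dose and numerical values below are assumptions for illustration, not the measured data of this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def q_eq(c0, ce, volume_l, mass_g):
    """Mass-balance equilibrium capacity q_e = (C0 - Ce) * V / m, in mg g^-1."""
    return (np.asarray(c0) - np.asarray(ce)) * volume_l / mass_g

def langmuir(ce, qm, kl):
    return qm * kl * ce / (1.0 + kl * ce)

def freundlich(ce, kf, n_inv):
    return kf * ce ** n_inv

def sips(ce, qm, ks, n_inv):
    return qm * ks * ce ** n_inv / (1.0 + ks * ce ** n_inv)

if __name__ == "__main__":
    # Hypothetical batch results (initial and equilibrium concentrations, mg L^-1);
    # the volume (25 mL) and adsorbent mass (0.1 g) are illustrative values only.
    c0 = np.array([5, 10, 20, 30, 40, 50], dtype=float)
    ce = np.array([0.4, 1.2, 4.5, 10.0, 17.5, 26.0])
    qe = q_eq(c0, ce, volume_l=0.025, mass_g=0.1)

    for name, model, p0 in [("Langmuir", langmuir, (10, 0.1)),
                            ("Freundlich", freundlich, (1, 0.5)),
                            ("Sips", sips, (10, 0.1, 0.8))]:
        popt, _ = curve_fit(model, ce, qe, p0=p0, maxfev=10000)
        ss_res = np.sum((qe - model(ce, *popt)) ** 2)
        ss_tot = np.sum((qe - qe.mean()) ** 2)
        print(f"{name}: parameters = {np.round(popt, 3)}, R^2 = {1 - ss_res / ss_tot:.3f}")
```

Comparing the resulting R² (or a similar goodness-of-fit criterion) across the three models is what supports the conclusion, stated in the abstract, that the Sips isotherm best describes the batch data.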
Column Adsorption Experiments
The obtained sorbent, MgSiO3 functionalized using DL-cysteine (MgSiO3-cys), was used for dynamic studies in a fixed-bed column. The Pd(II) solution had an initial concentration of 60 mg L −1 and was prepared by appropriate dilution of a stock solution of palladium(II) chloride (5 wt% in 10 wt% HCl, Sigma-Aldrich, St. Louis, MO, USA).
The experimental setup consisted of a glass column (diameter 20 mm and height 300 mm) loaded with three different amounts of adsorbent material (10, 5, and 3 g), corresponding to three layer heights (70, 35, and 21 mm, respectively) (Figure 2). The Pd(II) solution was transferred into the experimental column using a peristaltic pump (Heidolph SP quick, Heidolph Instruments, Schwabach, Germany) at a flow rate of 7 mL min −1 . The studies were performed on successive 25 mL samples. The retention times of the solution in the adsorption column, corresponding to the amounts of adsorbent mentioned above, were approximately 3, 1.5, and 1 min, respectively. The residual concentration of Pd(II) was measured using an atomic absorption spectrometer type Varian AAS 280 FS (Varian Inc., Mulgrave, Australia).
The breakthrough curves were modeled using the linearized forms of the Bohart-Adams, Thomas, Yoon-Nelson, and Clark models:

Bohart-Adams: $\ln(C_t/C_0) = k_{BA} C_0 t - k_{BA} N_0 Z / F$

Thomas: $\ln(C_0/C_t - 1) = \dfrac{k_{Th} q_{Th} m}{Q} - k_{Th} C_0 t$

Yoon-Nelson: $\ln[C_t/(C_0 - C_t)] = k_{YN} t - \tau k_{YN}$

Clark: $\ln[(C_0/C_t)^{n-1} - 1] = -rt + \ln A$

where C 0 is the influent concentration (mg L −1 ); C t is the solution concentration at time t in the effluent (mg L −1 ); t is time (min); k BA is the kinetic constant of the Bohart-Adams model (L mg −1 min −1 ); F is the linear velocity calculated by dividing the flow rate by the column section area; Z is the bed depth of the column (cm); N 0 is the saturation concentration (mg L −1 ); k Th is the Thomas rate constant (L min −1 mg −1 ); q Th is the equilibrium compound uptake per g of the sorbent (mg g −1 ); m is the mass of sorbent (g); Q is the flow rate (mL min −1 ); k YN is the Yoon-Nelson rate constant (min −1 ); τ is the time required for 50% adsorbate breakthrough (min); n is the Freundlich constant determined experimentally in batch; r is the Clark model constant (min −1 ); and A is the Clark model constant.
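To make the dynamic modeling concrete, the sketch below fits the four linearized breakthrough models by ordinary linear regression. The time/concentration series is synthetic; only the influent concentration (60 mg L−1) and the batch Freundlich constant (n = 1.81) are taken from the text.

```python
# Hedged sketch: fitting the linearized Bohart-Adams, Thomas, Yoon-Nelson and
# Clark breakthrough models by simple linear regression. The time series below
# is a synthetic placeholder; C0 = 60 mg/L mirrors the influent concentration
# used in the column experiments and n = 1.81 is the batch Freundlich constant.
import numpy as np

t = np.array([50, 100, 150, 200, 250, 300], dtype=float)   # time, min
ct = np.array([1.0, 6.0, 18.0, 33.0, 47.0, 56.0])           # effluent conc., mg/L
c0, n_freundlich = 60.0, 1.81

def linfit(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

# Bohart-Adams: ln(Ct/C0) = kBA*C0*t - kBA*N0*Z/F  -> slope = kBA*C0
ba = linfit(t, np.log(ct / c0))
# Thomas: ln(C0/Ct - 1) = kTh*qTh*m/Q - kTh*C0*t   -> slope = -kTh*C0
th = linfit(t, np.log(c0 / ct - 1.0))
# Yoon-Nelson: ln(Ct/(C0-Ct)) = kYN*t - tau*kYN    -> slope = kYN
yn = linfit(t, np.log(ct / (c0 - ct)))
# Clark: ln((C0/Ct)**(n-1) - 1) = -r*t + ln(A)     -> slope = -r
ck = linfit(t, np.log((c0 / ct) ** (n_freundlich - 1.0) - 1.0))

print("Bohart-Adams slope/intercept:", ba)
print("Thomas slope/intercept:", th)
print("Yoon-Nelson slope/intercept:", yn)
print("Clark slope/intercept:", ck)
```

The fitted slopes and intercepts map directly onto the model constants (for example, the Yoon-Nelson slope is k_YN and the Clark slope is −r), and the model giving the highest determination coefficient would be preferred.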
Characterization of the MgSiO 3 -Cys
BET analysis showed that the specific surface area was S BET = 166 m 2 g −1 . The average pore size and cumulative pore volume calculated using the BJH method were 25.24 nm and 0.43 cm 3 g −1 , respectively. The value of pH PZC was six and the density was about 4 g cm −3 . Figure 3 shows the main morphological changes of the surface of the adsorbent material that appeared after impregnation. The presence of functional groups in MgSiO 3 -cys was investigated using energy dispersive X-ray spectroscopy (EDX) (Figure 4). The EDX spectra of MgSiO 3 -cys show both magnesium silicate peaks (O, Mg, Si) and the N, S, and C characteristic peaks, confirming the presence of peaks specific to the functionalized sorbent. Infrared spectroscopy (FTIR) was used to confirm the MgSiO 3 functionalization. The FTIR spectra of commercial magnesium silicate and of functionalized MgSiO 3 -cys are presented in Figure 5. Magnesium silicate-specific peaks can be observed in both spectra: a large peak at 1052 cm −1 and another peak at 600 cm −1 , corresponding to Si-O stretching vibrations, and the peak at 800 cm −1 , assigned to Si-O-Si bending vibrations. The MgSiO 3 spectrum shows a band at ~3500 cm −1 and a peak at 1637 cm −1 , specific to the -OH bond from H 2 O molecules. The FTIR spectrum of magnesium silicate functionalized with DL-cysteine shows specific peaks for the -SH bond at ~2620 cm −1 , the -NH 2 bond at ~2245 cm −1 , and the -COOH bond at ~1325 cm −1 . The intensities of those peaks are lower compared to those of MgSiO 3 due to the small cysteine amount used in the functionalization process. This process leads to an attenuation of the MgSiO 3 -specific vibrations.
Equilibrium Adsorption Studies. Adsorption Isotherms
The maximum adsorption capacity of the MgSiO 3 -cys material was determined based on adsorption experiment data using three isotherm models: Langmuir, Freundlich, and Sips [66]. The equilibrium adsorption capacity was determined by monitoring the dependence of the material's adsorption capacity vs. initial Pd(II) concentration, illustrated in Figure 6. Augmentation of the initial Pd(II) solution concentration led to an increase in the adsorption capacity up to an approximately constant value. The highest Pd(II) adsorption capacity (q m ) on DL-cysteine functionalized magnesium silicate, for a steady-state concentration of 40 mg L −1 , was 9.23 mg g −1 . Figure 7 shows the equilibrium isotherms for the studied material. The parameters of the isotherm models for palladium ion adsorption on the studied functionalized material are presented in Table 1.
The existing literature data [67] suggest that most metallic ion adsorption processes on the MgSiO 3 -cys material obtained by chemical modification are multilayer processes and the surface is heterogeneous. At the same time, the adsorption mechanism is controlled by chemisorption processes due to the strong chelation between metal ions and OH − groups or free electron pairs of S- and/or N-containing pendant groups present on the surface of the chemically functionalized material.
Using the Sips isotherm to model the obtained experimental data leads to a 1/n S parameter value that deviates from unity, suggesting the heterogeneity of the adsorbent surface [68]. Regardless of the extractant used for functionalization, the Sips model better describes the adsorption process, reflected by the highest correlation coefficient (R 2 ) values. In the case of Pd(II) adsorbed on DL-cysteine functionalized magnesium silicate, the correlation coefficient of the Sips isotherm, R 2 = 0.9953, is higher than those obtained using the Langmuir and Freundlich adsorption isotherms. In addition, the calculated equilibrium adsorption capacity of the Sips model (9.62 mg g −1 ) was consistent with that obtained experimentally (9.23 mg g −1 ).
Bed Height Column (BHC) Influence on the Pd(II) Breakthrough Curves
An important parameter in the sorption process is the bed depth. Pd(II) retention in a fixed-bed column depends, among other factors, on the sorbent quantity, reflected by the bed depth of the column. Three different heights of the MgSiO 3 -cys sorbent filling the fixed-bed column were used in the experiments: 2.1, 3.5, and 7.0 cm.
According to Figure 8, the column adsorption process is highlighted by establishing breakthrough curves, which represent the variation of the ratio between the residual concentration of Pd(II) and its initial concentration (C rez /C 0 ), depending on the volume of effluent passed through the column, for three distinct amounts of material. Volumes of Pd(II) solution of 60 mg L −1 concentration were varied between 1500 and 3000 mL, depending on the amount of adsorbent material in the column.
The mass transfer area is the active surface of the bed of adsorbent material where the adsorption of Pd(II) ions takes place [65]. The first part of the column adsorption process takes place rapidly, through the adsorption of Pd(II) on the surface of the material, called the primary adsorption zone. This is why, at the beginning, the collected samples do not contain Pd(II) ions. The second part of the adsorption process is slower and is characterized by the adsorption of Pd(II) ions on the adsorbent material, achieving mass transfer. The adsorption process is complete, the concentration of Pd(II) ions varies from 60 to 0 (mg L −1 ), and the saturation of the material is total.
For the three different bed depths used, as the bed depth increases (from 2.1 to 7 cm), the breakthrough point increases (from 100 to 325 min). A rational explanation of this behavior is that, with increasing column sorbent height, a greater number of binding sites become available and the quantity of Pd(II) removed increases accordingly [69].
A higher BHC leads to a longer contact time between the waste solution and the MgSiO 3 -cys sorbent (from about 1 min to 3 min), having a positive influence on Pd(II) adsorption. Figure 8 shows that the breakthrough curves change from a steep concave shape to a flatter concave shape as the BHC increases, which corresponds to an enlargement of the mass transfer area [70][71][72]. However, too high a layer of adsorbent material in the column is not recommended as it increases the flow resistance [73].
Modeling for Adsorption Behaviors of Pd(II) on MgSiO 3 -Cys
Various practical parameters, such as the sorbent capacity, the contact time between adsorbent and adsorbate, the column operating life span, the regeneration time, and the prediction of the necessary operating time, have a significant influence upon the operation of the column. Knowing these parameters is important to model the adsorption process in a fixed-bed column.
The four models tested (Bohart-Adams, Yoon-Nelson, Thomas, and Clark) provide detailed conclusions about the process mechanism. The adsorption column is subjected to axial dispersion, external film resistance, and intraparticle diffusion resistance [74].
The Bohart-Adams model is used for one-component systems and provides information on the saturation concentration of the material. This model characterizes the beginning of the column penetration, gives information about the adsorbent material used, and shows the maximum concentration at which the column is instantly broken through [60]. The Yoon-Nelson model is used to model a one-component system and provides information about the time by which half of the column is broken through. It is a purely theoretical model which does not focus on the properties of the adsorbent, the type of adsorbent, or the physical characteristics of the fixed bed [74]. The Thomas model provides information on the maximum solid phase concentration of the adsorbent and on the rate constant [62], while the Clark model describes, very well, the dynamic adsorption process [74]. Figure 9 illustrates the influence of sorbent dose (10, 5, and 3 g) corresponding to the three previously mentioned BHCs (7, 3.5, and 2.1 cm) on the ln (C t /C 0 ) vs. time curves. The graphic shows a direct influence of the sorbent amount upon the maximum adsorption capacity, N 0 , and the kinetic constant, k BA , which indicates that, kinetically, the process is controlled by the mass transfer in the first part of the breakthrough process. The calculated regression coefficients have relatively low values (between 0.9717 and 0.9755); therefore, one can assume that the model is not the most suitable to describe the Pd(II) adsorption mechanism on MgSiO 3 -cys in a dynamic regime.
Figure 10 shows the influence of sorbent dose (10, 5, and 3 g) upon the ln[(C 0 /C t ) − 1] vs. time curves. The figure illustrates that a higher sorbent amount leads to a decrease in the Thomas rate constant k Th . The reason for this behavior is the adsorption driving force given by the difference between the Pd(II) concentration in the sorbent and in the solution [75][76][77][78]. The determination coefficient R 2 values (between 0.9704 and 0.9961) indicated positive correlation, but we cannot assume that this model is best fitted for the adsorption mechanism. The adsorption capacity q Th and the kinetic constant are presented in Table 2.
Table 2. Pd(II) adsorption process parameters in a fixed-bed column.
Figure 11 illustrates the relationship of ln[C t /(C 0 − C t )] vs. time for the three sorbent doses. Increasing the sorbent mass leads to an increase in the breakthrough time and also in the k YN constant, as presented in Table 2. The determination coefficient R 2 ranged between 0.9722 and 0.9941, but we cannot assume that this model best describes the adsorption process.
Figure 11. Yoon-Nelson model for the adsorption of Pd(II) in a fixed-bed column at various MgSiO 3 -cys amounts.
Clark Model
The relationship ln[(C0/Ct) n−1 − 1] vs. time for all adsorbent doses studied is shown in Figure 12, where n is the Freundlich constant determined experimentally in the batch experiments. The value of this parameter was 1.81. The high values of the determination coefficient R 2 (between 0.9881 and 0.9973) certify that the Clark model best describes the adsorption in a fixed-bed column. The values of the r and A parameters are presented in Table 2.
The relationship ln[(C0/Ct) n−1 − 1] vs. time for all adsorbent doses studied is shown in Figure 12, where n is the Freundlich constant, determined experimentally in batch section. The value of this parameter was 1.81. The high values of the determination coefficient R 2 (between 0.9881 and 0.9973) certify that the Clark model best describes the adsorption in a fixed-bed column. The value of r and A parameters are presented in Table 2. Table 3 provides a comparison between the DL-cysteine functionalized magnesium silicate obtained in this study and other commonly used sorbents for Pd(II) removal in batch systems. The equilibrium adsorption capacity value is comparable to or even higher than those obtained in previous studies using various adsorbent materials. This behavior is based on the presence of the -SH and -NH 2 groups in functionalized material structures, suggesting the surface adsorption of Pd(II) by free electrons or by creating hydrogen bridges. In addition, the experimental conditions in which the adsorption studies were performed, for each material, are highlighted.
Conclusions
The current paper presents a new adsorbent, MgSiO 3 functionalized with DL-cysteine (cys), designed for palladium ion recovery from waste solutions.
SEM, EDX and FTIR analyses revealed morphological changes in the surface of the adsorbent material after impregnation and confirmed the functionalization of MgSiO 3 with DL-cysteine.
The modeling of the experimental data obtained in the batch system showed that the Sips isotherm best describes the adsorption process, because the correlation coefficient R 2 approaches 1 and the maximum calculated adsorption capacity (9.62 mg g −1 ) is close to the experimentally determined value (9.23 mg g −1 ). The obtained adsorption capacity is better than those reported in the literature for other adsorbent materials due to the presence of the -SH and -NH 2 groups in the structure of the functionalized material which allow the surface adsorption of Pd(II) by free electrons or by creating hydrogen bridges.
Palladium ion adsorption studies in a dynamic regime using a fixed-bed column are influenced by the adsorbent bed height (the output flow rate decreases as the fixed-bed height increases). The adsorption process is characterized by the Clark model for all the MgSiO 3 -cys material bed heights studied. | 2020-12-24T09:13:56.993Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "c93b06446010d1af76c87829f8eff4a7b17fbb4d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/17/24/9500/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "163ea9fc93901c717ab5ff3d91f4040ad082b141",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
37549555 | pes2o/s2orc | v3-fos-license | Insights into the key interactions between human protein phosphatase 5 and cantharidin using molecular dynamics and site-directed mutagenesis bioassays
Serine/threonine protein phosphatase 5 (PP5) is a promising novel target for anticancer therapies. This work aims to uncover the key interactions at the atomic level between PP5 and three inhibitors (cantharidin, norcantharidin and endothall). We found that, unlike previous reports, Arg 100 contributes less to PP5-inhibitor binding, and the residues His 69, Asn 128, His 129, Arg 225, His 252 and Arg 250 are of importance to PP5-inhibitor binding. The hydrophobic interactions established with the residues Val 254, Phe 271 and Tyr 276, and especially Glu 253, are very important to enhance the inhibitory interaction. We suggest that, to increase the inhibitory activity, interactions of the inhibitor with three negatively charged unfavorable-interaction residues, Asp 99, Glu 130 and Asp 213, should be avoided. However, interactions of the inhibitor with the favorable-interaction residue Arg 250 could enhance the inhibitory activity. Manganese ion 2 (MN2) contributes unfavorably to the total interaction free energies. The coordination between MN2 and chemical groups of the inhibitor should be eliminated. This work provides insight into how cantharidin and its analogs bind to PP5c at the atomic level and will facilitate modification of cantharidin-like chemicals to rationally develop more specific and less cytotoxic anti-cancer drugs.
Supplementary Tables
Table S2. Calculated average distances between manganese and its ligated atoms in the three complexes (in Å) compared with X-ray experimental values. a
a The labels of residues and the names of the atoms, adopted from the PDB convention, are given in Figure S4.
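For context, trajectory-averaged distances of the kind compared with the X-ray values in Table S2 are typically obtained by measuring the metal-ligand distance in every saved frame and averaging. A minimal sketch of that calculation is given below; the coordinates are made-up placeholders and no claim is made about the specific analysis tool used by the authors.

```python
# Hedged sketch: computing a trajectory-averaged manganese-ligand distance
# (mean +/- standard deviation), the kind of quantity reported in Table S2.
# The coordinate arrays are illustrative placeholders, not frames taken from
# the actual simulations.
import numpy as np

# Hypothetical per-frame Cartesian coordinates (in Angstrom) of one Mn ion
# and one ligated atom over a handful of snapshots.
mn = np.array([[0.00, 0.00, 0.00],
               [0.02, -0.01, 0.01],
               [0.01, 0.02, -0.02]])
ligand_atom = np.array([[2.10, 0.30, 0.10],
                        [2.15, 0.28, 0.05],
                        [2.05, 0.33, 0.12]])

distances = np.linalg.norm(mn - ligand_atom, axis=1)
print(f"Mn-ligand distance: {distances.mean():.2f} +/- {distances.std(ddof=1):.2f} A")
```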
Mutation sites are indicated by lowercase. All results are presented as the mean ± S.D. for three independent experiments.

Derivation of manganese-related force field parameters. The Mn(II)-Mn(II) center derived from our three systems was bound to three His residues, two Asp residues, and one Asn residue on the protein side. For convenience, the residues bound to the manganese ions were approximated: three ethyl imidazole molecules were used to mimic the side chains of the three His residues, two propionic acid molecules were used to mimic the side chains of the two Asp residues, and a propionamide molecule was used to mimic the side chain of the Asn residue. On the ligand side, the integral structures of the three small molecules (cantharidin, norcantharidin, and endothall) were retained in our three systems; cantharidin and norcantharidin were present in the hydrolyzed form. We established three models (Model I), summarized in Figure S3, to perform QM computations. All QM computations were performed using the Gaussian 09 program package. 2 The center of PP5c contains two Mn(II) ions, with five electrons occupying the 3d orbitals of each Mn(II). Because the total energy calculated for the antiferromagnetic coupling was higher than that for the ferromagnetic coupling in the energy-minimized geometries, only the ferromagnetic coupling was considered for the Mn(II)-Mn(II) center; the parallel spins on each Mn 2+ were ferromagnetically coupled. 3 The charge of cantharidin, norcantharidin, and endothall is -2 in the three models. The total charge in the three models is 0, and the total spin S is 11/2 for the ferromagnetically coupled state. The three models were optimized with the long-range corrected hybrid density-functional theory (DFT) method WB97XD. 4 The basis set 6-31G** was used for the carbon (C), hydrogen (H), oxygen (O), and nitrogen (N) atoms, and the basis set LANL2DZ was employed for the manganese (Mn) ions. After the geometry optimization, frequency analysis was performed at the same level to confirm the existence of true local minima without any imaginary frequency. The frequency analysis also produced the Cartesian Hessian matrices that were required for the following step.
The bond and angle force constants (K r and K θ ) were obtained using Seminario's method. K r and K θ are derived from the Cartesian Hessian matrices obtained from the frequency analysis.
The derivations of the manganese-related force field parameters are compatible with the AMBER force field. The energy function is given by equation (1):

$E_{total} = \sum_{bonds} K_r (r - r_{eq})^2 + \sum_{angles} K_\theta (\theta - \theta_{eq})^2 + \sum_{dihedrals} \frac{V_n}{2}\left[1 + \cos(n\phi - \gamma)\right] + \sum_{i<j}\left[\frac{A_{ij}}{R_{ij}^{12}} - \frac{B_{ij}}{R_{ij}^{6}} + \frac{q_i q_j}{\varepsilon R_{ij}}\right]$ (1)

The four terms in equation (1) relate to the energies of bond stretching, angle bending, dihedrals and nonbonded van der Waals and electrostatic interactions, respectively. 5 In our study, we derived bond-stretching and angle-bending parameters for the manganese ion. According to Seminario's method, the bond-stretching force constant of a bond A-B can be extracted from the corresponding 3×3 submatrix of the Hessian matrix in Cartesian coordinates. Calculations on the three models shown in Figure 1 were performed as described above using the MTK++ program. 6 Neglecting the dihedral parameters is a common procedure for this symmetric geometry of the metal coordination center. 7
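A compact illustration of the Seminario procedure described above is sketched below. The Hessian sub-block, coordinates, and units are placeholders, and real workflows (for example MTK++) additionally handle unit conversion and averaging of the A-to-B and B-to-A estimates.

```python
# Hedged sketch of Seminario's method for a bond-stretching force constant:
# the eigenvalues of the negated 3x3 interatomic Hessian block are projected
# onto the bond unit vector. The matrix and coordinates below are illustrative
# placeholders, not values taken from the QM calculations in this work.
import numpy as np

# Negative of the 3x3 sub-block d2E/(dxA dxB) of the Cartesian Hessian
# (units consistent with the QM output, e.g. hartree/bohr^2 before conversion).
k_ab_block = -np.array([[-0.30,  0.02,  0.00],
                        [ 0.02, -0.28,  0.01],
                        [ 0.00,  0.01, -0.25]])

r_a = np.array([0.00, 0.00, 0.00])   # Cartesian coordinates of atom A
r_b = np.array([2.10, 0.10, 0.00])   # Cartesian coordinates of atom B
u_ab = (r_b - r_a) / np.linalg.norm(r_b - r_a)

# The illustrative block is symmetric, so eigh is appropriate here.
eigvals, eigvecs = np.linalg.eigh(k_ab_block)

# Seminario estimate: k = sum_i lambda_i * |u_AB . v_i|
k_bond = sum(lam * abs(np.dot(u_ab, eigvecs[:, i]))
             for i, lam in enumerate(eigvals))
print("estimated bond-stretching force constant:", k_bond)
```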
Sampling was conducted by the Merz-Singh-Kollman (MK) 8 while the restrained electrostatic potential (RESP) method was used to derive atom-centered partial charges. Electrostatic potential (ESP) charge fitting was performed by QM computations using Gaussian 09, with keywords WB97XD/6-31G** for C, H, O and N, but LANL2DZ for Mn. A Van der Waals radius of 1.69 Å was assigned to the manganese center. 9 This task was conducted based on the RESP fitting protocol implemented in the MTK++ program. This protocol restrains the backbone heavy atoms (CA, N, C, O) to those values found in the AMBER parm94 force field and has given the most impressive performance in previous reports.
Calculated vibrational frequencies.
To check the quality of the force field parameters, we were analyzed with the PTRAJ module of the AMBER package. Hydrogen bonds (H-bond) were also assigned using the PTRAJ on the basis of the following criteria: the distance between the proton donor and acceptor atoms was ≤3.5 Å, and the angle formed by the donor, hydrogen, and acceptor was ≥120º . 13 The force field parameters obtained in our study were applied to the molecular dynamics (MD) simulations of three systems in explicit solvents ( Figure S2). The RESP fitting charges were only applied to the Mn (II)-Mn (II) center of three systems indicated in Figure 2. The partial charges from the force field ff99SB set were used to assign the rest of parts on three systems. A bond was created to mimic the coordination interaction between the manganese ion and manganese-related atom both on the protein side and ligand side. These systems were soaked in a rectangular box of TIP3P water 14 with the smallest distance between the protein surface and cell boundary set to 10 Å. An appropriate number of counterions were added to neutralize the global charge of the entire system. For each system, we performed three individual 50ns MD at different seeds for production phase without any restraint was performed. This was preceded by 2000 steps energy minimization with a weak positional restraint to eliminate unfavorable contacts, 500 ps slowly heating in the canonical ensemble (NVT) from 0 K to 300 K and 500 ps of density equilibration to adjust the solvent density under 1 atm pressure in the isothermal-isobaric ensemble (NPT), and followed by a 5 ns constant pressure equilibration which was performed unrestrained at 300 K. All MD simulations were carried out using a Langevin thermostat 15 with a collision frequency of 2.0 ps -1 dynamics for temperature control. Constant pressure was controlled with an average pressure of 1 atm using a Berendsen barastat. 16 The time step was set to 2 fs. The periodic boundary condition was enabled during MD simulation. The particle mesh ewald (PME) method 17 was used to handle the long-range electrostatic interactions. The distance cutoff for the real-space nonbond interactions was set to 12 Å. All bonds with hydrogen atoms were constrained using the SHAKE algorithm. 18 The three individual 50ns MD trajectories for each system was recorded every 10 ps for subsequent analysis.
Binding free energy calculation and spectrum of free energy decomposition. The binding free energy for the three systems was estimated by the Molecular Mechanics-Poisson-Boltzmann Surface Area (MM-PBSA) approach as implemented in AMBER12, using the same force field parameters as described above. MM-PBSA calculations were performed on 1000 snapshots extracted from the 40~50 ns production trajectories with a time interval of 10 ps. The binding free energy (ΔG_bind) of each system was evaluated as follows:

$\Delta G_{bind} = G_{complex} - (G_{receptor} + G_{ligand})$

$G = E_{MM} + G_{solv} - TS$

where E_MM is the molecular mechanics energy, G_solv is the solvation free energy, and TS is the change of the conformational entropy upon ligand binding, which was not considered here because our aim is to compare the binding affinities of the three ligands in our three systems.
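A minimal sketch of this bookkeeping is given below, assuming illustrative component energies in kcal/mol; it simply combines the molecular mechanics and solvation terms and omits the entropy term, as in the text.

```python
# Hedged sketch of the MM-PBSA bookkeeping described above:
# G = E_MM + G_solv (the -TS term is omitted, as in the text), and
# dG_bind = G_complex - (G_receptor + G_ligand). All component energies
# below are illustrative placeholders in kcal/mol, not computed values.
def free_energy(e_internal, e_vdw, e_ele, g_polar, g_nonpolar):
    e_mm = e_internal + e_vdw + e_ele     # molecular mechanics energy
    g_solv = g_polar + g_nonpolar         # polar (PB) + nonpolar (SA) solvation
    return e_mm + g_solv

g_complex  = free_energy(-1500.0, -210.0, -3200.0, 1450.0, -45.0)
g_receptor = free_energy(-1480.0, -180.0, -3100.0, 1400.0, -42.0)
g_ligand   = free_energy(  -15.0,   -2.0,   -60.0,   35.0,  -3.0)

dg_bind = g_complex - (g_receptor + g_ligand)
print(f"dG_bind = {dg_bind:.1f} kcal/mol")
```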
In order to estimate the contribution of key residues to the ligand binding, the protein-ligand interaction spectrum of each complex was decomposed based on a per-residue method 21:

$\Delta G_{residue} = \Delta E_{vdW} + \Delta E_{ele} + \Delta G_{pol} + \Delta G_{nonpol}$ (5)

where ΔE_vdW and ΔE_ele are the van der Waals and electrostatic contributions of the residue, ΔG_pol is the polar solvation contribution, and ΔG_nonpol is the nonpolar solvation contribution estimated from the SASA. The decomposition energies for each residue in the complex are further broken down into backbone, sidechain, and total energy contributions. All energy components in equation 5 were calculated using the same snapshots as the binding free energy calculation.
Computational alanine scanning mutagenesis. The computational alanine scanning mutagenesis (ASM) protocol has been widely used in structure-based drug design and protein engineering for evaluating the contributions of individual residue side chains to protein−protein or protein−ligand binding free energy and understanding the structural and energetic characteristics of the hot-spots.
It has been shown to be an effective and reliable method, and can now be applied with an accuracy of 1 kcal/mol. ASM is capable of anticipating the experimental results of site-directed mutagenesis and achieves an overall success rate of 80% and an 82% success rate in residues for which alanine mutation causes an increase in the binding free energy > 2.0 kcal/mol (warm- and hot-spots). 22 To further evaluate the impact of mutational effects on the ligands binding in our three systems, ASM was applied to estimate the relative binding free energy of different PP5c mutants to the ligands.
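The warm-/hot-spot criterion mentioned above (an increase in binding free energy of more than 2.0 kcal/mol upon alanine mutation) can be illustrated with the small sketch below; the ΔG values and residue names are hypothetical placeholders, and the formal ΔΔG definition follows in the next paragraph.

```python
# Hedged sketch of the alanine-scanning comparison: a residue is flagged as a
# warm- or hot-spot when mutation to alanine raises the binding free energy by
# more than 2.0 kcal/mol, as stated above. All dG values are illustrative
# placeholders, not results from this study.
dg_wild_type = -25.4  # kcal/mol, hypothetical wild-type binding free energy

dg_mutants = {          # hypothetical per-mutant binding free energies
    "H69A":  -22.1,
    "R100A": -24.9,
    "N128A": -22.8,
    "R250A": -21.0,
}

for mutant, dg_mut in dg_mutants.items():
    ddg = dg_mut - dg_wild_type            # ddG = dG(mutant) - dG(wild type)
    label = "warm/hot-spot" if ddg > 2.0 else "minor contribution"
    print(f"{mutant}: ddG = {ddg:+.1f} kcal/mol ({label})")
```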
The binding free energies for the complex and for the alanine mutants were calculated using the MM-PBSA method. We used the same 1000 snapshots taken from the production phase MD trajectory for performing the alanine scanning. The alanine mutant trajectory was initially generated from the wild-type molecular dynamics trajectory by truncating the side chain of the mutated residue at Cγ, replacing it with a hydrogen atom, and setting the Cβ-H direction to that of the former Cβ-Cγ. The relative binding free energy is the free energy difference between the wild-type and alanine mutants and is defined as:

$\Delta\Delta G_{bind} = \Delta G_{bind}^{mutant} - \Delta G_{bind}^{wild\text{-}type}$

Site-directed mutagenesis. The sequence encoding the ORF of human PP5 (HuPP5) was isolated from the liver cancer cell line SMMC-7721 by PCR using primers that introduced an Nde I site and a 6xHis tag in the forward primer, and an Xho I site in the reverse primer. The PCR products were cloned into pCR2.1 (Invitrogen, Carlsbad, CA, USA), transformed into DH5α (TaKaRa, Dalian, China) and then sequenced (AuGCT, Inc., Beijing, China). The mutants were generated using the QuikChange® site-directed mutagenesis kit (Stratagene, La Jolla, CA, USA) following the instruction manual. The mutagenic primers were designed based on the sequence of wild-type HuPP5 and are listed in Table S5. The mutants were sequenced to confirm the presence of the desired mutations and the absence of undesired mutations. Inhibition assays were carried out by adding compounds to the assay mixture 5 min before starting the reaction with the addition of p-NPP as described above. Compounds were dissolved in DMSO (dimethyl sulphoxide) to obtain 1 M stock solutions and diluted to the desired concentrations with assay buffer before use. A non-enzyme reaction was used as the background control, while a non-compound reaction was taken as the full-activity control. The inhibition ratio was calculated as the percentage of the A 410 value of the inhibition assay reaction divided by that of the full-activity control, having subtracted the background control A 410 value from both. The IC 50 values were obtained using GraphPad Prism 5.0. | 2018-04-03T04:03:22.216Z | 2015-07-20T00:00:00.000 | {
"year": 2015,
"sha1": "0fbbefe55a45c3cfe46aa0a5555d868fef36ecb0",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep12359.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0fbbefe55a45c3cfe46aa0a5555d868fef36ecb0",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
234359859 | pes2o/s2orc | v3-fos-license | Dermal bacterial LPS-stimulation reduces susceptibility to intradermal Trypanosoma brucei infection
Infections with Trypanosoma brucei sp. are established after the injection of metacyclic trypomastigotes into the skin dermis by the tsetse fly vector. The parasites then gain access to the local lymphatic vessels to infect the local draining lymph nodes and disseminate systemically via the bloodstream. Macrophages are considered to play an important role in host protection during the early stage of systemic trypanosome infections. Macrophages are abundant in the skin dermis, but relatively little is known of their impact on susceptibility to intradermal (ID) trypanosome infections. We show that although dermal injection of colony stimulating factor 1 (CSF1) increased the local abundance of macrophages in the skin, this did not affect susceptibility to ID T. brucei infection. However, bacterial LPS-stimulation in the dermis prior to ID trypanosome infection significantly reduced disease susceptibility. In vitro assays showed that LPS-stimulated macrophage-like RAW264.7 cells had enhanced cytotoxicity towards T. brucei, implying that dermal LPS-treatment may similarly enhance the ability of dermal macrophages to eliminate ID injected T. brucei parasites in the skin. A thorough understanding of the factors that reduce susceptibility to ID injected T. brucei infections may lead to the development of novel strategies to help reduce the transmission of African trypanosomes.
African trypanosomes are single-celled protozoan parasites that are transmitted between mammals throughout sub-Saharan Africa by blood-feeding tsetse flies of the genus Glossina. Human African trypanosomiasis is caused by infection with the Trypanosoma brucei rhodesiense and T. b. gambiense sub-species, whereas animal African trypanosomiasis is caused by infection with T. congolense, T. vivax and T. b. brucei. The trypanosome life-cycle within the mammalian host begins after the intradermal (ID) injection of metacyclic trypomastigote forms into the skin dermis by blood-feeding tsetse flies. Once within the skin dermis, the parasites undergo adaptation to the mammalian host and directly invade the lymphatics 1,2 to access the local draining lymph nodes, before disseminating systemically via the bloodstream. During this initial establishment phase, the parasites also undergo a morphological change into the long-slender blood-stream forms that exist entirely extracellularly within the mammalian host [2][3][4][5] .
Much of our understanding of the host-pathogen interactions that influence mammalian trypanosome infections has been obtained from the study of experimental transmissions to mice infected via the intraperitoneal or intravenous routes. However, relatively little is known of the factors that are important in controlling disease progression following the natural ID route of infection. Mononuclear phagocytes (MNP) are a heterogeneous population of monocytes, macrophages, classical dendritic cells and tissue-specific phagocytes including the Kupffer cells in the liver, the microglia in the brain and the Langerhans cells of the skin epidermis. These cells are derived from haematopoietic precursors and are dependent on stimulation from the cytokine colony stimulating factor 1 (CSF1) for their development 6 . Kupffer cells in mice can readily phagocytose antibody-opsonized T. congolense parasites 7 , and data from experimental systemic transmissions show that macrophages play an important role in protection during the early stage of trypanosome infections by helping to limit the magnitude of the first parasitaemic wave 8,9 . MNP are abundant in the skin dermis, but little is known of their impact on ID trypanosome infections. The gradual accumulation of monocytes in the dermis after ID trypanosome injection has been reported, but these cells were not considered to play a significant role in parasite clearance 10
Results
Intradermal CSF1-Fc treatment increases the local abundance of CSF1R + MNP in the skin. Systemic administration of a CSF1-Fc conjugate significantly expands the abundance of colony stimulating factor 1 receptor (CSF1R)-expressing MNP in tissues 11 . We therefore used transgenic Csf1r-EGFP reporter mice to determine the effects of ID CSF1-Fc treatment on the local abundance of CSF1R + MNP in the skin. In these mice, EGFP expression is driven by the Csf1r promoter and stably and reliably reflects the expression of CSF1R protein 12 . In tissues from Csf1r-EGFP reporter mice all the MNP populations are labelled, even under circumstances where Csf1r mRNA is depleted 12 . The only other EGFP-expressing cells in these mice are the trophoblasts in the placenta that express Csf1r from a unique promoter. To restrict the effects of CSF1-administration to the dermis, groups of four mice were injected ID into the ear pinna with CSF1-Fc, or an equivalent volume of PBS as a control, for 3 days. Approximately 24 h after the final CSF1-Fc treatment the abundance of Csf1r-EGFP + MNP in the vicinity of the injection site was then quantified microscopically (Fig. 1A). This analysis showed that the number of EGFP + MNP in the dermis was significantly increased in CSF1-Fc-treated Csf1r-EGFP + mice when compared to PBS-treated controls ( Fig. 1B; P = 0.001, Student's t-test).
Intradermal CSF1-Fc treatment does not affect susceptibility to ID T. brucei infection. Next, groups of eight non-transgenic C57BL/6J mice were injected ID into the ear pinna with CSF1-Fc, or PBS as a control, as above, and 24 h after the final CSF1-Fc treatment the mice were injected ID with 1 × 10 5 T. brucei STIB 247 parasites into the same site. Blood parasitaemias were then measured at daily intervals for 30 d post-infection using the rapid matching method 13 , which has a minimum detection limit of approximately 4 × 10 5 parasites/mL of blood. These data showed that parasite kinetics after ID trypanosome injection were similar in mice from each treatment group. All the CSF1-Fc-treated and PBS-treated mice developed a parasitaemia that was first detectable from around 5 d post-infection, with similar parasite burdens detected at the peak of the first parasitaemia wave (Fig. 2; PBS, 6 × 10 6 parasites/mL; CSF1-Fc, 3 × 10 6 parasites/mL; P = 0.1984, Student's t-test, n = 8). Following the initial parasitaemia wave, a similar proportion of the mice in each treatment group displayed transient relapses in their parasitaemias during the remainder of the experiment (PBS, 4/8 mice; CSF1-Fc, 3/8 mice). Together, these data show that although the local abundance of CSF1R + MNP increased significantly in the dermis after CSF1-Fc treatment, this did not affect susceptibility to ID T. brucei infection.
ID LPS treatment decreases susceptibility to ID T. brucei infection. We next determined whether
LPS-stimulation directly within the dermis would influence susceptibility to ID T. brucei infection. Groups of Csf1r-EGFP reporter mice were injected ID with LPS, or an equivalent volume of PBS as a control, and 24 h later the abundance of Csf1r-EGFP + MNP cells in the dermis local to the injection site was determined as above. This analysis suggested that ID LPS exposure did not affect the local abundance of Csf1r-EGFP + MNP as similar numbers were observed in the dermis of PBS-treated control mice (Fig. 3A).
To determine the effects of ID LPS-stimulation on susceptibility to ID trypanosome infection, groups of C57BL/6J mice were first injected ID into the ear pinna with LPS (or PBS as a control) and 24 h later injected ID with 1 × 10 5 T. brucei STIB 247 parasites into the same site. The blood parasitaemia was then monitored for 30 d. Whereas all of the PBS-treated mice developed a detectable parasitaemia, only 5/8 LPS-treated mice had detectable parasitaemias during the 30 d observation period (Fig. 3B). The magnitude of the mean parasite burdens at the peak of the first wave in those mice that developed a detectable parasitaemia also differed significantly between treatment groups (PBS, 6 × 10 6 parasites/mL; LPS, 8 × 10 5 parasites/mL; P = 0.0365, Student's t-test). These data demonstrate that LPS-stimulation directly with the skin dermis significantly reduces susceptibility to ID trypanosome infection.
Figure 2. Intradermal CSF1-Fc treatment does not affect susceptibility to ID T. brucei infection. Groups of C57BL/6J mice were injected ID with CSF1-Fc, or PBS as a control, for three consecutive days and 24 h after the final injection the mice were injected ID with a 1 × 10 5 dose of T. brucei STIB 247 parasites (n = 8 mice/group). Heatmap shows the log 10 number of trypanosomes/mL of blood on each day after ID infection. Each row represents an individual mouse. UD = below detection limit (~ 5.4 log 10 parasites/mL blood).
LPS treatment enhances the ability of macrophage-like RAW264.7 cells to kill T. brucei in vitro. MNP have been suggested to play an important role in trypanosome infections by phagocytosing and destroying the parasites and via the production of cytotoxic/cytostatic mediators 14,15 . Bacterial LPS is a potent pathogen-associated molecular pattern that elicits a strong immune response in MNP via Toll-like receptor (TLR)-4 signalling 16 . Thus, although LPS treatment did not affect the abundance of CSF1R + MNP in the dermis, it is plausible that the pro-inflammatory properties of LPS-stimulated MNP enhanced their ability to clear the parasites from the injection site and thus reduce the initial parasitaemic peak. To test this hypothesis, murine macrophage-like RAW264.7 cells 17 were stimulated with LPS (or PBS as a control) and co-cultivated in vitro with T. brucei STIB 247 parasites. The number of viable parasites was then determined 24 h later. Stimulation with LPS enhanced the ability of RAW264.7 cells to kill T. brucei as the number of viable trypanosomes detected after co-cultivation in the presence of these cells was significantly reduced (Fig. 4A).
Trypanosome-derived ketoacids such as indolepyruvate can reduce the production of inflammatory mediators by LPS-stimulated macrophages including pro-inflammatory cytokines and the toxic free radical nitric oxide (NO) 18,19 . Production and excretion of these metabolites may enable the trypanosomes to evade clearance by activated MNP. Consistent with this hypothesis, high levels of nitrite (a stable NO reaction product) were detected in the wells of the LPS-stimulated RAW264.7 cells, and this was reduced to background levels in the presence of indolepyruvate (Fig. 4B). Furthermore, treatment with indolepyruvate limited the ability of LPS-stimulated RAW264.7 cells to kill T. brucei (Fig. 4C). These data imply that ID LPS-treatment may have induced the production of pro-inflammatory mediators such as NO in local CSF1R + MNP, and these may have enhanced their ability to clear the parasites from the infection site 20 . However, it is plausible that the production of trypanosome-derived metabolites such as indolepyruvate may offer some protection against this.
Discussion
Macrophages are considered to play an important role in protection during the early stages of African trypanosome infections in mammalian hosts by phagocytosing antibody-opsonized parasites, and producing cytotoxic and inflammatory mediators [21][22][23] . For example, the release of parasite-derived CpG DNA and soluble glycosylphosphatidyl inositol (GPI)-anchored variant surface glycoprotein stimulates the macrophages in a TLR-dependent manner to adopt a pro-inflammatory phenotype that is important for controlling the early stages of infection 21,[24][25][26] . Although MNP are abundant in the skin, little is known of their impact on ID trypanosome infections 21 . We therefore studied the effects of increased MNP abundance or activation in the skin on susceptibility to ID trypanosome infection. We show that the CSF1-mediated increase in the abundance of MNP at the site of ID trypanosome infection did not affect the infection kinetics. However, bacterial LPS-stimulation in the dermis 24 h before ID trypanosome infection significantly reduced the parasitaemia, with only a proportion of the mice developing a detectable parasitaemia throughout the 30 d observation period. The stimulation of macrophages with bacterial LPS rapidly induces a pro-inflammatory transcriptional response 27,28 , and in vitro assays showed that LPS-stimulated RAW264.7 cells had enhanced cytotoxicity towards T. brucei. Together, our data suggest that the stimulation of dermal MNP with LPS may enhance their ability to eliminate ID injected T. brucei parasites in the skin.
Further studies are now required to identify the mechanism responsible for the LPS-mediated increased protection against ID trypanosome infection. Although only a small number of Csf1r-EGFP + MNP were available for analysis, our data suggested that LPS stimulation did not affect the abundance of CSF1R + MNP in the dermis. Data from an ex vivo human skin organ culture model show that LPS-treatment induces the expression of high levels of pro-inflammatory cytokines, including interleukin (IL)-1β, IL-6, IL-8 and tumour necrosis factor-α 29 . Expression of these pro-inflammatory mediators may stimulate the activation of skin-resident innate immune cell populations, or aid the recruitment of neutrophils and other inflammatory MNP. Neutrophils, but not macrophages, express CD300f highly in murine skin, and interactions between ceramide and CD300f can suppress neutrophil accumulation and oedema in LPS-induced skin inflammation 30 . This implies that increased oedema and neutrophil accumulation are unlikely to be significant factors that contribute to the decreased susceptibility to ID trypanosome infection observed in LPS-treated mice. This is consistent with data suggesting that neutrophils do not contribute to the control of ID trypanosome infections 10 . Although neutrophils were rapidly recruited to the tsetse fly bite site after T. brucei infection 10,31 , these cells rarely engulfed the parasites and disease susceptibility was not affected by their absence 10 . Studies using mice with TLR-4 expression conditionally ablated in dermal MNP would help resolve whether the effects of LPS-treatment on ID T. brucei infection are specifically mediated through these cells.
Bacterial LPS is a potent stimulator of the L-arginine NO pathway 32 , and in MNP this leads to the synthesis of high levels of the cytotoxic free radical NO via expression of the enzyme inducible NO synthase (iNOS) 33,34 . The production of high levels of NO by inflammatory MNP is important for the clearance of many intracellular pathogens. The macrophage response during the early stages of trypanosome infections in mice and humans is also accompanied by elevated NO production 35,36 . In vitro studies show that macrophage-derived NO can similarly mediate toxicity to extracellular trypanosomes 37 . However, NO can bind to haemoglobin with high affinity, suggesting that in the extracellular environment of the mammalian host's bloodstream, its toxicity against trypanosomes is reduced 15 . Macrophage-derived NO may also trigger iron loss within trypanosomes 37 , whereas another study has shown that it may react with serum albumin to produce S-nitroso-albumin that also has antiparasitic effects 14 . Therefore, although NO may be inefficient at killing extracellular trypanosomes within the bloodstream, it could be highly effective within the microenvironment of the extravascular tissue spaces in the skin during the early stages following ID infection. In the current study, the enhanced ability of LPS-stimulated RAW264.7 cells to kill T. brucei in vitro also coincided with their ability to produce high levels of NO. Further in vivo studies, for example using iNOS-deficient mice or iNOS inhibitors, are clearly necessary to determine whether the effects of LPS-stimulation on susceptibility to ID trypanosome infection are mediated through enhanced NO production by MNP in the skin. The duration of the effect of LPS-treatment on the MNP at the injection site was not determined. Soon after injection into the dermis by tsetse flies the trypanosomes can be visualised migrating towards and within the lymphatics in the skin 1 and can be detected within the draining lymph nodes by 18 h 2 . In our study the mice were injected with a single dose of LPS into the dermis of the ear pinna. This would suggest that the effects of LPS stimulation were most likely restricted to the early stages of the infection within the skin dermis, and would have had little influence once the parasites had disseminated beyond this site.
While LPS-stimulated MNP may be effective in killing T. brucei, metabolites produced by the trypanosomes such as aromatic ketoacids could help to protect them against this activity 18,19 . For example, trypanosome-derived indolepyruvate inhibits the LPS-induced glycolytic shift in macrophages 18 . This leads to reduced expression of the transcription factor hypoxia-inducible factor-1α and decreased expression of pro-inflammatory mediators including NO 18,19 . Consistent with this potential immune evasion property, the treatment of LPS-stimulated RAW264.7 cells with 1 mM indolepyruvate reduced their ability to kill T. brucei and produce NO. However, since susceptibility to ID T. brucei infection was significantly reduced in the LPS-treated mice it is uncertain whether trypanosome-derived indolepyruvate had any influence during the early stages of the infection in the dermis. McGettrick and colleagues 18 reported that the levels of aromatic ketoacids in the serum of T. brucei infected rats were typically in the range of 0.2-0.5 mM when their parasitaemias reached ~ 10 8 parasites/mL. This would imply that the levels of indolepyruvate produced by the trypanosomes in the dermis soon after injection may be insufficient to have a significant impact on the responses of the local MNP. However, the concentrations and kinetics of such secreted molecules in extravascular microenvironments like the skin dermis would be an interesting aspect for further study.
Dermal CSF1-Fc treatment, in contrast, did not affect disease susceptibility despite increasing the abundance of MNP in the skin in the vicinity of the infection site. The stimulation of macrophages with CSF1 induces their polarisation towards an "alternatively-activated" or M2 phenotype, which includes the production of high levels of anti-inflammatory cytokines such as IL-10 and transforming growth factor-β 38,39 . These alternatively-activated macrophages can also dampen pro-inflammatory host responses towards certain pathogens 40 , and can secrete factors that mediate fibrosis or help repair the tissue damage caused by chronic infections with helminth parasites such as schistosomes 41,42 . In alternatively activated MNP, expression of the enzyme arginase competes with iNOS for the substrate arginine 43 , producing ornithine and urea instead of NO and citrulline. CSF1R-signalling in MNP similarly inhibits iNOS activity in MNP and skews the cells towards arginase expression 44 . A pro-inflammatory macrophage response is considered important for initial control during the early stages of a trypanosome infection 9 . Thus, it is plausible that CSF1-Fc treatment polarised the MNP in the dermis towards an alternatively-activated phenotype that may be less effective at clearing the ID-injected parasites from the skin. Despite this, disease susceptibility was not increased by CSF1-Fc treatment. As discussed above 1,2 , it is likely that there is only a short period between injection and the initial dissemination during which the MNP in the skin will have opportunity to significantly impact on disease pathogenesis.
Animal African trypanosome infections continue to inflict significant economic strain on sub-Saharan African livestock industries. Although a small number of trypanocidal drugs have been developed and are used in the field, drug-resistance can significantly impede their efficacy. We show that dermal LPS-stimulation significantly reduces susceptibility to ID T. brucei infection in mice. A thorough understanding of the cellular and molecular mechanisms that mediate this protection may lead to the development of novel strategies to help reduce the transmission of African trypanosomes.
Materials and methods
Mice. Six- to eight-week-old female C57BL/6J mice (Charles River, Harlow, England) and Csf1r-EGFP 'mac green' mice 12 maintained on a C57BL/6 background were used where indicated. Mice were housed in individually ventilated cages with food and water provided ad libitum. All in vivo procedures were carried out under the authority of the appropriate project and personal licenses, in accordance with the United Kingdom Home Office regulations and the Animals (Scientific Procedures) Act 1986. Approvals for all the in vivo studies were obtained from The Roslin Institute's and University of Edinburgh's ethics committees, and these ensured they were carried out in compliance with the ARRIVE guidelines and recommendations.
Trypanosomes. Pleomorphic wild-type T. b. brucei strain STIB247 were used throughout this study. These trypanosomes were originally isolated in 1971 from a hartebeest (Alcelaphus buselaphus) in the Serengeti National Park, Tanzania 45 . The trypanosomes were axenically cultivated in vitro as previously described 1 . Prior to their use in each in vivo experiment, approximately 1 × 10⁵ axenically cultivated trypanosomes were first injected by the intraperitoneal route into C57BL/6J mice to obtain a fresh source of in vivo-adapted parasites. Blood was collected from these mice at the first peak of the parasitaemia wave and used as a source of in vivo-adapted trypanosomes in subsequent experiments. In these experiments, groups of female C57BL/6J mice were injected ID with approximately 1 × 10⁵ T. b. brucei STIB247 in vivo-adapted parasites.
Blood parasitaemias were assessed at daily intervals using the rapid matching method 13 . Briefly, the number of trypanosomes was counted in wet blood films within 5 or 20 microscope fields depending on the parasite density. These values were then matched to the number of parasites/mL of blood based on data in an established reference table derived from haemocytometer counts of trypanosomes in infected blood samples. This assay has a minimum detection limit of approximately 4 × 10⁵ parasites/mL of blood.
Whole-mount fluorescence multi-photon microscopy. Mouse ears were excised and immobilised on an imaging platform using tissue adhesive glue (3M Vetbond, 3M, St. Paul, MN, USA) and suspended in PBS before being imaged on a Zeiss LSM7MP 2-photon microscope (Carl Zeiss Ltd., Cambridge, UK). The microscope was equipped with a 20X/1.0NA water-immersion objective lens (Carl Zeiss Ltd), a Coherent titanium-sapphire laser and an optical parametric oscillator (wavelength range 690 to 1400 nm). A laser output of 880 nm provided the excitation for the EGFP.
In some experiments indolepyruvate (indole-3-pyruvic acid; Sigma-Aldrich) dissolved in absolute ethanol was included at a final concentration of 1 mM. Triplicate wells were used for each treatment group. The plates were incubated for 24 h at 37 °C in a humidified 5% CO 2 /air atmosphere. The number of viable trypanosomes in each well was then counted using a haemocytometer. Experiments were repeated at least three times on separate days.
Nitrite detection. Nitrite levels in culture supernatants were determined by Griess assay using a commercial kit (Griess Reagent System; Promega, Southampton, UK). Briefly, 50 µL of culture supernatant was added to triplicate wells of a clear 96-well plate. Plates also included a standard curve generated from serial dilutions of a nitrite standard ranging from 100-1.56 µM. Sulphanilamide solution (50 µL) was added to each well and incubated at room temperature for 10 min, before addition of N-1-napthylethylenediamine dihydrochloride solution. Plates were shaken and incubated for a further 10 min before absorbance was read at 550 nm.
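To make the back-calculation from the Griess readings concrete, the sketch below shows how nitrite concentrations are typically derived from a blank-corrected linear standard curve. The absorbance values, blank and well assignments are hypothetical illustrations rather than data from this study.

```python
import numpy as np

# Hypothetical Griess-assay plate data: nitrite standards (uM) with their
# mean absorbance at 550 nm, a medium-only blank, and triplicate readings
# for two supernatant groups.
std_conc = np.array([1.56, 3.125, 6.25, 12.5, 25.0, 50.0, 100.0])   # uM nitrite
std_abs  = np.array([0.021, 0.038, 0.071, 0.139, 0.270, 0.535, 1.060])
blank_abs = 0.005

# Linear standard curve on blank-corrected absorbances: A = slope*C + intercept
slope, intercept = np.polyfit(std_conc, std_abs - blank_abs, 1)

def nitrite_uM(sample_abs):
    """Convert raw absorbance readings to nitrite concentration (uM)."""
    corrected = np.asarray(sample_abs) - blank_abs
    return (corrected - intercept) / slope

lps_wells     = [0.412, 0.398, 0.430]   # e.g. LPS-stimulated RAW264.7 supernatants
control_wells = [0.032, 0.029, 0.035]   # e.g. unstimulated controls

for name, wells in (("LPS-stimulated", lps_wells), ("unstimulated", control_wells)):
    conc = nitrite_uM(wells)
    print(f"{name}: {conc.mean():.1f} +/- {conc.std(ddof=1):.1f} uM nitrite")
```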
Statistics. Data are presented as mean ± SD. Unless indicated otherwise, statistical differences between groups were compared by Student's t-test using GraphPad Prism v.8.0 software (GraphPad Software Inc. San Diego, USA). P values ≤ 0.05 were accepted as significant. A linear mixed effects model in RStudio (rstudio.com) was used to statistically compare the quadratic (squared) and cubic curve effect of the infected mouse parasitaemia across the observation period. Mean peak parasitaemias were compared by Student's t-test. | 2021-05-12T06:16:53.399Z | 2021-05-10T00:00:00.000 | {
"year": 2021,
"sha1": "63a55e7e816d06d85cf3476a966202cfc6219073",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-89053-2.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "23ec6835e3249051f223ac3b26c3155739730abe",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254298948 | pes2o/s2orc | v3-fos-license | OCCUPATIONAL HEALTH AND SAFETY IN UTILIZATION OF LIGNOCELLULOSIC BIO-POZZOLANS IN THE CONSTRUCTION INDUSTRY
Lignocellulosic bio-pozzolans (LBPs) are either produced by open-air incineration or as industrial byproducts and can be used for soil stabilization and in making concrete for road construction. The preparation of the LBPs involves the collection of raw materials/wastes, drying, incineration, grinding, and sieving to a required size before utilization for construction works or products. The finer the LBPs, the more reactive they become, and the higher the potential binding properties. Coarse LBPs are less reactive and therefore exhibit lower binding properties and slower reactions. Very fine LBPs are essentially nanomaterials and can be an occupational health and safety hazard to workers handling these materials. In this review, the LBPs production process was briefly clarified, and potential hazards and risks were investigated. Silicon dioxide (SiO2), the main chemical compound present in LBPs, was noted to potentially pose a significant health hazard to humans. SiO2 exposure is reported to induce deoxyribonucleic acid (DNA) destruction. This study recommends an airborne exposure limit of 2 mg/m³ for LBPs, and the ACGIH exposure limit recommendation for SiO2.
INTRODUCTION
Occupational Safety and Health (OSHA) at work are vital elements for a decent job, as contended by the International Labour Organisation (ILO). Occupational accidents impose a considerable human, social and economic cost, which should be eliminated by ensuring that all workplaces are safe. According to the ILO's latest data on annual fatal casualties from a number of countries, the United States recorded the highest number of fatal casualties (Figure 1), while Cuba recorded the highest number of calendar days during which workers were temporarily incapacitated and unable to work due to occupational injuries (Figure 2) (ILO, 2022). Norway recorded the fewest, with zero days lost (2019), while the United States had 9 days lost (2014).
Significant fatal injuries and the subsequent loss of working days, have led most countries to develop and ratify occupational health and safety regulations. These include regulations for the safety and health of workers in the construction industry (Rubio et al., 2023). Occupational health and safety continuous improvement (OHSCI) in modern construction is important (Alkaissy et al., 2022;Araújo et al., 2022;Mavroulidis et al., 2022;Mohandes et al., 2022) This is especially important while undertaking civil engineering works, which involve the use of cement and other cementitious materials. Conventional cement production, transportation, and usage lead to significant production of greenhouse gases such as CO2, and now there are increased calls to reduce CO2 emissions. However, it is essential to note that efforts are being made to study and develop alternative cement with a low CO2 footprint (Abdalla et al., 2022;Caronge et al., 2022;Cormos, 2022;Kremer et al., 2022;Majchrzak-Kucęba et al., 2022;Nie et al., 2022;Santos et al., 2022;Syahida Adnan et al., 2022;Türkeli et al., 2022;Zhuang et al., 2022).
Other ways of reducing CO2 from cement manufacturing include research on a laboratory and pilot scale to study supplementary cementitious materials (SCMs) that can potentially replace the use of conventional cement in construction. The SCMs are proven environmentally friendly materials. Some of these SCMs are the lignocellulosic bio-pozzolans (LBPs) produced through open-air incineration. They include Rice Husk Ash (RHA) (Jittin and Bahurudeen, 2022;Mohamed et al., 2022) which can be used in various construction applications like soil stabilization for road works, production of concrete and associated concrete products like concrete blocks, bricks, and paving blocks.
The preparation of the LBPs involves collecting raw materials/wastes, drying, incineration, grinding, and sieving to a required size before utilization. Generally, the finer the LBPs, the more reactive they become, and the higher the pozzolanic properties. Coarse LBPs are less reactive and exhibit low and slow-reacting pozzolanic properties.
Very fine LBPs are essentially nanomaterials and hence pose a health risk to both the researchers and the user of these materials. Tetley (2007) observed that the material composition of LBPs might have negative health impacts on human organs like the lungs, brain, and liver.
It is further noted that LBPs contain substances listed as hazardous by the International Agency for Research on Cancer (IARC). There is no universal standard for monitoring the exposure of both researchers and workers to these dangerous substances. This paper briefly explains the LBPs production process, highlights the chemical composition and classification of LBPs, and identifies and discusses the main hazardous substances in LBPs. It also provides recommendations for both exposure limits and reduction of exposure to hazards contained in these materials. Data compiled in this paper will significantly contribute to safeguarding the health of researchers undertaking studies on the application of LBPs in construction works. It will also enhance the safety and health of workers that will handle and apply LBPs in actual construction works. Figure 3 shows the general LBPs production process, which requires minimal mechanization. Open-air LBPs production involves collecting raw materials which can be ordinary agricultural wastes. These are then sun-dried to ensure minimal moisture content for ease of burning. The material is burnt in the open air on a hard surface to enable the accumulation of ashes. The ashes are then left to cool. The cooling time depends on the amount of ash and environmental conditions like humidity, wind, and temperature. After cooling, the material is ground and then sieved to a recommended sieve size before being used as a pozzolanic material for making concrete or concrete products like blocks or soil stabilization.
COMPOSITION
The chemical composition of some of the LBPs from different plants and different research is detailed in Table 1 (Aluga and Kambole, 2020). The main chemical compound in LBPs is silicon dioxide (SiO2), followed by calcium oxide (CaO), as shown in Figure 4, derived from Table 1. Sodium Oxide forms about 1% of the LBPs.
SUBSTANCES
The main compounds/substances in the LBPs are those listed in Table 1. Table 2 details the occupational hazards and exposure limits for LBPs and suggests ways of reducing exposure to the seven (7) substances found in the LBPs. Accordingly, the LBPs affect humans when breathed in due to occupational exposure. The health and other effects identified on exposure to these compounds, and the workplace exposure limits, which differ from one compound to another, are detailed in Table 2.
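As a rough illustration of how the recommended limits in Table 2 could be applied in practice, the sketch below computes an 8-hour time-weighted average (TWA) exposure for a worker handling LBPs and compares it with the 2 mg/m³ airborne limit recommended in this review. The task durations and measured dust concentrations are hypothetical.

```python
# Hypothetical personal-sampling results for one 8-h shift handling LBPs:
# (task, duration in hours, measured airborne dust in mg/m3).
tasks = [
    ("raw material handling", 1.5, 0.8),
    ("grinding",              2.0, 3.5),
    ("sieving",               1.5, 2.6),
    ("mixing / placement",    3.0, 0.9),
]

LIMIT_MG_M3 = 2.0   # airborne exposure limit recommended for LBPs in this review
SHIFT_HOURS = 8.0

# 8-h time-weighted average: sum(concentration_i * time_i) / 8 h
twa = sum(conc * hours for _, hours, conc in tasks) / SHIFT_HOURS

print(f"8-h TWA exposure: {twa:.2f} mg/m3")
if twa <= LIMIT_MG_M3:
    print("Within the recommended limit")
else:
    print("Exceeds the limit - controls such as respirators or wet methods needed")
```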
Wearing respirators, putting on protective work clothing, regular bathing after using the substance, and detailed warning information in the work area are some methods suggested for limiting exposure to LBPs. | 2022-12-07T19:02:46.731Z | 2022-11-30T00:00:00.000 | {
"year": 2022,
"sha1": "c51cc8a9e35bb870230be0f6fc9f41fcec94fcd3",
"oa_license": "CCBY",
"oa_url": "https://www.ajhse.org/index.php/ajhse/article/download/260/83",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "e17ca7f0572875846cbe19730d3b17b1cd8421b0",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": []
} |
235614124 | pes2o/s2orc | v3-fos-license | Case Report: Long-Term Survival With Anlotinib in a Patient With Advanced Undifferentiated Large-Cell Lung Cancer and Rare Tonsillar Metastasis
Undifferentiated large-cell lung cancer is a rare type of non-small cell lung cancer (NSCLC) with a poor prognosis. It is insensitive to chemotherapy and easily develops drug resistance. Analysis of the Surveillance, Epidemiology, and End Results (SEER) database showed that patients with stage IV undifferentiated large-cell lung cancer had a median overall survival (OS) of only 4 months and that those who received chemotherapy had a median OS only 5 months longer than those who did not. For the first time, we report a case of advanced large-cell undifferentiated lung cancer with rare tonsil metastasis. The patient developed resistance after 3 months of platinum-based systemic chemotherapy and local treatment. Antiangiogenic therapy has been continuously progressing and has shown some efficacy in treating many malignant tumors, such as lung cancer. However, there are no relevant studies or case reports on antiangiogenic therapy in the treatment of undifferentiated large-cell lung cancer. Anlotinib, an orally delivered small-molecule antiangiogenic tyrosine kinase inhibitor (TKI), was administered to this patient after chemotherapy resistance occurred, and the outcome was assessed as continued stable disease (SD). As of the last follow-up evaluation, the progression-free survival (PFS) of the patient was 21.5 months, and the OS was 27.5 months. Retrospective immunohistochemical analysis showed that the patient was positive for one of the targets of anlotinib (PDGFR). In general, the findings in this case suggest that anlotinib may be an effective option for patients with large-cell undifferentiated lung cancer after chemotherapy resistance, and also suggest that PDGFR may be the target underlying this effect.
INTRODUCTION
Metastasis of tumors to the tonsil is extremely rare, with nearly 100 cases reported to date. The most common primary tumors are digestive tract tumors, renal cancer, and melanoma (1). Most metastatic tonsil tumors from lung cancer result from small cell lung cancer (2). No case of tonsil metastasis from undifferentiated large-cell lung cancer has been reported.
Antiangiogenic therapy has been recommended for the treatment of non-small cell lung cancer (NSCLC) (3), but there have been no clinical trials or case reports of antiangiogenic therapy for undifferentiated large-cell lung cancer. Anlotinib is an oral multitarget small-molecule tyrosine kinase inhibitor (TKI) (4). Its main targets include VEGFR and PDGFR. The Alter0303 study (5) showed that anlotinib as a third-line treatment can benefit advanced NSCLC patients with chemotherapy failure. However, patients with undifferentiated large-cell lung cancer were not included in this study; therefore, the efficacy of anlotinib in this group of patients is unknown. We present the first case of a patient with undifferentiated large-cell lung cancer with rare tonsillar metastases who achieved a long survival time with anlotinib treatment after chemotherapy failure. The following case is presented in accordance with the CARE reporting checklist.
CASE PRESENTATION
A 70-year-old man with a 20-year history of smoking presented to the otolaryngology department in January 2019 with a foreign body sensation in his throat. The patient occasionally had dry cough, without sputum, hemoptysis, chest tightness, dyspnea, and other symptoms. A physical examination showed that the trachea was centered, and there was no obvious abnormality in the respiratory sounds of either lung. No superficial lymph node enlargement was observed. The patient reported no family history of cancer. Laryngoscopy revealed a tonsil mass ( Figure 1A). Chest computed tomography (CT) (January 16, 2019) showed a mass of 78 * 51 mm in the upper left mediastinum ( Figure 1C). Cranial magnetic resonance imaging (MRI), abdominal CT, and whole body bone scan examinations showed no other metastatic lesions (Supplementary Figure 1). We performed puncture of the lung lesions and tonsillectomy for the tonsil lesions. Hematoxylin and eosin (HE) staining and Ki-67 immunohistochemistry of the tonsil lesion demonstrated malignancy. The TTF1 immunohistochemical results suggested that the lesion might originate from the lung ( Figure 1B). HE staining and Ki-67 immunohistochemistry of lung tissue indicated large-cell carcinoma. The results of assessment of neuroendocrine-related immunohistochemical indexes, including CGA and SYN, were negative, indicating that the tumor may not have neuroendocrine function ( Figure 1D). The patient was finally diagnosed with undifferentiated large-cell lung cancer of the left lung [T4NxM1b, American Joint Committee on Cancer (AJCC) 8 th edition]. Next generation sequencing (NGS, Geneseeq Technology Inc) found mutations in TP53, PIK3CA, and CD74 and no driver gene mutations that could be used for targeted therapy ( Table 1).
As shown in Figure 2, the patient initially received platinum-based systemic chemotherapy and local treatment, and the response was evaluated as stable disease (SD). The patient refused further chemotherapy regimens. Immunotherapy was refused for financial reasons. Complying with the patient's wishes, we administered oral antiangiogenic therapy with anlotinib (12 mg/d). Each cycle was defined as 2 weeks on treatment followed by 1 week off treatment. The patient was followed up at an outpatient clinic. The patient developed tolerable mild fatigue, diarrhea, and a small rash after initial use of anlotinib. There was also no evidence of new metastasis or progression beyond the primary lesion during follow-up. A timeline of the relevant information is shown in Table 2.
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee(s) and with the Helsinki Declaration (as revised in 2013). Written informed consent was obtained from the patient for publication of this manuscript and any accompanying images.
DISCUSSION
The main pathological types of primary tonsil tumors are squamous cell carcinoma or lymphoma, and metastatic tumors of the tonsil are very rare (6). Metastatic tonsil tumors have been reported to account for only 0.8% of all tonsil tumors (7). Nearly 100 cases of metastatic malignant tumors of the tonsil have been reported, and the most common sources are digestive tract tumors, kidney cancer, and melanoma. In lung cancer, the primary pathological type of tonsil metastasis is small cell lung cancer, and this case is the first report of tonsil metastasis from undifferentiated large-cell lung cancer.
Undifferentiated large-cell lung cancer is a type of NSCLC, but its incidence is significantly lower than that of common lung adenocarcinoma, lung squamous cell carcinoma, and other subtypes (8). The prognosis of patients with advanced undifferentiated large-cell lung cancer is poor (9). We obtained survival data from the Surveillance, Epidemiology, and End Results (SEER) database for 1,129 patients diagnosed with stage IV (AJCC 7 th edition) large-cell undifferentiated lung cancer after 2010 ( Figure 3A). The results showed that the median OS was only 4 months. The median survival of those who received chemotherapy was only 5 months longer than that of those who did not or whose treatment status was unknown (7 vs 2 months, P < 0.0001). This report describes a case of stage IV undifferentiated large-cell lung cancer with rare tonsillar metastases for which genetic testing did not provide a therapeutic target. Platinum-based chemotherapy was selected as the first-line treatment. The PFS with chemotherapy was 3 months, which was similar to the database results.
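The SEER comparison described above is essentially a Kaplan-Meier analysis of overall survival stratified by chemotherapy status. The sketch below shows how such a comparison could be reproduced with the lifelines package; the records, column names, and the use of a log-rank test are illustrative assumptions, not the actual SEER extract or the authors' exact workflow.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical SEER-style extract: follow-up (months), death indicator,
# and whether chemotherapy was recorded.
df = pd.DataFrame({
    "months": [2, 4, 7, 1, 12, 3, 9, 2, 6, 15, 1, 5],
    "died":   [1, 1, 1, 1,  0, 1, 1, 1, 1,  0, 1, 1],
    "chemo":  [0, 0, 1, 0,  1, 0, 1, 0, 1,  1, 0, 1],
})

kmf = KaplanMeierFitter()
for flag, group in df.groupby("chemo"):
    name = "chemotherapy" if flag else "no/unknown chemotherapy"
    kmf.fit(group["months"], group["died"], label=name)
    print(f"{name}: median OS = {kmf.median_survival_time_} months")

# Log-rank comparison of the two survival curves.
chemo, no_chemo = df[df.chemo == 1], df[df.chemo == 0]
res = logrank_test(chemo["months"], no_chemo["months"],
                   event_observed_A=chemo["died"], event_observed_B=no_chemo["died"])
print(f"log-rank p = {res.p_value:.4f}")
```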
The theory of cutting off the blood supply of tumors to combat them was first proposed in 1971. The first proangiogenesis factor, VEGF, was isolated in 1989. The main recognized tumor angiogenesis signaling pathways are the VEGF/VEGFR and PDGF/PDGFR pathways (10). At present, antiangiogenic therapy with bevacizumab is recommended for the treatment of lung cancer (3). However, there have been no clinical trials or case reports of antiangiogenic therapy for undifferentiated large-cell lung cancer. Anlotinib is an oral small-molecule multitarget antiangiogenic TKI, and its main targets include VEGFR, PDGFR, and other major angiogenesis-related factors. The Alter0303 study showed that oral anlotinib monotherapy as a third-line treatment for advanced NSCLC prolonged PFS by 4 months and OS by 3.3 months (5). However, patients with large-cell undifferentiated carcinoma were not included in this study; therefore, the efficacy of anlotinib in patients with this type of lung cancer is unclear.
In this case, the tumor size rapidly increased after chemotherapy resistance occurred. We administered the oral antiangiogenic agent anlotinib. Early imaging during anlotinib treatment showed that the tumor remained stable and began to shrink slowly with prolonged treatment. At the last follow-up, the PFS and OS with anlotinib treatment were 21.5 and 27.5 months, respectively. There were no significant adverse reactions with this regimen. We retrospectively examined the expression of angiogenesis-related genes and anlotinib targets in this case by immunohistochemistry ( Figure 3B). VEGF expression was positive, indicating that tumor angiogenesis activity was very high. Interestingly, VEGFR expression was negative. The expression of PDGFR, another important target of anlotinib, was positive, indicating that the PDGFR signaling pathway may be important for the function of anlotinib in the treatment of this patient. PDGFR is a tyrosine kinase receptor with two structurally related forms. Binding of PDGF and PDGFR activates downstream pathways that mediate angiogenesis (11). Studies have shown that PDGFR can be significantly overexpressed in lung tumors. Animal experiments have also shown that PDGFR inhibitors can significantly inhibit angiogenesis in tumors, while normal tissue angiogenesis is not affected (12). Therefore, the angiogenesis of PDGFR-positive vascular endothelial cells alone may not be significantly affected by PDGFR inhibitors. PDGFR immunohistochemistry in this case also indicated PDGFR positivity of the tumor. These results suggest that patients with PDGFR-positive tumors may receive additional benefit from PDGFR inhibitors such as anlotinib. This case suggests that for large-cell undifferentiated lung cancer patients who exhibit chemotherapy resistance, antiangiogenic therapy with anlotinib can be attempted and may have a good effect. PDGFR may be the target underlying this effect.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
TX, CW, XZ, and BL collected the clinical information, diagnostic information, therapeutic information, and images of the patients. TX wrote the manuscript. ZW identified the case and submitted the manuscript. TX and XZ revised the manuscript. CW and ZW proofread the manuscript. TX, CW, XZ, and BL contributed equally to this work. All authors contributed to the article and approved the submitted version. | 2021-06-24T13:24:29.270Z | 2021-06-24T00:00:00.000 | {
"year": 2021,
"sha1": "712b3461c3eb50f557cffca91a333856fbc8568d",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2021.680818/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "712b3461c3eb50f557cffca91a333856fbc8568d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252574555 | pes2o/s2orc | v3-fos-license | Morphometric study of acetabular depth and coverage and their clinical importance
Objectives: To measure acetabular parameters and to compare parameters of acetabulum between men and women, and right and left sides. Methods: The study was conducted in PNS Shifa Hospital, Karachi from January to June 2021. Anteroposterior radiographs of both hips of 70 adults aged 20-70 years were included. The study consisted of 35 men and an equal number of women. We measured acetabular depth, femoral head coverage, acetabular inclination and acetabular depth and width. The acetabular depth was measured using Wiberg’s central edge (CE) angle and Sharp angle techniques. Femoral head coverage was determined using measurements between medial edge of hip joint space, lateral edge of acetabulum and femoral head. To determine acetabular inclination, Tönnis angle was measured. Acetabular depth & width ratio (ADR) was evaluated by dividing acetabular width by depth, and multiplying by 100. Results: There was significant increase in Sharp angle and Tonnis angle in females on the left side as compared to males. Wiberg’s central edge angle and acetabular width to depth ratio showed insignificant increase in males as compared to females, whereas values of femoral head coverage were insignificantly more in women. Conclusion: The acetabular parameters were insignificantly different in gender and between right and left sides, although mean values were within normal range. The larger ADR ratio in men was most likely due to increased body weight in them as compared to women.
INTRODUCTION
The acetabulum is a cup-shaped vault with a diameter of 52 mm. 1 It is formed from the ilium, ischium and pubis parts of the hip bone. 2 The pubis contributes the anterior one-fifth, the ilium forms the posterosuperior two-fifths and the ischium contributes the posteroinferior two-fifths of the acetabulum. It articulates with the head of the femur to form the hip joint. The acetabulum has a peripheral lunate articular surface and a central non-articular acetabular fossa on the floor. 3 The lunate articular surface encircles the acetabular fossa except at the six o'clock position, where the circle is completed by the transverse acetabular ligament. The ligament forms the acetabular foramen. The acetabular fossa is filled with an extrasynovial fat pad and synovial membrane with a thickness of 2.0-4.4 mm. 4 Different factors, such as environment and ethnicity, bring changes in the morphology of the acetabulum. The acetabular shape can be modified during intrauterine life due to interruption of its development, or after birth due to harm to the cartilaginous lining of the lunate articular surface. Dysplasia of the acetabulum is the most frequent developmental anomaly of the hip bone, resulting from maldevelopment of the acetabulum, so that the roof of the acetabulum remains superficial, shallow and vertically oriented. This leads to a reduced surface area for weight bearing, which thus bears increased force per unit area during walking and may result in early degeneration. 5 Progressive acetabular changes have been used by anatomists and forensic scientists to estimate age. With change in age, the fossa and lunate surface of the acetabulum also vary. The accuracy of age detection was seen to be 89% for 10-year and 67% for 5-year interval changes. Parameters used for the age estimation were the acetabular groove, rim shape and porosity, apex cavity, and acetabular fossa outer edge activity and porosity. 6 The pelvic radiograph is the gold standard for initial assessment of patients with hip pain. Hip pathologies such as femoroacetabular impingement and hip dysplasia become symptomatic long before causing degenerative changes. In such cases hip radiographs are used for early detection. They also provide a guideline in hip preservation surgeries. 7 Acetabular depth measurement has a vital role in determining acetabular pathologies. 7 Deep acetabula are more common in women (72%) than men (42%). Increased acetabular depth is associated with unstable slipped capital femoral epiphysis. 8 A shallow acetabulum with insufficient femoral head coverage is associated with hip dysplasia. The dysplasia is caused by reduced contact between the femoral head and acetabulum with a resultant increase in joint load forces on the adjacent surfaces. This pressure leads to degeneration of the joint and osteoarthritis. 9 Radiographic measurements of acetabular depth are used as prognostic indicators in hip arthroscopy patients. 10 There is a scarcity of data regarding acetabular parameters in our population. In order to bridge this gap, this study was planned to measure different parameters of the acetabulum, as they are essential for hip joint replacement surgeries.
METHODS
This cross-sectional study was conducted at PNS Shifa Hospital, Karachi from January to June 2021 after approval from Ethical Review Committee of Bahria University Medical and Dental College (BUMDC) (ERC-66/2021). Seventy adults of both genders 35 men and 35 women from 20-70 years with completely fused acetabula were enrolled in the study after written informed consent, using probability non-convenience sampling technique. Patients with hip dysplasia, osteoarthritis, rheumatoid arthritis, hip surgery, pregnancy and trauma were excluded from the study. Sample size was calculated using OpenEpi version 3.
The participants were recruited from nonorthopedic patients presenting to the radiology department for X-ray of the pelvis and abdomen. Anteroposterior radiographs of both hips were used for measurements. The X-rays were done using a Toshiba Rotanode™ system. All measurements were ascertained by the researcher. To eliminate bias, all findings were confirmed by a consultant radiologist who was blinded to the study. Images were analyzed using the Synapse digital image management system (Fujifilm Medical Systems, Tokyo, Japan). The observations were recorded in the evaluation proforma. Parameters: 1. Acetabular Depth: The acetabular depth was measured using Wiberg's central-edge (CE) angle and Sharp angle/lateral central edge angle (LCEA) techniques. The central-edge (CE) angle of Wiberg is a measure of the depth of the acetabulum and of the cover of the femoral head. Using Wiberg's technique, the center of the femoral head was established by drawing a circle around the inferior and medial margins. The central-edge angle was measured between two lines drawn from the center of the circle. The first line passed vertically along the longitudinal axis of the pelvis and the second line passed to the lateral acetabular rim (Fig. 1a). Sharp developed a method for quantifying acetabular development known as the lateral central edge angle or Sharp angle. It was based on the theory that underdeveloped or pathological/dysplastic hips may concentrate increased force on the weight-bearing acetabular dome, predisposing this location to early degeneration of articular cartilage leading to premature development of osteoarthritis. An angle between 25° and 40° is taken as normal, with 20°-25° being borderline. It is a radiographic measurement which shows coverage of the superolateral femoral head by the acetabulum. It is measured between a line drawn perpendicular to the transverse axis through the center of the head of the femur and another line connecting the lateral edge of the acetabulum to the center of the femoral head 10,11 (Fig. 1b).
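For readers digitizing landmarks rather than using the viewer's angle tool, the CE angle described above reduces to simple vector geometry: the angle between the vertical through the femoral head centre and the line from that centre to the lateral acetabular rim. The sketch below illustrates this with hypothetical pixel coordinates; it is not the measurement procedure used in the study.

```python
import math

def wiberg_ce_angle(head_center, lateral_rim):
    """CE angle (degrees): angle between the vertical line through the femoral
    head centre and the line joining that centre to the lateral acetabular rim.
    Points are (x, y) image coordinates with y increasing downwards."""
    dx = abs(lateral_rim[0] - head_center[0])   # horizontal offset of the rim
    dy = head_center[1] - lateral_rim[1]        # rim lies above the head centre
    return math.degrees(math.atan2(dx, dy))

# Hypothetical digitized landmarks (pixels) from one AP hip radiograph.
head_center = (512.0, 640.0)
lateral_rim = (560.0, 565.0)

print(f"CE angle ~ {wiberg_ce_angle(head_center, lateral_rim):.1f} degrees")
# A result in roughly the 25-40 degree band would be read as normal coverage.
```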
Acetabular Inclination:
To determine the acetabular inclination, the Tönnis angle was used. It was measured between a horizontal line from teardrop to teardrop and another line tangential to the sourcil (weight-bearing dome of the acetabulum) 12 (Fig.1c).
Femoral Head Coverage:
The femoral head coverage (FHC) was measured using three vertical lines passing through the medial part of the hip joint space (line I), lateral edge of the acetabulum (line II) and lateral outline of femoral head (line III). The FHC was determined by the formula 13 : (Distance between lines I and II) ÷ (Distance between lines I and III) × 100 (Fig.2a).
Acetabular Depth to Width Ratio:
This ratio was calculated by determining acetabular width and depth. The acetabular width was determined between the inferior margin of its teardrop and its lateral margin. The acetabular depth was determined by a perpendicular line extending from the midpoint of the acetabular width to the center of the acetabular dome (Fig.2b). The acetabular depth to width ratio was calculated by the formula 14 : acetabular depth ÷ acetabular width × 100.
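The femoral head coverage and acetabular depth-to-width ratio are plain arithmetic once the reference lines and distances have been marked. The sketch below implements the two formulas exactly as stated in the text above (FHC from the three vertical lines, ADR as depth ÷ width × 100); the input distances are hypothetical and are not intended to reproduce the study's mean values.

```python
def femoral_head_coverage(x_medial_joint, x_lateral_acetabulum, x_lateral_head):
    """FHC (%) from the three vertical reference lines:
    line I   - medial edge of the hip joint space,
    line II  - lateral edge of the acetabulum,
    line III - lateral outline of the femoral head."""
    return (x_lateral_acetabulum - x_medial_joint) / (x_lateral_head - x_medial_joint) * 100

def acetabular_depth_width_ratio(depth_mm, width_mm):
    """ADR as given in the text: acetabular depth / acetabular width * 100."""
    return depth_mm / width_mm * 100

# Hypothetical distances from a single AP radiograph (mm).
print(f"FHC = {femoral_head_coverage(60.0, 102.0, 110.0):.1f} %")   # -> 84.0 %
print(f"ADR = {acetabular_depth_width_ratio(14.0, 47.0):.1f}")      # -> 29.8
```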
Data were analyzed using SPSS version 23. The independent-samples t-test was applied. A p-value ≤ 0.05 was considered statistically significant.
RESULTS
There were 70 participants in the current study, 35 males and an equal number of females. Thus 70 Anteroposterior pelvic radiographs were examined, and including both hip joints, a total of 140 hip joints were examined. The acetabular depth was measured using Wiberg's central-edge angle and lateral center edge angle (LCEA) or Sharp angle. The mean value of Wiberg's central edge angle (CEA) was 33.19° in the total study population, ranging from 21° to 44° in different subjects (Table-I). Comparison of right and left sides among the genders showed an insignificant increase on both sides in males as compared to females (Table-II).
This study showed that the lateral center edge angle (LCEA) or Sharp angle had a mean value of 37.1° in the total study population (Table-I). Although the mean was slightly higher in females, the increase was insignificant (37.51° in females and 36.20° in males) (Table-II). Acetabular inclination was determined by measuring the Tönnis angle. The mean Tönnis angle in the study participants was 4.46° (range 0° to 14.5°) (Table-I). In males the angles were greater on the right than the left side (4°; 3.97°). In females the left side had more inclination than the right (4.48°; 5.42°). Between the two groups, females had higher acetabular inclination (Table-II). The acetabular depth/width ratio (ADR) was 308.13 mm in our study (Table-I). The left side was deeper than the right in both genders (males: 308 mm Rt, 310 mm Lt; females: 299 mm Rt, 313 mm Lt). The differences between the genders, however, were not significant (Table-II).
Femoral head coverage was 83.94 mm in the total study population ( Table-I). Findings of 83.47mm on the right side and 83.43mm on the left side were observed in males. In females it was 84.86mm and 83.53mm on the right and left sides respectively. Variations in the genders were not significant ( Table-II).
DISCUSSION
Hip joint is not only indispensable to transmitting the weight of the human body to the femur, but also essential for movement. Structural anomalies in the acetabulum and its association with the head of femur can result in damage to the labrum as well as articular cartilage. This in turn can lead to degenerative diseases of the hip. 15 Acetabular parameters are important for the stability of the hip joint. Studies have been carried out to determine these parameters to assist Orthopedic Surgeons and Biomedical Engineers in development of suitable hip joint prostheses. 16 Also these measurements are different in different individuals and in various population groups. 17 Abnormalities develop in biomechanics of hip due to the anomalies in acetabular parameters such as, depth, width and diameter resulting in dysplasia of hip, which in turn leads to instability of the hip joint, causing enhanced degeneration of the articular cartilage and ultimately ending in osteoarthritis. About 25.5% cases of osteoarthritis of hip joint result from acetabular dysplasia. 11 Thus, measurements of acetabulum, such as depth of acetabulum, acetabular inclination and femoral head coverage are essential factors in restoration of hip mechanics back to normal in order to gain a good range of movement. 19 So, the present study was designed to get a better insight of the acetabular parameters in our population.
In the present study, the mean value of central edge angle (CE angle) was 33.19°, with lowest and highest values being 21° and 44° respectively. In males, the mean value of CE angle was more on the left side as compared to the right, which was opposite in the females. This was probably because more people use their left leg for weight bearing irrespective whether they are righthanded or left handed. 19 Concurrent results were found by Mohammad et al. when they examined pairs of hip bones and femur obtained from cadavers in the anatomy department of their institution. 20 They also assumed that this was due to more weight bearing function performed by left hip joint during limb movements irrespective of limb dominance. However, the mean value of CE angle showed an insignificant decrease on the right side and insignificant increase on the left side in males as compared to the females. Another study done on Pakistani population showed results contradictory to our study as there was significant increase in central edge angle in males as compared to females (36.3±6.4° in males, 34.6±6.8° in females), whereas data of present study revealed equivocal values in males and females. 21 Another study in which hip joints of 103 subjects (57 males and 46 females) were evaluated on CT scan of abdomen and pelvis done for various reasons other than hip related problems, showed similar results, with insignificant increase in CE angle among males as compared to females. This study showed a slightly less value of mean CE angle 14 [31.1° (29.7°-32.4°)] as compared to our study (33.19°), most likely because it was done on Japanese population who have lesser height and weight as compared to our population.
Lateral center edge angle of Wiberg (LCEA) or Sharp angle is a method to assess morphology of acetabulum. 22 Value less than 20° signifies dysplasia and more than 40° shows femoroacetabular impingement. In the present study, the mean value of Sharp angle was 37.11°, although it ranged from 27.5° to 46.5°. Results of this study are in accordance with another study 15 who found a mean value to LCEA to be 35.9° although cases ranged from 20° and 50°.
When compared among gender, our results found an increase in Sharp angle among females as compared to males, which was highly significant on the left side and insignificant on the right side. Another study investigated the Sharp angle in 91 cases of hip pain, younger than 60 years of age, including 61 women and 30 men by performing MRI. Among these, mean value of sharp angle was 35.9° in women, which was slightly lower as compared to our study, probably because these cases were suffering from different diseases as compared to ours which mainly comprised of healthy controls, whereas value was 36.4° in men which was same as in our study. This study also showed higher values of sharp angle among women 10 as compared to men, analogous to our study.
The acetabular inclination is used to evaluate the extent of femoral head coverage by the acetabulum. Roof or dome of acetabulum is the part which bears weight and anatomic reconstruction of the acetabular dome with reduction of head of femur is the eventual objective of operative as well as nonoperative treatments. 24 Upper limit of Tonnis angle is 10°. A Tönnis angle more than 10° shows dysplasia of the acetabulum. 25 In the present study, the mean acetabular inclination was 4.46°, although it ranged from 0° to 14.5°. Similar results were determined by Zacharia & Fawas (2017) 12 who observed a mean acetabular inclination of 4.1° in 800 AP pelvic radiographs in a tertiary care hospital of patients aged between 20-60 years, although angle varied from 1° to 9°. The inclination is inversely proportional to the femoral head coverage, higher the Tonnis angle, lesser is the coverage for the head of femur as in developmental dysplasia of hip. Decreased angle manifests over coverage of the head of femur and leads to femoroacetabular impingement. 15 Our results also showed significant increase in Tonnis angle on the left side in females as compared to males whereas insignificant increase in Tonnis angle in females on the right side was observed, and the mean ranged from 3.97° to 5.42°. Another study 10 demonstrated much higher values of Tonnis angle, conflicting to our results, which were 8±1.45°in women and 8.6±0.32° in men. The value of Tonnis angle was insignificantly higher in men in comparison to women, which was also in contradiction to our results.
Our study showed the mean value of femoral head coverage (FHC) as 83.94mm, although it ranged from 76.31 to 92.67. Results also showed an insignificant increase in FHC on the right side, whereas an insignificant decrease on the left side in females when compared with males. Mimura et al (2017) 14 showed a slightly lower value (81.9), although they also had insignificant increase in value of femoral head coverage in females as compared to males in their CT scans of pelvis.
Our study showed insignificant increase in the acetabular depth to width ratio among males as compared to females with a mean value of 308.13mm (385.0-198.55). This was most likely because males have a significantly increased height as compared to the females, therefore this difference was also revealed in the acetabular dimensions among them. 24 Mimura et al 14 demonstrated similar findings in CT scans of hip joints, with almost the same value of mean acetabular depth to width ratio. However, their study revealed an insignificant increase in this value in females as compared to the males. All the values were within normal ranges, as they included normal subjects who did not suffer from hip pain or any other problem related to hip joint. A value of acetabular depth to width ratio less than 250 is most likely a representation of dysplasia of hip. 8 The value of acetabular depth to width ratio had almost similar values on the right and left side in the current study. Similar findings were observed by another study who found that values of acetabular depth were close to each other in sacra of both sides. 13
Limitations:
This was a single center study. The sample size was small because of time constraint in the study.
CONCLUSION
The observations of the present study have shown that acetabular parameters vary in males and females and between the right and left sides. The value of acetabular depth to width ratio and Wiberg's central edge angle was more in men as compared to women, whereas values of Sharp angle, acetabular inclination and femoral head coverage were insignificantly higher in females than males. The mean values of these parameters were within normal range as these measurements were obtained from healthy individuals, in which all hip pathologies were excluded. The prostheses used in our population are based on measurements of the bones in western population which differ from those of ours. Thus, these measurements can form a base line for the Biomedical engineers to construct well fitted prostheses. | 2022-09-29T15:14:32.378Z | 2022-09-27T00:00:00.000 | {
"year": 2022,
"sha1": "4b97643df5d39a676514c4b0876627f34008a9df",
"oa_license": "CCBY",
"oa_url": "https://pjms.org.pk/index.php/pjms/article/download/5915/1524",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0b3bd062c8a40df61c9ea2ce6e9c8adc20c2bd7f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265311326 | pes2o/s2orc | v3-fos-license | Cytokine Profile in Development of Glioblastoma in Relation to Healthy Individuals
Cytokines play an essential role in the control of tumor cell development and multiplication. However, the available literature provides ambiguous data on the involvement of these proteins in the formation and progression of glioblastoma (GBM). This study was designed to evaluate the inflammatory profile and to investigate its potential for the identification of molecular signatures specific to GBM. Fifty patients aged 66.0 ± 10.56 years with newly diagnosed high-grade gliomas and 40 healthy individuals aged 71.7 ± 4.9 years were included in the study. White blood cells were found to fall within the reference ranges and were significantly higher in GBM than in healthy controls. Among immune cells, neutrophils showed the greatest changes, resulting in an elevated neutrophil-to-lymphocyte ratio (NLR). The neutrophil count inversely correlated with survival time, expressed by Spearman's coefficient rs = −0.359 (p = 0.010). The optimal threshold values corresponded to 2.630 × 10³/µL for NLR (the area under the ROC curve AUC = 0.831, specificity 90%, sensitivity 76%, the relative risk RR = 7.875, the confidence intervals 95%CI 3.333–20.148). The most considerable changes were recorded in the pro-inflammatory cytokines interleukin (IL)-1β, IL-6, and IL-8, which were approx. 1.5–2-fold higher, whereas tumor necrosis factor α (TNFα) and high mobility group B1 (HMGB1) were lower in GBM than in healthy controls (p < 0.001). The results of the ROC, AUC, and RR analysis of IL-1β, IL-6, IL-8, and IL-10 indicate their high diagnostic potential for clinical prognosis. The highest average RR was observed for IL-6 (RR = 2.923) and IL-8 (RR = 3.151), which means there is an approx. three-fold higher probability of GBM development after exceeding the cut-off values of 19.83 pg/mL for IL-6 and 10.86 pg/mL for IL-8. The high values of AUC obtained for the models NLR + IL-1β (AUC = 0.907), NLR + IL-6 (AUC = 0.908), NLR + IL-8 (AUC = 0.896), and NLR + IL-10 (AUC = 0.887) prove excellent discrimination of GBM patients from healthy individuals and may represent GBM-specific molecular signatures.
Introduction
Inflammation predisposes to the development of cancer and supports all stages of tumorigenesis [1]. Tumor cells, as well as surrounding stromal and immune cells, are engaged in well-orchestrated mutual interactions to form an inflammatory tumor microenvironment. Neutrophils are the first cells to infiltrate under the direction of inflammatory mediators released at the inflammatory setting [2]. As the process of inflammation proceeds, lymphocytes and macrophages are stimulated and recruited to the site of inflammation. The immune cells within the tumor microenvironment are highly plastic, and they continuously change their secretory, phenotypic, and functional characteristics [3]. Strong perturbation of tissue homeostasis leads to the recruitment of immune cells from the bone marrow and secondary lymphoid organs, which results in an increase in counts of neutrophils, monocytes, and monocyte-derived cells in the peripheral blood [3]. Recently, some studies have reported combinations of immune cells, such as the neutrophil-to-lymphocyte ratio (NLR) and the lymphocyte-to-monocyte ratio (LMR), as effective prognostic indicators in patients with a variety of cancers [4][5][6][7][8][9][10]. When compared to traditional molecular prognostic markers, such as IDH1 mutation [11], the NLR can appropriately assess the prognosis in glioblastoma (GBM) patients in order to guide therapeutic decisions and patient management. A poorer prognosis was observed in the patients with NLR ≥ 4.56 × 10³/µL when compared with the patients with NLR < 4.56 × 10³/µL [12]. Moreover, our last meta-analysis indicated the high diagnostic utility of peripheral immune-inflammatory markers for a poor prognosis in patients with GBM [13].
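The composite indices mentioned above are straightforward ratios of a routine differential blood count. The sketch below derives NLR and LMR (together with PLR and the systemic immune-inflammation index, SII, used later in the text) from hypothetical counts and flags the 4.56 cut-off reported in reference [12]; the definitions follow the usual conventions and the example values are not patient data from this study.

```python
from dataclasses import dataclass

@dataclass
class DifferentialCount:
    """Peripheral counts in 10^3 cells/uL (platelets in 10^3/uL as well)."""
    neutrophils: float
    lymphocytes: float
    monocytes: float
    platelets: float

    @property
    def nlr(self) -> float:    # neutrophil-to-lymphocyte ratio
        return self.neutrophils / self.lymphocytes

    @property
    def lmr(self) -> float:    # lymphocyte-to-monocyte ratio
        return self.lymphocytes / self.monocytes

    @property
    def plr(self) -> float:    # platelet-to-lymphocyte ratio
        return self.platelets / self.lymphocytes

    @property
    def sii(self) -> float:    # systemic immune-inflammation index
        return self.platelets * self.neutrophils / self.lymphocytes

# Hypothetical pre-operative differential count for a glioma patient.
cbc = DifferentialCount(neutrophils=6.8, lymphocytes=1.4, monocytes=0.55, platelets=260)
print(f"NLR={cbc.nlr:.2f}  LMR={cbc.lmr:.2f}  PLR={cbc.plr:.0f}  SII={cbc.sii:.0f}")
print("NLR above the 4.56 cut-off" if cbc.nlr >= 4.56 else "NLR below the 4.56 cut-off")
```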
GBM is the most aggressive and also the most common brain tumor in adults, accounting for 80% of primary malignant brain cancers [14].Standard treatment using maximal safe resection and chemoradiotherapy results in a median survival time of 14.6 months [15].There is no standard second-line treatment and none that extends overall survival.The infiltrative pattern of growth and the inherent resistance of GBM to chemoradiotherapy leads to the disease recurring within 6 to 9 months of treatment.Survival of patients with recurrent GBM is generally less than 6 months [16,17].Approximately 90% of GBM primarily occurs in older patients, whereas in younger patients, it usually develops from lower-grade glioma.Our retrospective study showed that the survival probability decreases considerably faster in older (63-90 years) than in young patients (23-63 years) with high-grade glioma [12].As inflammation contributes to cancer initiation and progression, it can be hypothesized that age-related chronic low-grade inflammation contributes to the increase in cancer incidence and/or mortality observed during the aging process [3].The inflammaging is revealed by the release of a large number of inflammatory mediators, such as pro-inflammatory cytokines IL-1β, IL-2, IL-6, IL-8, IL-12, IL-15, IL-18, IL-22, IL-23, tumor necrosis factor α (TNFα) and interferon γ (IFNγ), and anti-inflammatory cytokines IL-1Ra, IL-4, IL-10, and IL-13 [18,19].The causal relationships between inflammaging and cancer are not straightforward, as these processes carry contrary features.Cellular senescence is an established tumor-suppressive mechanism that counteracts the proliferation of premalignant cells.However, much evidence has shown that senescent cells can also promote tumor progression in addition to other age-related pathologies via the senescence-associated secretory phenotype (SASP) [20].Indeed, over the last few years, SASP has become one of the most important features of senescence and a key factor in our understanding of its complex and ambivalent relations with aging and cancer [21].Several SASP components, including IL-1β, IL-6, IL-8, and TNFα, are all potential targets for pharmacological inhibition; some of them already exist in the clinical practice and may be exploited to mitigate the deleterious effects of SASP in the relevant clinical context [22].Interestingly, these cytokines are the key inflammatory mediators that trigger the inflammatory cycle in GBM and also promote carcinogenesis by avoiding growth suppression and apoptosis, inducing angiogenesis and metastasis, and maintaining cancer cell stemness [2, [23][24][25].The concentration of cytokines in GBM cyst fluid highly correlates with white blood cell counts, suggesting an important interaction between tumor cells and the peripheral inflammatory status [24].Cytokines released from the GBM or brain metastatic tissues and from immune cells in the tumor microenvironment can be identified in the circulation [26].However, so far, two studies of peripheral cytokines have been performed in patients with glioblastoma, yet the obtained results have been inconclusive [26,27].Cytokines are required to coordinate cancer-related processes and, therefore, represent an important subject of study for understanding GMB development and the potential identification of anti-inflammatory therapeutic targets.An immune-inflammatory profile should be characterized to be applied in clinical practice, and the profile's potential to identify GBM-specific molecular signatures should be 
investigated as a matter of urgency, not only to predict the outcomes in patients but also to recognize potential targets for future therapies.
The most considerable changes were recorded for the pro-inflammatory cytokines IL-1β, IL-6, and IL-8, which were approx. 1.5–2-fold higher, whereas TNFα and HMGB1 levels were lower in GBM than HC (p < 0.001). Spearman's test demonstrated a correlation between IL-1β, IL-6, IL-8, IL-10, and TNFα, while HMGB1 was inversely related to IFNγ in patients with GBM (Figure 2). IL-1β highly correlated with IL-1Ra (rs = 0.647, p < 0.0001), IL-6 (rs = 0.352, p < 0.01), IL-8 (rs = 0.409, p < 0.0001), and TNFα (rs = 0.409, p < 0.001). Overall, our data confirm a cytokine milieu in glioma patients that favors the recruitment of neutrophils, thereby sustaining an immunosuppressive profile, which is associated with poor prognosis. The diagnostic capacity of inflammatory variables was determined by ROC curve analysis. The results of the ROC, AUC, and RR analysis of IL-1β, IL-6, IL-8, and IL-10 indicated a potential diagnostic value for clinical prognosis in patients with glioblastoma (RR > 2). The optimal threshold values (cut-off) corresponded to 0.469 pg/mL for IL-1β, 19.83 pg/mL for IL-6, 10.86 pg/mL for IL-8, and 41.22 pg/mL for IL-10 (Table 3). The highest mean RR was observed for IL-6 (RR = 2.923) and IL-8 (RR = 3.151), which means there is an approx. three-fold higher probability of GBM development once the cut-off values for these cytokines have been exceeded. Moreover, the highest specificity (80%) and sensitivity (80%) were observed for IL-6, which, in turn, indicates a low level of false positive results during a diagnostic procedure using IL-6. Among the cytokines measured, IL-6 and IL-8 emerged as the best markers of inflammation-related GBM. The cytokines IL-1Ra, IL-13, and IFNγ showed poor diagnostic usefulness for prognosis in glioblastoma patients even though their concentrations changed significantly (Tables 2 and 3). However, the predictive values increased when the models for pro- and anti-inflammatory cytokines included NLR or SII (Tables 4 and 5). For the NLR + cytokine models, the AUC values indicated outstanding discrimination for NLR + IL-1β (AUC = 0.907), NLR + IL-6 (AUC = 0.908), and NLR + IL-8 (AUC = 0.896), while in the other models values of AUC > 0.8 provided excellent discrimination [29]. The classifier accuracy for the NLR + cytokine models was >80% (Table 4), higher than for the models including SII + cytokines (>70%) (Table 5). A high value of a classifier renders it a reliable index to distinguish a healthy individual from a GBM patient based on the assessment of inflammatory markers. Therefore, it seems reasonable to assess combinations of immune cells, such as the neutrophil-to-lymphocyte ratio, and inflammatory mediators together.
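For illustration, the sketch below reproduces the type of analysis summarized above: a single-marker ROC curve with a Youden-index cut-off, and a combined NLR + cytokine logistic model whose predicted probabilities are scored by AUC and classifier accuracy. The cohort is simulated to loosely echo the group sizes and directions of difference described in the text; it is not the study's patient-level data, and the exact modelling approach used by the authors is not specified here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Simulated cohort: 50 GBM patients and 40 healthy controls, with NLR and IL-6
# values loosely echoing the direction of the group differences described above.
y    = np.r_[np.ones(50), np.zeros(40)]
nlr  = np.r_[rng.normal(5.5, 2.0, 50), rng.normal(1.8, 0.6, 40)]
il6  = np.r_[rng.normal(28.0, 8.0, 50), rng.normal(15.0, 5.0, 40)]

# Single-marker ROC curve for IL-6 and the Youden-index (J = TPR - FPR) cut-off.
fpr, tpr, thresholds = roc_curve(y, il6)
best = np.argmax(tpr - fpr)
print(f"IL-6 AUC = {roc_auc_score(y, il6):.3f}, optimal cut-off = {thresholds[best]:.2f} pg/mL")

# Combined NLR + IL-6 model: logistic regression scored by AUC and accuracy.
X = np.column_stack([nlr, il6])
model = LogisticRegression().fit(X, y)
score = model.predict_proba(X)[:, 1]
print(f"NLR + IL-6 AUC = {roc_auc_score(y, score):.3f}")
print(f"classifier accuracy = {model.score(X, y):.1%}")
```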
Discussion
Chronic inflammation allows developing tumors to acquire all their characteristic abilities, including the escape from immunosurveillance. Cellular and molecular inflammatory mediators, such as growth factors and inflammatory cytokines produced by tumor cells and infiltrating immune cells, constitute the tumor microenvironment, where tumor cells constantly grow and interact. The heterogeneity and plasticity of tumor-associated neutrophils (TANs) render them crucial in the tumor microenvironment interplay. Increasing evidence suggests a dual modulatory role of neutrophils in tumor behavior and highlights the need for a reassessment of neutrophil functions in cancer initiation and progression [30]. Tumor-derived cytokines induce the presence in the blood of immature neutrophils with immunosuppressive properties and neutrophils with an 'aged' phenotype that are experienced cells with an increased ability to react to inflammatory stimuli, which can thus play an anti-tumoral role [31,32]. Moreover, neutrophils in the tumor tissue occur in different polarized states, i.e., an N1 anti-tumoral phenotype and an N2 pro-tumoral phenotype, analogous to helper T lymphocyte and monocyte polarization [33]. Despite detected functional differences, no definitive surface markers have been identified to differentiate N1 and N2 TANs [34]. Clinical studies have shown that most glioma patients experience strong neutrophilia, and that preoperative neutrophil count is correlated with GBM grade, but the mechanism of neutrophil recruitment and their role in tumor growth is yet to be defined [35]. We observed that among immune cells, neutrophils showed the most considerable changes, especially in patients with Grade 3 and Grade 4 tumors, whereas lymphocytes, monocytes, and platelets did not exhibit significant changes compared to reference levels or the Grade 1 group [12]. The present study showed that the neutrophil count exceeded the reference values and was elevated in patients with GBM compared to the control group. Neutrophil counts inversely correlated with survival time (rs = −0.359, p = 0.010), which clearly shows that patients with neutrophilia run a higher risk of mortality. Lymphocyte counts in our study were significantly lower, which was reflected in a high rate of NLR exceeding the reference values according to Luo et al. [28] and was four-fold higher in patients with GBM than in the control group. According to Massara et al. [33], NLR higher than 4 was associated with poor prognosis when measured before treatments and with increased TANs infiltration; however, the underlying mechanism remains unknown [36][37][38][39]. An NLR value below 4 was reported to predict better outcomes but only in GBM expressing the wild-type gene IDH1, one of the genes that is most frequently mutated in malignant gliomas [40]. Furthermore, the retrospective study by Mason et al. [41] recorded the link between lower NLR values and longer overall survival during focal radiotherapy and concomitant temozolomide treatment. In our previous analyses of inflammatory indices, NLR was found to exert the most substantial impact on survival time (HR 1.56; 95%CI 1.145-2.127; p = 0.005), and this index was superior to LMR, PLR, and SII as prognostic factors. With regard to age, the survival probability decreased by 50% in patients aged ≥63 years and NLR ≥ 4.56 × 10³/µL [12].
Numerous reports have shown how different age-related pathologies, including cancer, reveal a common inflammatory status. In fact, inflammaging is characterized by the establishment of a systemic proinflammatory state with increased levels of well-known pro-tumorigenic cytokines, such as IL-1β, IL-6, and TNFα [19,42]. In the elderly with high-grade inflammation (CRP ≥ 3 mg/L), elevated levels of the pro-inflammatory cytokines IL-1β, IL-6, and TNFα were recorded when compared to older adults with low-grade inflammation (CRP < 3 mg/L) [19]. The cytokine profile of our GBM patients was found to be markedly different from that of the healthy controls representing inflammaging. The cytokines IL-1β, IL-1Ra, IL-6, IL-8, IL-10, and IL-13 were higher, whereas TNFα, IFNγ, and HMGB-1 were lower in GBM compared to control older adults. The pro-inflammatory cytokines IL-1β, IL-6, and IL-8 were two-fold elevated, in contrast to TNFα, which appears to be a cytokine discriminating the inflammatory response in aging and cancer. The comparison of patients with GBM according to the cut-off value of 63 years established in our previous studies showed significantly lower levels of TNFα (51.35 ± 26.97 pg/mL) in patients aged ≤63 years than in patients aged >63 years (62.53 ± 16.76 pg/mL) [12]. Glioma-derived factors, such as IL-1β, IL-6, IL-8, and TNFα, are crucial inflammatory mediators that trigger the inflammatory cycle in GBM and also promote carcinogenesis by avoiding growth suppression and apoptosis, inducing angiogenesis and metastasis, and maintaining cancer cell stemness [23,43]. Furthermore, the glioma-derived cytokines IL-1β, IL-6, and TNFα were observed to extend the neutrophil lifespan from 7 h under normal conditions to 17 h in cancer, which in turn increases the number of neutrophils in peripheral blood [44].
Cytokines such as IL-1β and TNFα are primarily pro-inflammatory, play an important role in inflammation-driven tumor growth and progression, and are found to be upregulated after radiotherapy in patients with GBM. IL-1β coordinates the progression of neuroinflammation by upregulating the expression of other pro-inflammatory cytokines, whereas TNFα induces the expression of vascular endothelial growth factor in gliomas, leading to the increased angiogenesis seen in GBM [45]. IL-1β binds to the IL-1 receptor (IL-1R) and switches on the NF-κB pathway, leading to persistent stimulation of pro-inflammatory genes [46]. This could explain the simultaneous increase in IL-1Ra, IL-6, and IL-8 in patients with GBM compared with the control group; IL-1β highly correlated with IL-1Ra, IL-6, and IL-8 (p < 0.001). IL-1β and IL-1Ra are mainly expressed by cells of the monocyte/macrophage lineage and neutrophils. The constitutive expression of both IL-1β and IL-1Ra has been reported in human glioma cells [47]. The model for IL-1β including NLR (AUC = 0.907, cut-off 0.473, classifier accuracy 85.6%) showed a high diagnostic potential in glioblastoma and confirmed an implication of IL-1β in neutrophil recruitment and tumor promotion. IL-1β is a well-known cytokine that upregulates mRNA expression of pro-inflammatory cyclooxygenase 2 (COX-2), which, together with prostaglandin PGE2, interacts with PGE2 receptors and thereby enhances glioma aggressiveness by maintaining glioma cell stemness and the inflammatory microenvironment [48][49][50]. TNFα acts in a similar manner to IL-1β and plays various roles in the inflammatory response, including the activation of inflammatory cytokines coded by the NF-κB signal pathway, gene expression of prostaglandin synthesis pathway enzymes and adhesion molecules, and induction of nitric oxide synthase, leading to the activation of endothelium, neutrophils, and lymphocytes [51]. TNFα is related to all steps in tumorigenesis, including cellular transformation, promotion, survival, proliferation, invasion, angiogenesis, and metastasis. Therefore, it was surprising that the circulating TNFα concentration was reduced in the GBM group compared to healthy controls (p < 0.001), especially when we take into account the meta-analysis by Feng et al. [52], which showed that an elevated circulating TNFα level was associated with an increased glioma risk. On the other hand, the same study showed that, of all the cytokines tested, it was the circulating IL-6 level (HR 1.10, 95%CI 1.05-1.16, p < 0.001) that was most significantly correlated with poor overall survival in glioma patients. Rubenich et al. [25] demonstrated that glioma-neutrophil cultures initially produced substantial amounts of TNFα, which then kept proportionally decreasing over the hours while the IL-6 level increased 10-fold from baseline.
IL-6 is involved in many physiological and pathological processes, including inflammation, bone metabolism, C-reactive protein synthesis, hematopoiesis, and leucocyte infiltration and maturation, as well as affecting endothelial cell properties. IL-6 was also found to induce cachexia in cancer patients by altering the metabolism of lipids and proteins [17,53]. IL-6 production is characteristic of glioblastoma cells, and its upregulation is associated with activation and repolarization of tumor-associated neutrophils. Some studies have also shown a relationship between the proliferative antigen Ki-67 and IL-6 in surgical samples [54]. High IL-6 gene expression in glioblastoma is associated with poor survival according to the datasets derived from The Cancer Genome Atlas (TCGA) and the Repository of Molecular Brain Neoplasia Data [55]. The present work aimed at studying the prognostic value of changes in circulating IL-6 levels in glioblastoma by comparing it to other cytokines in GBM patients and healthy controls. The highest average RR was observed for IL-6 (RR = 2.923) and IL-8 (RR = 3.151), which means there is an approx. three-fold higher probability of GBM development after exceeding the cut-off values for these cytokines. Moreover, IL-6 was observed to have the highest specificity (80%) and sensitivity (80%), which, in turn, indicates a low level of false positive results during diagnostic procedures using IL-6. Among the cytokines measured, IL-6 emerged as the best marker of inflammation-related glioblastoma. The model for IL-6 including NLR showed the highest diagnostic potential (AUC = 0.908, cut-off 0.544, classifier accuracy 87.8%).
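A combined marker-plus-NLR model of the kind summarized here for IL-6 could be fitted as in the following sketch; the simulated feature matrix, sample sizes, and resulting metrics are assumptions for illustration and do not reproduce the reported values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(42)
n_gbm, n_hc = 50, 40
# Simulated predictors per subject: [NLR, IL-6 in pg/mL]; GBM drawn with higher means
X = np.vstack([
    np.column_stack([rng.normal(5.0, 1.5, n_gbm), rng.normal(30.0, 10.0, n_gbm)]),
    np.column_stack([rng.normal(1.8, 0.5, n_hc), rng.normal(12.0, 5.0, n_hc)]),
])
y = np.concatenate([np.ones(n_gbm), np.zeros(n_hc)])

# Two-predictor logistic regression: GBM vs. healthy control
model = LogisticRegression().fit(X, y)
prob = model.predict_proba(X)[:, 1]

auc = roc_auc_score(y, prob)          # discrimination of the combined model
acc = accuracy_score(y, prob >= 0.5)  # classifier accuracy at a 0.5 probability threshold
print(f"NLR + IL-6 model: AUC = {auc:.3f}, classifier accuracy = {acc:.1%}")
```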
Immunohistochemical studies have shown that about 65% of primary and secondary GBM samples are directly correlated with IL-8 levels [56]. IL-8 induces angiogenesis and directs the migration of endothelial cells, further stimulating the production of proteolytic enzymes (matrix metalloproteinases) [57]. In this study, the levels of circulating IL-8 were found to be higher in GBM patients than in healthy controls (p < 0.001). Many studies have reported IL-8 upregulation in gliomas and its involvement in disease promotion [56,58,59]. Rubenich et al. [25] showed that glioma-neutrophil cultures produced large amounts of IL-8 and TNFα and low to zero levels of IL-1β, IL-6, and IL-10 in the first 24 h. Then, IL-8 and IL-10 increased, while the production of TNFα was observed to decrease after 72 h. However, after 120 h, this pattern shifted, and a rise in IL-1β and IL-6 release was recorded, while IL-8 declined and IL-10 and TNFα were barely detectable. All control neutrophils, regardless of the culture duration, showed very low or no cytokine production. Overall, the data provided by Rubenich et al. [25] proved that the neutrophil-glioblastoma crosstalk was accountable for the diversity of the activities of the cytokines described as drivers of tumor progression.
IL-10 and IL-13 are classified as anti-inflammatory cytokines. Early evidence for the expression of IL-10 in glioma was reported by Huettner et al. [60], who demonstrated elevated levels of IL-10 mRNA in 87% of high-grade gliomas. In our study, IL-10 was elevated in GBM patients, as was observed in several other studies that revealed its inhibitory effect on the antitumor response while promoting the proliferation of tumor cells [60][61][62]. Interleukin IL-13 is expressed in glioblastoma; it binds to two receptors, IL-13Rα1 and IL-13Rα2, and mediates a variety of effects on various cell types, including B cells, neutrophils, monocytes, natural killer cells, endothelial cells, and fibroblasts [63,64]. Overexpression of IL-13Rα2 was observed in approx. 76% of GBM, but it was not detected in normal brain tissue, thereby making it a highly selective immunotherapy target [65]. There is limited evidence linking IL-10 and IL-13 expression with GBM patient survival. In our GBM patients, the elevated IL-10 levels (p < 0.01) correlated with IL-13 (rs = 0.416, p < 0.01), which was associated with increased mortality according to most studies on elderly populations [66]. At first glance, the strong increase in IL-10 looks paradoxical in the balance of pro- and anti-inflammatory cytokines; however, the finding confirms the previous study by Kumar et al. [67], who reported a significant increase in serum IL-10 levels in patients with anaplastic astrocytoma and glioblastoma. Bender et al. [26] used a multiplex immunoassay platform and demonstrated at least double IL-10 levels in GBM patients compared to healthy controls, whereas the levels of other cytokines, i.e., IL-6, IL-8, and TNFα, did not differ between groups. Overall, several studies have confirmed the protumor properties of IL-10 [68]. Elevated levels of IL-10 and NLR are associated with increased tumor growth, poor prognosis, and drug resistance [25]. In our NLR + IL-10 model, the high value of AUC = 0.887 proved to offer excellent discrimination of GBM patients from healthy individuals based on the assessment of both markers. On the other hand, IL-10 inhibits tumorigenesis via downregulation of IL-1β, IL-6, IL-8, and TNFα, thus playing an important role in coordinating the inflammatory response involving the activation of neutrophils, monocytes, natural killer cells, and T and B cells and their recruitment to the sites of inflammation. Despite reducing tumor-promoting inflammation, IL-10 may play a role in the rejuvenation of exhausted tumor-resident T cells [69]. The obtained cut-off value (41.22 pg/mL) could serve as a biomarker indicative of GBM development and its prognosis (AUC = 0.683, specificity 67.5%, sensitivity 72.0%, RR = 2.482, 95%CI 1.519-4.222). Similarly, IL-13 largely inhibits tumor cell growth, but recent studies revealed that it could promote the survival of certain tumors through suppression of immunosurveillance [70]. These apparently contradictory results may be explained by diverse IL-13 activity in different tumors and the fact that earlier studies investigated tumor cell lines rather than primary tumor cells. In our study, IL-13 levels tended to increase in GBM patients compared to healthy individuals (p > 0.05). Although IL-13 hardly emerged as a good biomarker in glioblastoma (AUC = 0.536, specificity 85%, sensitivity 38%, RR = 0.459, 95%CI 0.213-0.888), its diagnostic usefulness increased in the NLR + IL-13 model to AUC = 0.830 and a classifier accuracy >80%.
IFNγ is a cytokine that orchestrates both pro-tumorigenic and antitumor immunity. IFNγ acts together with granzyme B and perforin to initiate apoptosis in tumor cells but also enables the synthesis of immune checkpoint inhibitory molecules, such as PD-L1, thus stimulating other immune-suppressive mechanisms [71]. IFNγ has also been shown to downregulate other IL-1β-mediated effects, such as the expression of IL-6, IL-8, and IL-10; however, the mechanism by which IFNγ mediates these inhibitory effects has not yet been determined [72]. In our study, no differences in IFNγ levels were observed between groups. In fact, its level even tended to be lower in GBM patients (Table 2), and this cytokine provided a low diagnostic value for clinical prognosis in GBM patients (AUC = 0.523, specificity 70%, sensitivity 48%, RR = 0.643, 95%CI 0.370-1.056). IFNγ was inversely correlated with HMGB1, and in patients with GBM the correlation reached a moderate level (rs = −0.294, p < 0.05). Nijaguna et al. [27] demonstrated high expression of IFNγ and 14 other cytokines, including IL-6, IL-10, and TNFα. Bender et al. [26] used a multiplex immunoassay platform and demonstrated a two-fold increase in IFNγ level in GBM patients compared to healthy controls, whereas other cytokines, i.e., IL-6, IL-8, and TNFα, showed no differences between groups [26]. The differences between our findings and those from previous cytokine profiling studies [26,27] may have resulted from sampling differences, i.e., plasma vs. serum through the course of a patient's disease, and different measurement methods, i.e., multiplex vs. singleplex immunoassay. Undoubtedly, multiplex proteomic immunoassay procedures are the future of diagnostic testing but require extensive validation before being acceptable for clinical use. Regardless of the conflicting data on IFNγ concentration changes, it could be used in combination therapy in cancer, including glioblastoma [73][74][75].
HMGB1 is a pro-inflammatory cytokine that has been extensively studied in recent years as a biomarker and a novel target for cancer therapies [76]. Accumulating evidence has demonstrated that hypoxia, reactive oxygen and nitrogen species, hyperglycemia, and the cytokines TNFα and IFNγ induce tumor cells to actively secrete HMGB1 into the extracellular matrix [77]. Subsequently, extracellular HMGB1 (in its reduced form) functions as a paracrine and/or autocrine factor to activate signaling cascades by binding to its receptors, such as the receptor for advanced glycation end-products (RAGE) and toll-like receptors (TLR) [78]. The RAGE-HMGB1 axis is a major aspect of immune signaling in pathogenic conditions such as glioma and inflammatory diseases [79]. By binding to RAGE and TLR, HMGB1 can mediate multiple inflammatory pathways, and it induces secretion of different pro-inflammatory cytokines, such as IL-1β, TNFα, IL-6, and IL-8, in the culture medium of human monocytes and neutrophils [80]. HMGB1 is also a DNA-binding nuclear protein that activates genes for inflammatory cytokines, such as IL-1β, TNFα, IL-6, and IFNγ [76,79]. In our study, changes in circulating HMGB1 were comparable to TNFα and IFNγ, i.e., HMGB1 decreased in GBM patients compared to healthy controls (p < 0.001). There are no available data on extracellular or circulating HMGB1 levels from human studies [81]. HMGB1 has been shown to be highly expressed in human glioma cells and to be associated with poor prognosis [82,83]. The level of HMGB1 expression in gliomas is three-fold higher than in normal brains, and during inflammation HMGB1 may be released extracellularly from neurons, glial cells, or endothelial cells [81,83]. On balance, the extracellular release of HMGB1 may play a very important role in triggering initial inflammatory responses by stimulating multiple receptors, leading to blood-brain barrier disruption. The univariate logistic model demonstrated an average diagnostic value of circulating HMGB1 (AUC = 0.708, specificity 75%, sensitivity 66%, RR = 0.364, 95%CI 0.199-0.628) compared to other inflammatory variables. The comparison of our results with the outcomes obtained by other authors allowed us to conclude that HMGB1 measurement may be useful in the clinical management of GBM patients on the condition that the management includes chemo- and radiotherapy and the ensuing inflammatory conditions and necrosis [81,84,85]. However, patients undergoing chemotherapy or radiotherapy were excluded from our observations. Nevertheless, our understanding of the interplay between peripheral inflammation and HMGB1 in the progression and prognosis of glioblastoma is essential and requires further research.
Study Population
The study was carried out on 50 patients aged 66.0 ± 10.56 years (females n = 30, males n = 20) with newly diagnosed glioblastoma who had undergone an operation at the Neurosurgery Centre, University Hospital in Zielona Gora, between November 2015 and May 2021 (Table 6). No extracranial metastases were demonstrated in newly diagnosed glioblastoma during the follow-up period. The pathological diagnosis was based on the classification of CNS tumors [86]. Overall survival was defined as the time between diagnosis and death. For patients who had not died prior to the last follow-up, overall survival was censored at the date of the last follow-up. All patients underwent a craniotomy for GBM with either total or subtotal resection. The following exclusion criteria were used: biopsy only, age below 18 years, no definite diagnosis, incomplete baseline clinical data, adjuvant therapy such as chemotherapy or radiotherapy received before the operation, malnutrition, and perioperative mortality. Importantly, every patient diagnosed with a primary brain tumor and registered in our study had a very recent diagnosis with no prior specific treatment, including glucocorticoids. The GBM group was compared to a healthy control (HC) group of 40 individuals aged 71.7 ± 4.9 years (males n = 18 and females n = 22) recruited from the University of the Third Age in Zielona Gora (Poland), an organization of adults over 60 years of age who stay active by participating in many educational and sports programs. The current health status of the control group was evaluated on the basis of medical records at a routine follow-up visit to a primary care physician. On the basis of the medical interview, the following exclusion criteria were applied: uncontrolled hypertension and/or diabetes, oncologic diseases, neurodegenerative diseases, and acute infectious and autoimmune diseases. The study protocol was approved by the Bioethics Commission at the University of Zielona Gora, Poland (No. UZ19/2021, No. UZ16/2022), in accordance with the Helsinki Declaration.
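The overall-survival definition above (time from diagnosis to death, censored at the last follow-up for survivors) corresponds to a standard right-censored survival analysis. A minimal sketch is shown below using the lifelines package; the choice of library is our assumption (the paper does not state its survival software), and the durations and event flags are invented.

```python
import numpy as np
from lifelines import KaplanMeierFitter

# Invented overall survival times (months); event = 1 if the patient died,
# event = 0 if censored at the date of the last follow-up
durations = np.array([4, 9, 12, 15, 7, 20, 26, 11, 3, 18])
events = np.array([1, 1, 1, 0, 1, 0, 1, 1, 1, 0])

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events, label="GBM")

print(kmf.median_survival_time_)      # median overall survival (months)
print(kmf.survival_function_.head())  # Kaplan-Meier survival estimates over time
```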
Clinical Assessment
Medical records were reviewed, and clinical data were collected, including gender, age at operation, tumor location and hemisphere, and pathological diagnoses. The Ki-67 proliferation index was expressed as the percentage of cells with Ki-67-positive immunostained nuclei using the Ventana BenchMark GX (Ventana Medical Systems Inc., Tucson, AZ, USA). The expression of Ki-67 was categorized into two groups: low and intermediate (Ki-67 < 30%) and high (Ki-67 ≥ 30%), according to Chen et al. [87]. C-reactive protein was measured using a high-sensitivity commercial kit from DRG International (Springfield Township, Cincinnati, OH, USA) with a detection limit of 0.001 mg/L. Data on postoperative adjuvant therapies and survival time were collected through documentation analysis.
Blood Sampling
Fasting blood samples were collected from the median cubital vein in the morning between 07:00 and 09:00 using the S-Monovette system (Sarstedt AG & Co. KG, Nümbrecht, Germany). The whole blood samples were placed into tubes with anticoagulant EDTA-K2 and were immediately analyzed. For the other biochemical measures, blood samples were centrifuged at 3000 rpm for 10 min, and aliquots of serum were stored at −80 °C in a Clinic BioBank until analysis.
Statistical Analysis
Statistical analyses were performed using R 4.2.1 software [88]. The variables were reported as mean values ± standard deviation (SD) and median (Me). The assumptions for the use of parametric or nonparametric tests were checked using the Shapiro-Wilk and Levene's tests to assess the normality of the distributions and the homogeneity of variances, respectively. Significant differences in mean values between the groups were evaluated by one-way ANOVA. If the normality and homogeneity assumptions were violated, the nonparametric Mann-Whitney test was used. Spearman's rank correlation (rs, Spearman's rank correlation coefficient) was used to investigate the relationships between immune-inflammatory markers. The predictive value of inflammatory variables was evaluated using the receiver operating characteristic (ROC) curve. The area under the ROC curve (AUC) was used to provide an aggregate measure of performance across all possible classification thresholds. Both univariate and multivariate logistic regression models were used. The optimal threshold value for clinical stratification (cut-off value) was obtained by calculating the Youden index. Relative risk (RR) was calculated for univariate analyses. Statistical significance was set at p < 0.05.
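The parametric/nonparametric decision rule described in this section can be sketched as follows. The SciPy calls mirror the named tests, but the two arrays are placeholder values and the analysis in the study was actually run in R.

```python
import numpy as np
from scipy.stats import shapiro, levene, f_oneway, mannwhitneyu

gbm = np.array([28.0, 35.5, 22.1, 40.3, 31.7, 27.9, 33.2, 29.8])  # placeholder marker levels
hc = np.array([12.4, 15.0, 10.8, 18.2, 14.1, 11.9, 13.7, 16.3])

# Check assumptions: normality within each group and homogeneity of variances
normal = all(shapiro(group).pvalue > 0.05 for group in (gbm, hc))
homogeneous = levene(gbm, hc).pvalue > 0.05

if normal and homogeneous:
    stat, p = f_oneway(gbm, hc)      # one-way ANOVA (reduces to a t-test for two groups)
    test = "one-way ANOVA"
else:
    stat, p = mannwhitneyu(gbm, hc)  # nonparametric fallback
    test = "Mann-Whitney"
print(f"{test}: statistic = {stat:.3f}, p = {p:.4f}")
```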
Conclusions
Glioblastoma promotes an immune-inflammatory response that could shift the tumor microenvironment into a pro-tumorigenic milieu. Acute and temporary inflammation may inhibit tumor growth by upregulating inflammatory mediators, such as interleukins IL-1β, IL-6, IL-8, TNFα, IFNγ, and HMGB1 as pro-inflammatory cytokines, and IL-1Ra, IL-10, and IL-13 as anti-inflammatory cytokines that are part of the initial inflammatory cascade and recruit other downstream targets to enhance antitumor responses. However, if inflammation becomes a chronic state, the same inflammatory processes can exhaust the immune system's ability to fight against glioblastoma and induce the release of immature neutrophils with immunosuppressive properties. The cytokine profile in our research appeared to vary independently of the primary pro- or anti-inflammatory activity, i.e., IL-1β, IL-6, IL-8, and IL-10 levels were elevated, whereas TNFα and HMGB1 were decreased in GBM patients compared to healthy control. We assume that interactions between neutrophils and glioma cells are responsible for the diverse activities of the cytokines described as drivers of tumor progression. The following cytokines, IL-1β, IL-6, IL-8, and IL-10, unlike IL-1Ra, IL-13, and IFNγ, demonstrated high diagnostic potential for clinical prognosis for GBM patients, especially in models with neutrophil-to-lymphocyte ratio. Although inflammation is highly relevant in our understanding of the pathology of GBM, the immune-inflammatory profile has not been sufficiently defined and still requires further research.
Figure 2 .
Figure 2. Heat map with correlation analysis between levels of cytokines in glioblastoma (n = 50); Spearman's rank correlation coefficient.
Table 1 .
White blood cell count-derived inflammation indices.
Table 3 .
The statistical characteristics of the ROC curve for the univariate logistic model for inflammatory variables and the relative risk (RR) and its confidence intervals (95% CI) for indications determined by the cut-off values calculated on the ROC curves.
Table 4 .
The characteristics of the ROC curves with optimal probability thresholds and the classifier accuracy.
Table 5 .
The characteristics of the ROC curves with optimal probability thresholds and the classifier accuracy.
Table 6 .
The clinical characteristics of patients with glial tumors.
Abbreviations: Ki-67, a nuclear protein and a key marker associated with proliferating cancer cells; CRP, C-reactive protein. | 2023-11-15T16:19:30.179Z | 2023-11-01T00:00:00.000 | {
"year": 2023,
"sha1": "b9cfb4170728f4b84f1ce763d606ab71ff1e9f30",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b9cfb4170728f4b84f1ce763d606ab71ff1e9f30",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256960107 | pes2o/s2orc | v3-fos-license | The pharmacokinetic advantage of 5-methyltetrahydrofolate for minimization of the risk for birth defects
Despite efforts to increase folic acid (FA) intake, even within countries mandating FA fortification, there remain pregnant women with folate levels inadequate to minimize congenital disorders (e.g., of the neural tube, heart, and lip/palate). The pharmacokinetics of FA and [6S]-5-methyltetrahydrofolate (5-MTHF) were examined to find a reliable and minimal dose for rapidly rescuing folate status prior to critical periods of embryonic development. Serum total folate increased much more rapidly over the first four days in insufficient women given 7.5 mg doses of 5-MTHF than the same regimen of FA (P for trend <0.0001). Nearly all women given 7.5 mg 5-MTHF (every 12 hours, five doses total) almost immediately reached 50 nM serum total folate. Moreover, this level could be maintained by subsequent administration of 0.4 mg/d of folic acid. Thus, 5-MTHF enables repletion of folate stores more quickly and uniformly than FA and without exposure to unmetabolized FA.
is especially needed for decreasing birth defects in such women if they are folate insufficient after conceiving. Evidence for a benefit of post-conceptional FA has been reported in studies in China and Hungary 2,12 . This is consistent with observations that embryos with incipient defects have not been observed earlier than Carnegie Stage 11 13 , suggesting that NTD results from an aberrant process that occurs during closure itself (between 21 and 28 days).
In elevating the folate of a pregnant woman, the most relevant measure of folate status must be considered. The study by Daly, et al. showed that risk in an Irish population was minimized for those who had red cell total folate levels greater than 906 nM (measured by microbiological assay) 14 . However, red cells do not exchange folate with other tissues. The authors of this often cited work state that "…it is clear that the fetus accesses folate through maternal plasma folate…". Since their samples were taken about 15 weeks into pregnancy, they chose to focus on red cell folate to better reflect their subjects' preconceptional status. However, for the developing embryo, plasma folate is the relevant factor, regardless of red cell folate (which can lag behind changes in intake due to its slow turnover). Several studies have demonstrated that the recommended two months of administration of 400 μg/day FA to women of childbearing age results in 44 to 55 nM total plasma or serum folate [15][16][17] . Administration of 5 mg/d of FA to pregnant women has been proposed 18 , but no studies appear to have examined whether this increases serum folate levels in less than two weeks. Therefore, one goal of the current study was to find a regimen of FA or 5-methyl-6S-tetrahydrofolate (5-MTHF) administration that could quickly (within days) elevate plasma folate to 50 nM. In addition, we sought to find the minimum total dose of folate that could load tissue stores to the point where the mother could soon be transferred to a standard prenatal folate supplement (e.g., 0.4 mg/d folic acid) to maintain the newly acquired plasma folate level.
Results
Characteristics of the study population. Subjects ranged from 19 to 45 years of age (average: 28.1 ± 7 years). Self-identified race was 50% white, 39% black, 1.6% Asian, 1.6% American Indian, and 8% mixed. The average total serum folate for all subjects on entry was 19.7 nM as measured by the Immulyte 2000 assay. When these samples were re-analyzed, the average total serum folate was 21.3 nM (S.D. ± 5.2) by the microbiological assay, and serum 5-MTHF was 18.1 nM (S.D. ± 4.2) by HPLC (85% of total folate). The three groups (A, B, and C) were not different by ANOVA (P = 0.81 and 0.87 for total and 5-MTHF, respectively) (Fig. 2a,b and Fig. 3 screening samples). Comparison of the screening samples for the subjects in all three groups together with the sample taken immediately before the first dose showed that serum total folate increased by 6% to 22.7 nM, and serum 5-MTHF increased 4% to 19.0 nM, perhaps due to regression to the mean, but neither increase was statistically significant (P = 0.094 and 0.21, respectively) (Fig. 2a,b and Fig. 3: day 1 samples). The standard deviations of the change between the screening sample and the first study sample were 6.5 nM for total folate and 5.0 nM for 5-MTHF.
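The two baseline checks reported in this paragraph (equality of the three groups at entry by ANOVA and the paired screening-versus-day-1 comparison) follow the pattern in the sketch below; all values are simulated placeholders, and the original analysis was performed in Statistica rather than Python.

```python
import numpy as np
from scipy.stats import f_oneway, ttest_rel

rng = np.random.default_rng(0)
# Hypothetical baseline (screening) serum total folate in nM for groups A, B, and C
group_a = rng.normal(21.3, 5.2, 20)
group_b = rng.normal(21.0, 5.0, 20)
group_c = rng.normal(21.6, 5.4, 20)

# One-way ANOVA: were the three groups different at entry?
f_stat, p_between = f_oneway(group_a, group_b, group_c)

# Paired t-test: screening sample vs. sample taken just before the first dose
screening = np.concatenate([group_a, group_b, group_c])
day1 = screening + rng.normal(1.3, 6.5, screening.size)  # small, noisy upward shift
t_stat, p_paired = ttest_rel(screening, day1)

print(f"ANOVA across groups: p = {p_between:.2f}; paired screening vs. day 1: p = {p_paired:.2f}")
```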
Elevation of serum folate and repletion of body stores with three doses of 5-MTHF. In group A serum total folate and 5-MTHF continuously increased in all subjects for each sample following each of the three doses of 7.5 mg 5-MTHF administered once each morning (Fig. 2a). Twenty-four hours after the first dose 45% of subjects still had serum total folate concentrations below 50 nM, though all were above 30 nM. Twenty-four hours after the second dose serum total folate was greater than 44 nM for all subjects, with only 10% below 50 nM.

The closure of the neural tube begins about day 21 after conception with the successive fusion, at several sites, of the opposing tops of the now elevated neural fold. This is, on average, seven to eight days after women miss a menstrual cycle. Moreover, those using a high-sensitivity pregnancy test (which detects the human chorionic gonadotropin triggered by implantation) can discover their pregnancy four to five days earlier still. The migration of the cranial and cardiac portions of neural crest cells (NCC) begins between 23 and 27 days after conception as they separate from the closing neural tube. Cardiac NCC contribute to much of the outflow tracts of the heart, and generate the septum between the pulmonary artery and the aorta. Other cephalic NCC travel to the head to participate in lip and palate formation. A cleft of either of these is the result of a failure of their respective right and left elements to join within the first 9 weeks. As with some heart defects, clefts have been attributed, in part, to inadequate neural crest cell migration.
serum total folate, with the lowest value being 34 nM (Fig. 2b). After a week of administration of 0.4 mg/d FA, serum total folate and 5-MTHF had fallen slightly further (53.8 nM and 42.2 nM, respectively), but were maintained at about this level over yet one more week. Folate levels in the day 12 and day 19 samples were not statistically different by paired t-test (P > 0.5). During these two weeks about a third of subjects were found to have serum total folate less than 50 nM (Fig. 2c), although only one subject was observed to be less than 35 nM. Although changes in serum 5-MTHF approximately paralleled those of total folate, the ratio of 5-MTHF to total folate fell from 0.83 before any administration to 0.76 (P = 0.011 by paired t-test) by day 19.
The effect of five doses of 5-MTHF twice a day. In group B (subjects administered 7.5 mg of 5-MTHF every 12 hours) serum total folate and 5-MTHF increased up to an average of 213.8 nM (S.D. ± 64.2) and 168.9 nM (S.D. ± 53.1), respectively, by 12 hours after the fifth and last dose (Fig. 3). Of the samples taken during the three mornings following these doses of 5-MTHF, only a single subject (and only on the first morning) was below 50 nM serum total folate. Twenty-four hours after the last dose of 7.5 mg 5-MTHF, serum total folate and 5-MTHF fell to an average of 126 nM (S.D. ± 29.3) and 98 nM (S.D. ± 22.3), respectively, but no subject was ever below 50 nM total folate. While samples taken 24 hours after a 7.5 mg dose of folate (groups A and C) will largely reflect a new homeostasis, those taken only 12 hours after a dose are also influenced by incomplete clearance 19 . Importantly, in the two following weeks with administration of 0.8 mg per day of 5-MTHF, although serum total folate and 5-MTHF decreased further to an average of about 90 nM and 69 nM, respectively, only a single subject was seen to have a total serum folate value (39 nM in the day 19 sample) of less than 50 nM. Total folate in the day 12 and day 19 samples was not significantly different (P = 0.2 by paired t-test). Changes in serum 5-MTHF approximately paralleled those of total folate, but the ratio of 5-MTHF to total folate decreased from 0.85 before any administration to 0.77 (P = 0.00021 by paired t-test) by day 19 (Fig. 3).
The slow increase of serum folate with three doses of folic acid. Subject group C, which was treated identically to group A, except for administration of 7.5 mg of FA once a day for three days, responded much more slowly over the first four days (Fig. 2) than group A who had been given 5-MTHF (P < 0.0001 for the trend). On the morning 24 hours after the first dose, 80% of subjects had serum total folate less than 50 nM (average 39.7 nM, S.D. ± 11.6 nM) (Fig. 2c), with 35% below 35 nM. The samples on the morning after the second dose, although now elevated further to an average serum total folate of 45.2 nM, S.D. ± 12.9 nM, still showed 75% of subjects with less than 50 nM and 25% with less than 35 nM (Fig. 2b). The samples taken 24 hours after the third and final dose of 7.5 mg of FA gave the highest serum total folate and 5-MTHF of 54.4 nM (S.D. ± 16.8) and 43.4 nM (S.D. ± 13.0), respectively. However, 50% of the subjects were still below 50 nM and 10% were below 35 nM. Over the next two weeks of administration of 0.4 mg/d of FA, serum total folate and 5-MTHF did not change significantly by paired t-test (P > 0.6), 45% of subjects remained below 50 nM, and 15% were below 35 nM. The ratio of 5-MTHF to total folate decreased from 0.86 before administration of folic acid to 0.83 by day 19, but this was not significant (P = 0.63 by paired t-test).
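A simplified approximation of the group A versus group C comparison over the first four days is sketched below; it uses day-by-day t-tests on square-root-transformed values with Bonferroni-Holm correction rather than the repeated-measures ANOVA described in the Methods, and all data are invented.

```python
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
days = [2, 3, 4, 5]
# Invented serum total folate (nM), 20 subjects per group, rising faster with 5-MTHF (A) than FA (C)
group_a = {d: rng.normal(42 + 9 * i, 6, 20) for i, d in enumerate(days)}
group_c = {d: rng.normal(34 + 6 * i, 7, 20) for i, d in enumerate(days)}

# Day-by-day between-group comparison on square-root-transformed values
raw_p = [ttest_ind(np.sqrt(group_a[d]), np.sqrt(group_c[d])).pvalue for d in days]

# Bonferroni-Holm correction across the four daily comparisons
reject, p_adjusted, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
for day, p, significant in zip(days, p_adjusted, reject):
    print(f"day {day}: adjusted p = {p:.4f}, groups differ: {significant}")
```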
Discussion
Comparison of 5-MTHF with FA (7.5 mg/d for three days, groups A vs C) shows that serum total folate is 23% to 55% higher with the former over each of the four mornings after initiation of administration. In particular, 5-MTHF was more uniform in quickly increasing total folate to the putative target level. For example, 48 hours after the first dose of 5-MTHF, but just before the third dose, all 20 participants had serum total folate values ≥45 nM, and 90% were above target. On the other hand, at this same time 55% of those administered folic acid were still below 45 nM. After 12 days with continued administration of 0.4 mg/d of FA to both groups, serum total folate levels became indistinguishable between the two groups initially given 5-MTHF or FA. Thus, the advantage of the natural folate 5-MTHF is its ability to reliably replete body stores in folate-insufficient women within a few days. The new serum status, while decreasing slightly on the fourth day, can then be maintained by a typical prenatal dose of FA.
For the purpose of data analysis, a target serum level was selected based on several studies of women of childbearing age with baseline plasma total folate between about 18 nM and 21 nM (measured by microbiological assay). In these earlier studies subjects were administered FA and followed for 12 or 24 weeks. In the 24 week study, by 12 weeks those given 0.4 mg/d of FA reached a plateau of total plasma folate of about 53 to 55 nM 17 . The other two studies reported plasma levels of 44 nM (dose = 0.375 mg/d) and 51 nM (dose = 0.4 mg/d) after 12 weeks 15,16 . Based on these studies and since it now appears that consumption of 0.4 mg/d FA for several months can lower the rate of NTD to among the lowest yet observed 12 , a plasma/serum concentration of total folate of 50 nM was selected to compare the effects of 5-MTHF with FA.
In the current study only after the third and last dose of 5-MTHF given once per day (group A) was the 50 nM target level for total serum folate reached by all subjects. Moreover, after switching to 0.4 mg/d FA, the average serum total folate decreased to 53.8 nM, but about one third of subjects continued to be below target for the next two weeks. On the other hand, all of those in group B (except one subject after 24 hrs) administered five doses every 12 hours were continuously above 50 nM after the first dose. During the two subsequent weeks while taking 0.8 mg/d of 5-MTHF, all except one subject were maintained above 50 nM total serum folate. These findings suggest that five 7.5 mg doses of 5-MTHF given every 12 hours can provide this level almost immediately, which can then be maintained with a lower dose prenatal vitamin. With this regimen a total of 37.5 mg of 5-MTHF is administered, and earlier studies on urinary excretion 20 suggest that the retained folate would be about 26 mg (57 μmols). This is consistent with the range of the total body stores of folate reported for humans 21,22 , and with the ability of the follow-on prenatal dose to maintain the new status.
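The retention arithmetic quoted above can be checked with a few lines; the ~70% retention fraction and the molar mass of 5-MTHF free acid (~459 g/mol) are assumptions introduced only to reproduce the order of magnitude.

```python
# Five 7.5 mg doses of 5-MTHF, with roughly 70% assumed to be retained (the rest excreted)
total_dose_mg = 5 * 7.5              # 37.5 mg administered in total
retained_mg = total_dose_mg * 0.70   # about 26 mg retained under the assumed fraction

# Convert to micromoles, assuming ~459 g/mol for the free acid of 5-MTHF
molar_mass = 459.0                   # g/mol (assumption)
retained_umol = retained_mg / molar_mass * 1000

print(f"retained: {retained_mg:.1f} mg ~ {retained_umol:.0f} umol")  # ~26 mg ~ 57 umol
```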
Although FA eventually produces the equivalent serum total folate level on average as the same dose of 5-MTHF (group C vs A), a pronounced lag of at least four days was observed. This is likely due to both a higher urinary excretion of FA than 5-MTHF 20 , and its slow conversion by dihydrofolate reductase into active folate forms 23 . This results in exposure to high serum concentration of unmetabolized FA after each dose, and lower serum 5-MTHF for many hours after each dose than if administered 5-MTHF.
Several earlier studies indicate that post-conceptional folate can reduce birth defects in both an animal model and in humans. Mouse embryos nullizygous for folate binding protein-1 (Folbp1 −/−) die in utero, showing high percentages of neural tube, craniofacial, and other defects. Such mice can be partially rescued by supplementing the dams with 5-methyltetrahydrofolate at high dose 24 . Supplementation during embryonic days seven to nine (during neural fold elevation and tube closure) is necessary to produce this result. Extending provision of folate to embryonic days one through nine, while decreasing the number of resorptions, did not further improve protection from defects. Interestingly, blood samples taken from Folbp1 −/− mice 24 hours after a dose did not show an increase in plasma total folate concentration 25 . The authors suggested that "…embryos are able to harvest and use sufficient folate cofactors for their survival and development during the peak plasma folate…" Although a role for Folbp1 has yet to be implicated in humans, these results nonetheless demonstrate that folate supplied during the narrow period of neural tube fusion can be effective.
In the seminal work of Berry et al., women in two regions of China were asked to consume 0.4 mg/d of FA 12 . The northern region had the highest rate of NTD among those who took no folate (4.8 NTD per 1000 registered pregnancies). Among the northern group the rate of NTD dropped to 1.0 per 1000 for those consuming the FA starting before their last menstrual period. However, as seen in Table 3 of that article, those who started FA during the first trimester, but sometime after their last menstrual period, also had a lower rate of NTD (1.6 per 1000) than unsupplemented women 12 . Further, the outcomes from 17,300 pregnancies in the Hungarian Case Control Surveillance of Congenital Abnormalities were analyzed according to the timing of the beginning of FA administration (3 to 6 mg per day). Women who started FA only during the critical period for formation of each abnormality were significantly protected from neural tube and especially orofacial cleft and cardiovascular defects 2 . Thus, late administration of folic acid, though not in an optimal regimen, has a beneficial effect.
Many women do not recognize they are pregnant until after the neural tube closes. Still, the average length of the luteal phase is ~13 days (from the urine luteinizing hormone peak) 26,27 . Thus, about seven to eight days remain before initiation of fusion for those who soon become aware of a missed menses. Many women also use home pregnancy test strips based on human chorionic gonadotropin, and the most sensitive of these can indicate implantation five days earlier. A continuous high-dose folic acid rescue strategy has been proposed in early pregnancy 18 . The results of the current study show that a much more limited administration of the naturally occurring 5-MTHF can quickly replete body stores. It should also be considered that other folate-dependent defects, such as those of the heart, lip, and palate, develop later than closure of the neural tube (Fig. 1).
Clearly, the best protection that can be afforded by folate would be that which increases stores prior to conception. Even in countries with FA fortification a meaningful percentage of women do not have adequate blood folate status to fully reduce risk for defects to the lowest observed rate. Moreover, rapid folate repletion will not be an option for those who do not learn of their pregnancy until after the last folate sensitive event in development. Thus, every effort still must be made to elevate folate in women of child bearing age, especially those who are at increased risk. When possible, rapid folate repletion will be more effective for increasing blood levels within the limited time than administering a typical prenatal vitamin alone. The effectiveness of post-conceptional rapid repletion with 5-MTHF can only be firmly established by comparison with the standard of care administration only of 0.4 mg/d folic acid in a randomized clinical trial. In this regard, the results of the two regimens of 5-MTHF examined provide a guide to the minimum dose of this folate needed for such a study to quickly alleviate insufficiency. In addition, a prospective observational study of plasma folate in women of child bearing age who intend to conceive in the near future and the incidence of birth defects in the resulting offspring would help to establish an optimal level.
Methods
Study design and patients. Women of childbearing age, but who had no intention of becoming pregnant within the study period, were recruited from the greater Mobile, Alabama area. Written informed consent was obtained from each participant, and all procedures were approved by the University of South Alabama IRB. Supplement administration and blood sampling were performed in accordance with relevant guidelines. Potential subjects were initially interviewed to determine if they were regular consumers of vitamin supplements, fortified breakfast cereals, or energy bars. Only those with insignificant intake from these products (about half of initial responders) were screened for baseline folate levels. Those with serum total folate less than 25 nM (the lowest quintile), as measured by the Immulyte 2000 immunoassay system (used only for initial screening), were included. Exclusion criteria were malabsorption conditions, such as Celiac or Crohn's disease, taking drugs that might interfere with folate uptake or metabolism (such as methotrexate or hydrochlorothiazide), history of cancer, thyroid problems, epilepsy, or having a relative who had already participated. The number of subjects entered into the study groups was based on earlier results comparing the pharmacokinetics of single 5 mg doses of folic acid and 5-MTHF 19 . Subjects were sequentially entered into the three arms of the study. Subjects served as their own control by comparing changes in folate levels between the screening sample and the first study sample taken about one to two weeks later immediately prior to the first test dose.
Procedures.
Subjects were asked to fast and consume no liquids other than water for at least eight hours prior to each visit. Subjects were administered three regimens of folate: group A (20 subjects) was given 7.5 mg of 5-MTHF every 24 hrs (three doses total), followed by 0.4 mg/d FA for two weeks; group B was given 7.5 mg of 5-MTHF every 12 hrs (five doses total), followed by 0.8 mg/d of 5-MTHF for two weeks; and group C was given 7.5 mg of FA (17.0 μmols from Sigma) every 24 hrs (three doses total), followed by 0.4 mg/d FA for two weeks. Cyanocobalamin was administered orally once a day for the first three days in each group (0.5 mg/day for groups A and C, and 1.0 mg/day for group B). During the morning of the first five days (just prior to any dose) blood was drawn into serum separator tubes, mixed by inverting, let stand at room temperature in the dark for 30 min, and then centrifuged at 1200 × g for 10 min. The resulting serum was transferred into microfuge tubes (in 300 and 500 microliter aliquots), frozen quickly on dry ice, and transferred to a −80 °C freezer. The subjects in groups B and C, and half of those in group A, returned on days 12 and 19 for further blood sampling just prior to that morning's folate dose. Compliance during the two-week lower maintenance dose period was assessed by counting remaining tablets.
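For clarity, the three administration regimens and their maintenance phases can be written out as a small data structure, as in the sketch below; the field names are our own and the values simply restate the doses described above.

```python
# Illustrative bookkeeping of the three regimens described above; field names are our own
regimens = {
    "A": {"loading": ("5-MTHF", 7.5, "every 24 h", 3), "maintenance": ("folic acid", 0.4)},
    "B": {"loading": ("5-MTHF", 7.5, "every 12 h", 5), "maintenance": ("5-MTHF", 0.8)},
    "C": {"loading": ("folic acid", 7.5, "every 24 h", 3), "maintenance": ("folic acid", 0.4)},
}

for group, r in regimens.items():
    compound, dose_mg, schedule, n_doses = r["loading"]
    maint_compound, maint_mg_per_day = r["maintenance"]
    total_mg = dose_mg * n_doses
    print(f"Group {group}: {n_doses} x {dose_mg} mg {compound} {schedule} "
          f"({total_mg} mg total), then {maint_mg_per_day} mg/d {maint_compound} for two weeks")
```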
Serum 5-MTHF was determined by reversed-phase HPLC with detection of its intrinsic fluorescence, and total folate by microbiological assay using the American Type Culture Collection 7469 strain of Lactobacillus rhamnosus and 5-MTHF as calibrator 28 . Many samples were assayed in duplicate to assure repeatability. Screening samples initially examined by the Immulyte 2000 assay for the purpose of determining eligibility were also measured using the HPLC and microbiological methods. Statistical analysis. ANOVA and the t-test were performed using Statistica v10 on square-root-transformed values which were determined to be normal by the Shapiro-Wilk test. The hypothesis to be tested was that the means of serum total folate or serum 5-MTHF on the four days following a dose would be the same for administration of 5-MTHF in comparison to FA (groups A and C). This was examined by repeated measures general linear ANOVA with a single between-subjects factor (type of folate). The suitability of ANOVA was examined by Levene's test for homogeneity. Multiple comparisons were corrected by the Bonferroni-Holm method. All P-values are double-sided. | 2023-02-18T14:43:01.462Z | 2018-03-06T00:00:00.000 | {
"year": 2018,
"sha1": "40bec2ea0f6e61dfe505ab0f9033f3e8eec083cf",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1038/s41598-018-22191-2",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "40bec2ea0f6e61dfe505ab0f9033f3e8eec083cf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
52034578 | pes2o/s2orc | v3-fos-license | Influence of the WHO framework convention on tobacco control on tobacco legislation and policies in sub-Saharan Africa
Background The World Health Organization’s Framework Convention on Tobacco Control, enforced in 2005, was a watershed international treaty that stipulated requirements for signatories to govern the production, sale, distribution, advertisement, and taxation of tobacco to reduce its impact on health. This paper describes the timelines, context, key actors, and strategies in the development and implementation of the treaty and describes how six sub-Saharan countries responded to its call for action on tobacco control. Methods A multi-country policy review using case study design was conducted in Cameroon, Kenya, Nigeria, Malawi, South Africa, and Togo. All documents related to the WHO Framework Convention on Tobacco Control and individual country implementation of tobacco policies were reviewed, and key informant interviews related to the countries’ development and implementation of tobacco policies were conducted. Results Multiple stakeholders, including academics and activists, led a concerted effort for more than 10 years to push the WHO treaty forward despite counter-marketing from the tobacco industry. Once the treaty was enacted, Cameroon, Kenya, Nigeria, Malawi, South Africa, and Togo responded in unique ways to implement tobacco policies, with differences associated with the country’s socio-economic context, priorities of country leaders, industry presence, and choice of strategies. All the study countries except Malawi have acceded to and ratified the WHO tobacco treaty and implemented tobacco control policy. Conclusions The WHO Framework Convention on Tobacco Control provided an unprecedented opportunity for global action against the public health effects of tobacco including non-communicable diseases. Reviewing how six sub-Saharan countries responded to the treaty to mobilize resources and implement tobacco control policies has provided insight for how to utilise international regulations and commitments to accelerate policy impact on the prevention of non-communicable diseases.
Background
The World Health Organization's Framework Convention on Tobacco Control (FCTC), enforced on February 27, 2005 was the first global public health treaty [1]. This treaty emerged after years of effort to spearhead an international approach to tobacco regulation that would slow the rapid growth of tobacco use. The treaty stipulated requirements for signatories to govern the production, sale, distribution, advertisement, and taxation of tobacco to reduce its impact on public health. Although the FCTC has been popular, with 180 countries currently ratifying the treaty [2], little is known about how low-and middle-income countries responded to the FCTC to modify their tobacco policies and what other contextual issues influenced the timeliness of countries' responses [3].
The FCTC solidified tobacco use as a public health epidemic [1,3,4]. As evidence continues to accumulate about the global impact of tobacco consumption on non-communicable diseases (NCDs), efforts to include tobacco control have increased. The FCTC requires all participating countries to reduce this impact through various initiatives, including national programs on tobacco control; measures to protect people from second-hand tobacco smoke in public places; health warnings on tobacco products; restrictions on tobacco advertising; and prohibition of sale of tobacco products to minors [5]. Tobacco companies joined forces to oppose the FCTC and countries' implementation of tobacco control policies, including proposing alternative language, actively lobbying against the framework, employing deception, and selectively marketing and promoting products to maintain the social acceptability of tobacco use [3,[6][7][8]. Some of the most devastating impacts of tobacco prevalence have been witnessed in sub-Saharan Africa, where developing nations are still struggling to fund a response to HIV and AIDS and other infectious diseases.
The history of the tobacco crop in sub-Saharan Africa illuminates its complex role today. In the early twentieth century, a rise of African fire-cured tobacco production in the central region increased the number of Africans participating in share-cropping contracts with Europe [9,10]. These agreements yielded financial gains for the farmers that some African countries still rely on today. Currently, Malawi is one of only two countries in the world that depend on tobacco leaf production for most of its export earnings [10]. Tobacco industries have used this reliance to their advantage in responding to a growing number of regulations and control initiatives. For example, in response to early WHO tobacco control programs at the beginning of the twenty-first century, the International Tobacco Growers Association lamented that poor African farmers [9] would suffer if tobacco regulation was successful [6]. An additional element of tobacco use in sub-Saharan Africa is its rapid growth, which challenges governments to keep up with regulations to control and tax its use in the interest of protecting the public. Between 1995 and 2000, cigarette consumption increased by 38% in Africa [11]. By 2030, it is projected that 70% of the estimated 10 million global deaths from tobacco will occur in developing countries such as those in sub-Saharan Africa [11].
The WHO's "best buys" delineate specific, low-cost, population-level interventions that, if scaled up, could reduce harmful tobacco and alcohol use, as well as unhealthy diets and physical inactivity [12]. These "best buys" hold specific promise for low- and middle-income countries such as many of those in sub-Saharan Africa. WHO "best buy" interventions for tobacco are, briefly, the creation of policies for tax increases on tobacco products, smoke-free indoor workplaces and public places, bans on tobacco advertising, promotion and sponsorship, and health information and warnings. Significant examinations of global tobacco policy (e.g., the International Tobacco Control Policy Project [13]) have generally not included African countries, and given recent findings that policy implementation of the FCTC demand reduction measures is associated with reduction in tobacco use [14], it is especially important to assess the adoption of tobacco policies in Africa [11]. This paper describes how six sub-Saharan countries responded to the FCTC call for action on tobacco control by detailing the context, timelines, key actors, and strategies in the formulation and implementation of policies in response to the FCTC. Understanding these elements can provide insight for accelerating international mobilization to reduce and prevent NCD prevalence. In each country we present (a) the tobacco situation (production and use), (b) the FCTC adoption process (year of ratification, the period taken to develop a policy, and what influenced the process), (c) the actors and industry involvement, and (d) implementation of FCTC policies and WHO best buys for tobacco control.
Methods overview
The Analysis of Non-Communicable Disease Prevention Policies in Africa (ANPPA) study [15] employed a multiple-case study design [16] to assess policy and practice for all WHO "best buy" interventions on tobacco use, unhealthy diet, physical inactivity, and harmful alcohol use in six sub-Saharan countries: Kenya, Malawi, Nigeria, Cameroon, South Africa, and Togo. The ANPPA study was designed to generate evidence on how-and the extent to which-multi-sectoral action informs policy formulation and implementation of NCD prevention "best buy" interventions. Walt and Gilson framework of policy analysis [17] was used to guide ANPPA. The framework acknowledges the non-linearity of the policy process as well as the incremental nature of policy-making. Walt and Gilson's framework focuses on four factors: policy (a) content, (b) actors, (c) processes, and (d) context [17].
This paper reports only on data collected on tobacco. Each country, therefore, becomes a case with its own unique approach to ratifying the FCTC and establishing policies to support tobacco control. For each case, we apply the same methods to identify differences in each country's processes. Each case includes two primary sources of data: (1) a review of documents related to the policy formulation process and (2) key informant interviews with informants who either participated or should have participated in the policy process.
The ANPPA study was coordinated by the African Population and Health Research Center (APHRC). See Juma et al. [15] for more information on the application process and study teams.
Document reviews
Teams conducted document reviews to describe the policy context and content, identify existing policies for their consistency with WHO "best buy" interventions, and understand the policy development processes and implementation status. Policy documents included were those that focused on NCD prevention (including acts and laws, strategic plans, guidelines, and government directives), reviews and case studies of multi-sectoral action (MSA) in successful policy formulation and implementation at a national level. Examples of policy documents included are: ministry website materials such as policy documents, strategic plans, program plans, guidelines, protocols; parliamentary records, or debates; local print media for references to policy changes, often as part of speeches by government officials; meeting minutes, activity reports, and drafts of policy statements, internal and external memos, meeting agendas, and other communications; academic journal articles; and relevant donor or non-governmental organization and development partner websites for NCD program reports. Researchers extracted data from documents including the years in which relevant policy changes occurred and the events leading up to those decisions. Some key documents date back to the 1970s (e.g., national plans and reports).
Key informant interviews
Key informant interview participants were selected based on their expected or actual role in each country's NCD policy formulation and implementation. Participants were selected using a combination of purposive and snowball sampling [18]. First, a broad segment of sectors (e.g., health, education, finance) and institutions (e.g., ministry officials, directors) were identified for inclusion. Next, appropriate individuals within those sectors and institutions were identified to purposively include both government and non-government (e.g., community organization, industry) actors. After key informants were identified, researchers asked them to identify additional prospective study participants who had knowledge of policy formulation and implementation. Participants were contacted through an initial telephone or email contact, and a total of 202 were interviewed across six countries [19].
APHRC and the study teams collaboratively developed interview guides during the first methodology workshop. Interview guides were informed by Walt and Gilson's framework of policy analysis [17] and included questions for each of the four key "best buy" interventions, including the context in which the policy was developed, the policy content, actors involved in the process, and the implementation status of each policy. In addition, questions addressed how MSA was employed (or not), the processes undertaken to ensure MSA, the challenges encountered, what worked, and what did not work. During field-worker training, each team piloted the guide to obtain feedback on the questions and interview structure, and the interview guide was revised based on feedback from the pilots. Each country then used the final interview guide with minor adjustments to fit their context if necessary.
Prior to the interview, the interviewer explained the purpose of the study, risks and benefits to participating, the right to withdraw at any time without penalty, and confidentiality; participants provided verbal or written documentation of consent to participate and to be digitally recorded.
Transcripts were uploaded into the qualitative data management software NVivo and coded using a codebook based on Walt and Gilson's framework of policy analysis [17] and the framework method of qualitative analysis, a type of thematic or content analysis that guides the application of a framework to analyse qualitative data [20]. Data associated with tobacco policy formulation from all six countries were analysed for this multiple case study.
Results
The results section summarizes each country's activities and policies, factors shaping policy development and implementation, key actors, and quality of policies with regard to key informants' perceptions about the role of the FCTC in each country. WHO "best buy" interventions are listed in Table 1, with countries' timelines for FCTC ratification and policy adoption in Table 2.
Cameroon
Timeline and resulting policy
Cameroon's first tobacco control policy was formulated in 1988 [21]. After the World Health Assembly adopted the FCTC in May 2003, but prior to its formal ratification, the government decided to spearhead a national tobacco control program. In 2004, the Ministry of Public Health created a multi-sectoral expert platform for exchanges on tobacco control called "GROUPE" by ministerial decision No. 00615/D/MSP/DPS of 11 February 2004 [22]. This group included experts from different ministerial and administrative departments [23]. In February 2006, Cameroon ratified the FCTC, leading to the convening of the first national multi-sectoral meeting on tobacco under the Prime Minister's auspices, with the goal of identifying and developing policies on tobacco use. Although this meeting led to individual sectors implementing smoke-free zones, Cameroon did not develop a comprehensive national policy for smoke-free zones. The "GROUPE" later developed a comprehensive national draft tobacco control law drawing on the FCTC.
Factors shaping policy ratification
The Cameroonian government still grants large subsidies to Cameroon's tobacco farmers, and tobacco is a primary crop. Although the formulation of tobacco control policies was driven by epidemiological data on tobacco use, with an emphasis on the problem of smoking and delinquency among youths and in school settings, the health sector's strong engagement was met with low visibility among civil society organizations and a lack of political will in other government sectors to implement a unified national policy.
Key players
The major actors involved in formulating and implementing tobacco use prevention policies were the ministerial departments of Health, Trade, Education, Communication, and Finance. Although one NGO, LUTOMA (Association for the Fight against Drug Addiction and Mental Illness), provided data on the adverse effects of smoking and delinquency among youths in secondary school settings, civil society organizations, NGOs, and academics had low visibility overall. Industry was generally not engaged in formulating tobacco control policies, but rather in discussions on raising tobacco taxes. Where necessary, the implementation process was discussed with industry, specifically for the policy on health information and warnings on tobacco packages.
Quality of the policy
Cameroon's process led to the formulation of 12 tobacco control policies which incorporated all the tobacco use prevention "best buy" interventions: tax increases on tobacco products, smoke-free indoor workplaces and public places, bans on tobacco advertising, promotion and sponsorship, and health information and warnings. Cameroon's response to the FCTC call for action on tobacco control has been positive and addresses some "best buys," but is still insufficient because a more comprehensive and integrated tobacco control law remains to be enacted. The draft tobacco control law has been pending at the Presidency of the Republic since 2012, and study participants reported that this has been a strong hindrance to fully implementing tobacco control policies in the country. Furthermore, participants indicated that the government's tobacco farmer subsidies, which are passed through Parliament, are likely contributing to the delay in passing the comprehensive law [24]. Cameroon has at least one policy for each of the FCTC best buys: protect people from tobacco smoke, enforce bans on tobacco advertising, warn about the dangers of tobacco, increase taxes on tobacco, ban school sales, ban school use, and prevention activities (See Table 2).
Kenya
Timeline and resulting policy
In Kenya, tobacco control efforts started in 1992 when Kenya first participated in the World Tobacco Day campaigns. The first tobacco control bill was drafted in 1998, well before the FCTC, but since it was a pre-FCTC bill, it did not fully address the WHO "best buy" interventions. In 2003, the focus shifted to ratification of the FCTC. In 2004, Kenya made history by being the second country (after Norway) to ratify and sign the WHO FCTC on the same day. The signature and ratification played a major role in adopting the Tobacco Control Act applying the FCTC recommendations to Kenya, which was passed in 2007 [25].
Factors shaping policy ratification
Tobacco control policy formulation in Kenya was largely driven by the WHO's promotion of the FCTC, which catalysed the process. However, strong local evidence on the economic cost of tobacco provided the needed impetus for actors in different sectors to advocate for the policy. Similarly, the availability of evidence in the public domain on the harmful effects of tobacco growing and production on the environment also contributed to the holistic argument for control. Noteworthy is the strong political influence that provided the needed push for the signing and ratification of the FCTC [25]. Therefore, a combination of strong global, local, economic, and political factors contributed to Kenya's formulation of the tobacco control policy.
Key actors
Key actors in the Kenyan tobacco policy process were the Ministry of Health, the Tobacco Control Board (which consists of various sectors relevant to tobacco control, including the Ministry of Health) and civil society organizations. The policy process was a consultative approach involving multiple government sectors and other stakeholders, including the private sector. Tobacco industry interference has played a key role in delaying the formulation of some key elements and in the poor implementation of others.
Policy quality
Together, Kenya's policies on tobacco control are comprehensive and address all of the FCTC "best buy" interventions of tax increases; smoke-free public spaces; ban of tobacco advertising, promotion, and sponsorship; and health information and warnings. Implementation is still in the early stages. Moving forward, continuous engagement with all relevant stakeholders and allocating adequate resources (both human and financial) to support the process will be critical in the implementation stages. Kenya has at least one policy for most of the FCTC best buys: protecting people from tobacco smoke, enforcing bans on tobacco advertising, warning about the dangers of tobacco, and increasing taxes on tobacco, but does not have policies for banning school sales, banning school use, or prevention activities (See Table 2).
Nigeria
Timeline and resulting policy
The development of tobacco control policies and legislation in Nigeria dates to its 1951 revenue allocation document on licensing and controlling tobacco importation [26]. The first significant effort at controlling tobacco use for public health benefits was the Tobacco Smoking (Control) Decree 20 of 1990 by the military government; the decree included measures for protection from second-hand smoke, health warnings and labels, and enforcing bans on advertising and promotion, but not taxation of tobacco products. With the transition of the Nigerian government to democratic rule in 2000, the document's name was changed to the Tobacco (Control) Act 1990 in line with political conventions that prohibited governance through decrees in democratic dispensations; the policy content remained unchanged, however, and guided tobacco control in Nigeria for more than two decades. The Act was weak and poorly implemented [26,27], and government actions on tobacco control were inconsistent. For instance, although the Act emphasizes reduction in tobacco use for public health, the government still supported increases in tobacco growing: in 2001, the Nigerian government signed a memorandum of understanding with British American Tobacco Nigeria to build potential for regional export and significantly increase the quantity and quality of locally grown tobacco [28]. Nigeria signed the FCTC on 20 October 2005, further driving interest in tobacco legislation. In 2009, civil society organisations and several elected officials advocated for a National Tobacco Control bill [29]; after two years of consideration, the National Tobacco Control bill was passed at the Senate and House of Assembly but the presidency failed to assent to the bill. Subsequently, another version of the bill was facilitated by the Federal Ministry of Health and passed as an Executive Bill to the Federal Executive Council and Senate for approval. It was eventually approved by Nigeria's outgoing president, Goodluck Jonathan, as the National Tobacco Control Act 2015 after mounting pressure from tobacco control advocates and despite stiff resistance from the tobacco industry. Indeed, participants indicated the outgoing president decided to sign the bill into law a few days before his exit to appease the tobacco advocacy groups and other stakeholders and avoid a reprisal from the tobacco industry [30].
Factors shaping policy ratification
Although there was agreement and strong advocacy among many civil society organisations, senators, elected officials, and others regarding the need for tobacco control legislation, counter-lobbying by the tobacco industry [30][31][32] and objections from some government ministries/agencies that felt they should lead the tobacco control efforts created dissension that impacted policy development and ratification. Indeed, the final tobacco control act was only signed by an outgoing president as he was leaving office.
Key players
In 2009, civil society organisations advocated for a National Tobacco Control bill that was sponsored by Senator Olorunnimbe Mamara. Professor Babatunde Osotimehin, former Minister of Health, and Senator Jibrin Aminu spoke publicly in support of the proposed legislation [29]. The bill also received strong support from international and Nigerian civil society groups such as the Environmental Rights Action/Friends of the Earth, Nigeria; the Nigerian Tobacco Control Alliance; and the Coalition Against Tobacco.
Quality of the policy
The National Tobacco Control Act 2015 is a comprehensive, FCTC-compliant legal instrument that addresses all tobacco "best buy" interventions as well as other measures relating to reducing tobacco demand and supply and related matters on tobacco product specifications [33]. Several tobacco-related policy documents developed after FCTC ratification are the Nigerian National Policy and Strategic Plan of Action on NCDs, developed by the Federal Ministry of Health in 2013 and reviewed in 2015, and the 2014 Standard for Tobacco and Tobacco Products-Specifications for Cigarettes, which was developed by the Standards Organization of Nigeria. These documents align with the stakeholder involvement provisions and measures outlined in the FCTC. One exception is the 2014 Standard for Tobacco and Tobacco Products-Specifications for Cigarettes, which was developed with the involvement of the tobacco industry, contrary to FCTC recommendations. Nigeria has at least one policy for each of the FCTC best buys: protect people from tobacco smoke, enforce bans on tobacco advertising, warn about the dangers of tobacco, increase taxes on tobacco, ban school sales, ban school use, and prevention activities (see Table 2).
Malawi
Timeline and resulting policy
Apart from the overarching NCD prevention-related policy documents [37,38], Malawi has no specific tobacco control policies that recognize tobacco as an underlying cause of NCDs or that call for tobacco control regulation as a way to improve public health. The Tobacco Act (1970) (last amended in 1990) and the Tobacco Industries Bill (2012) only regulate tobacco production and sales through enhancement of agricultural practices for tobacco production and through licensing of tobacco growers, transporters, sellers, and buyers [39,40].
Factors shaping policy ratification
The Malawi economy is highly reliant on tobacco, which is considered a "strategic crop" [34]. There are several barriers to the ratification of the FCTC. Tobacco farmers and industry officials oppose tobacco control. Participants indicated that there is a perception by the government that ratifying parts of the FCTC to improve health (e.g., limiting exposure to tobacco smoke) will compel them to implement all aspects of the FCTC, including Articles 17 and 18, which discourage support for tobacco farming, and that this will reduce tobacco production and negatively impact the national economy. Further, Malawi's low prevalence of smoking and high tobacco exports led some to minimize tobacco as a major public health problem warranting legislation and policies like the FCTC. Participants also cited the influence of the tobacco industry as a barrier to FCTC ratification and implementation.
Key players
There is some support for ratification of the FCTC from advocacy groups, including the Tobacco Tenants and Allied Workers Union of Malawi, which wrote to the president in support of reducing reliance on tobacco crops [38], and Drug Fight Malawi, an NGO that attends international meetings on the FCTC and advocates within Malawi for tobacco control. While some government actors (e.g., the NCDs program at the Ministry of Health) favor tobacco control for promoting public health, others (e.g., the Tobacco Control Commission) prioritize tobacco farmers' livelihoods and country revenue from tobacco taxes over public health. Study participants, mostly from the health sector but also including some tobacco industry representatives and the Tobacco Control Commission, unanimously agreed on the need to ratify the FCTC. They noted that ratification would allow Malawi's voice to be heard and to "fight [for tobacco control] from within."
Quality of the policy
Malawi has not yet acceded to or ratified the FCTC [2]. Even bills drafted after the FCTC came into force as a global treaty (e.g., the Tobacco Industries Bill of 2012) have no interventions intended to limit tobacco production or use. Thus, Malawi policies are not in line with FCTC requirements and specifically do not respect FCTC Articles 17 and 18 that require countries to focus on "provision of support for economically viable alternative activities for tobacco growers" and "protection of the environment and the health of persons," respectively [1]. Study participants confirmed that Malawi has no specific public health-related tobacco control policies. Some sectors have attempted to use other laws to lobby for implementing some of the tobacco control interventions consistent with the FCTC requirements. For example, one civil society organization attempted to compel the government to ban public smoking by citing the Environmental Management Act 1996 [37]; the courts referred it back to the government executive branch, which said it was in the process of developing a public smoking policy. However, the mentioned policy had not been seen by any of the study participants by the time of data collection. Some participants intimated that Malawi could soon ratify the FCTC and indicated this will likely be advantageous to tobacco control since the majority of the FCTC articles might be implemented without much contention; for the two contentious articles (FCTC Articles 17 and 18), Malawi could learn from experiences of other countries that have ratified the convention but are still in the transition period to reduce overreliance on tobacco crops.
South Africa
Timeline and resulting policy
On World No Tobacco Day in 1993, Nelson Mandela called on the apartheid government under F.W. de Klerk to pass tobacco control legislation. In response, the Tobacco Control Act was passed in 1993, but, due to the then-apartheid government's strong ties with the tobacco industry, the Act did not result in major shifts in tobacco smoking. Although the Act was passed in 1993, it was under the democratic government in 1994 that the Tobacco Control regulations were drafted [41,42]. With the introduction of democracy in 1994, the African National Congress (ANC) brought both a notable distance from tobacco industry influence and specific health champions in the form of President Nelson Mandela and Health Minister Dr. Nkosazana Dlamini-Zuma, who showed serious commitment to public health, including tobacco control. Over the next few years, the ANC called for smoke-free cabinet meetings, called out tobacco companies that failed to display health warnings clearly on cigarette packs, introduced a 50% tax on the retail price of cigarettes, and in 1999 amended the Tobacco Products Control Act to outlaw smoking in public buildings, ban tobacco advertising in all its various forms, and make it illegal to sell cigarettes without health warnings on the pack.
Factors shaping policy ratification
South Africa was at the forefront of African countries in implementing strict tobacco control policies. The ANC's rise to power and its recognition of tobacco as a racial equity issue were a strong counterforce to tobacco industry efforts to weaken control policies.
Key players
The ANC produced substantial pressure to improve tobacco control policies, as a matter of racial equity. Consistent tobacco industry opposition resulted in a weaker version of the Tobacco Products Control Act in 1993, with multiple amendments to attempt to strengthen it in 1999, 2003, 2007, and 2008 [41].
Quality of the policy
So far, South Africa has put in place the legislative framework and has implemented the WHO "best buy" interventions. The Tobacco Control Act and subsequent amendments prohibit smoking in indoor workplaces, washrooms, and other places frequented by employees. These provisions do not prohibit smoking in private dwellings, which is a notable gap, given that many women are domestic workers in private homes and remain unprotected from second-hand smoke exposure [42]. Subsequent amendments prohibited smoking in public places and in vehicles with children as passengers, as well as the advertising and promotion of tobacco products. Amendments also restricted the distribution of tobacco products and advertising at the point of sale. Tobacco manufacturers were required to have clearly printed health warnings on the cigarette packets. South Africa has at least one policy for most of the FCTC best buys: protect people from tobacco smoke, enforce bans on tobacco advertising, warn about the dangers of tobacco, increase taxes on tobacco, and ban school sales, except for banning school tobacco use and implementing prevention activities (see Table 2).
Togo
Timeline and resulting policy
The development of tobacco control policies and legislation in Togo dates to the country's 2009 public health law, Articles 89 to 93 of which relate to the fight against social scourges, including harmful use of alcohol, tobacco use, substance abuse, and prostitution. This law, however, focused largely on regulating (but not banning) tobacco advertising (Article 90), warning about the dangers of tobacco (Article 91), and banning smoking in public places (Article 92). The law did not address raising taxes on tobacco and was not strongly enforced. Significant efforts at controlling tobacco use started with the ratification of the FCTC on 15 November 2005.
Factors shaping policy ratification
Togo is a low-income country with few tobacco retailers. Participants said that because Togo is neither a tobacco producer nor manufacturer, it was relatively easy for policy-makers to use available evidence in the public domain on tobacco's harmful effects to achieve convergence about the nature of the problem, policy, and politics and to convince the government to act for tobacco control. Togo's policies aim to improve the health of vulnerable groups, to reduce health gaps between the most and least vulnerable groups, and to flatten the social gradient in health across the entire population.
Key players
The Ministry of Health played a key role in tobacco policy implementation. Participants reported that the tobacco industry strongly opposed tax increases on tobacco products and said the industry repeatedly attempted to delay or dissuade policies' enactment and implementation.
Quality of the policy
Togo passed comprehensive national legislation, but it is not fully WHO FCTC compliant because fully compliant health warnings were removed from the version of the law parliament approved, although there is some policy for health warnings. Togo has at least one policy for most of the FCTC best buys: protect people from tobacco smoke, enforce bans on tobacco advertising, warn about the dangers of tobacco, and increase taxes on tobacco, except for banning school sales, banning school tobacco use and implementing prevention activities (see Table 2).
Discussion
Countries varied widely on their timelines for addressing tobacco control, with the earliest tobacco-related policies in the mid-twentieth century (e.g., Nigeria, 1951; Malawi, 1970) and others developing policies only around the time of the FCTC in the mid-2000s (Togo). Earlier attention to tobacco-related policies was often more about commerce and agriculture than public health and did not relate to the country's implementation of FCTC policies (e.g., Malawi has not yet ratified the FCTC). Most countries developed piecemeal legislation starting in the 1990s and early 2000s, with the FCTC providing a strong boost as countries began to ratify it in the mid to late 2000s. The process of adopting tobacco policies that focus on public health and are consistent with "best buy" interventions was greatly motivated by FCTC adoption. It brought significant international focus on tobacco; countries' treaty signing provided additional impetus to approve legislation so as not to be seen as falling behind on the world stage.
Countries demonstrated high variability in their socio-political contexts: (a) tobacco was a significant cash crop that contributed to national employment and revenue (Cameroon, Malawi), (b) leadership demonstrated connections to or interests in tobacco industries that reduced political will for tobacco control (Nigeria, apartheid South Africa, Togo), (c) limited resources were available for NCD prevention, given countries' political upheaval or communicable disease challenges (Cameroon, Kenya, Malawi, Nigeria, South Africa, Togo), (d) specific high-profile champions in government advocated for tobacco control (South Africa, Kenya, Nigeria), and (e) tobacco industries and their interests were strongly against tobacco control legislation (Cameroon, Kenya, Malawi, South Africa, Nigeria). This variability in contexts also influenced the countries' success in implementing FCTC and WHO "best buys" on tobacco control.
Multiple actors engaged in deliberations about tobacco policies [19]. Most countries' health ministries led the process of evaluating FCTC guidelines and preparing policies with other ministries, NGOs, civil society organisations, and academics [23,25,41,43,44]. These groups generally supported tobacco control policies. The extent to which stakeholders were involved varied from nominal involvement to providing data, testimony, or lobbying for country-specific measures.
Countries struggled with whether or how to involve the tobacco industry in their selected stakeholder meetings; for instance, in South Africa, the constitution requires stakeholder representation, and the tobacco industry sued to require the government to recognize its inclusion as a stakeholder [41]. All countries reported significant interference from the tobacco industry in enacting tobacco control policies, with some countries also having labour groups or tobacco farmers opposing tobacco control policies (Malawi, Nigeria). This is similar to industry efforts to derail the FCTC prior to enactment and in low-and middle-income countries since FCTC enactment [7,8].
Most countries have addressed all four WHO "best buy" interventions, although not necessarily to the extent the FCTC recommends. Countries have been delayed by internal political challenges, conflicts of interest among leadership, tobacco industry interference, and limited resources. This is similar to other reports of progress among low-and middle-income countries that have struggled with implementing "best buy" interventions, especially related to tobacco [4,5,29].
Overall, the WHO FCTC has been enormously useful in reducing tobacco use and its health effects. Despite significant tobacco industry opposition, it remains the only international health treaty and has demonstrated success in reducing NCDs internationally. It is highly advisable to continue to improve both the science and practice of country-level implementation of international health treaties. Recommendations include: (a) how best to implement multi-sectoral action to ensure relevant stakeholders are included and their needs considered, (b) how politicians and civil service organizations can address tobacco industry engagement and interference, (c) how to address within-country conflicts of interest, such as balancing needs related to tobacco's benefits for farmers, tax revenues, and public health, and (d) how to expand the reach of international health organizations to implement more treaties for improving global health.
Conclusions
The WHO Framework Convention on Tobacco Control provided an unprecedented opportunity for an international governing organization to influence tobacco control interventions that lead to non-communicable disease prevention worldwide. Reviewing how six sub-Saharan countries responded to the treaty to mobilize resources and implement tobacco control provided insight for how to accelerate international mobilization to prevent noncommunicable disease.
"year": 2018,
"sha1": "9f2f18daaef17cb5a873ece0f937ad7d01ea74bb",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/s12889-018-5827-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9f2f18daaef17cb5a873ece0f937ad7d01ea74bb",
"s2fieldsofstudy": [
"Political Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Application of Portable Air Purifiers for Mitigating COVID-19 in Large Public Spaces
This study investigated, using validated computational fluid dynamics techniques, the actual performance of portable air purifiers for reducing the infection risks of airborne respiratory diseases such as COVID-19, by properly installing air purifiers in complicated large public spaces of primary concern, such as restaurants and ballrooms. The research results reveal that portable air purifiers with HEPA filtration provide an effective solution to help mitigate virus-carrying particles/droplets in large spaces where the central air conditioning system with HEPA filtration may not provide adequate dilution and/or effective ventilation. Deploying portable air purifiers changes the local flow directions and thus reduces the cross-table airflows that may increase the possibility of cross-infection. A field experiment was further conducted in a restaurant and a ballroom to verify the on-site performance. This study indicates that each space is unique in its furniture, occupant, and system layouts and capacities, and thus requires an individualized investigation of the appropriate purifier number, capacity, and locations. Flexible solutions such as portable air purifiers are important and low-cost supplements to more elaborate solutions installed in central air systems.
Introduction
In less than two years since the coronavirus disease 2019 (COVID-19) emerged, the pandemic has changed how we live, work, study, and interact as social distancing guidelines have led to a more virtual world, both personally and professionally. This became a more severe concern when scientific data supported the possibility that the COVID-19 virus may be transmitted by aerosols (so-called airborne transmission) [1]. It was highlighted that aerosol transmission may more likely occur in particular environments, such as indoor, crowded, and inadequately ventilated spaces, where the infected person(s) spend a relatively long period of time with others (e.g., over 30 min), such as classrooms, restaurants, choir practices, fitness classes, nightclubs, offices, and places of worship. Therefore, how to reduce the infection risks of airborne respiratory diseases in a public place becomes an urgent task for epidemiologists and public health experts.
The results of the survey study of Fabisiak et al. [2] indicated that 55% of respondents would be afraid to use public spaces after the COVID-19 lockdown. Restrictions on the use of public spaces and physical distancing have been key policy measures to reduce the transmission of COVID-19 and protect public health [3]. Despite the concerns and difficulties faced throughout the pandemic, researchers committed to investigating new approaches to infection prevention have found promising solutions in their studies. Air purifiers, as traditional air pollution purification equipment, exhibit an excellent potential to reduce the infection risk of airborne diseases.
Methodologies
The actual performance of air purifiers can be determined through either a physical test or a validated numerical experiment. While providing first-hand data, a physical test in a real space is often challenging, due to many uncontrollable variables, especially for pollutant/virus-related studies. A validated numerical experiment using computational fluid dynamics (CFD) techniques provides a great alternative and is applied widely in the field. This study simulated the airflow pattern and contaminant transport in the spaces using a steady-state RANS (Reynolds-averaged Navier-Stokes) method. The RNG k-ε turbulence model in the commercial CFD code ANSYS Fluent 19.2 [10,11] was used to represent turbulence characteristics. The particle simulation assumed monodispersed, non-interacting spherical particles. The momentum transfer from the particles to the air turbulence has a negligible impact on the flow [12][13][14]. Equations (1) and (2) describe the governing equations of the fluid phase [12]. The buoyancy effect of air is modeled with the Boussinesq approximation.

$$\frac{\partial v_j}{\partial x_j} = 0 \quad (1)$$

$$\frac{\partial (v_i v_j)}{\partial x_j} = -\frac{1}{\rho}\frac{\partial p}{\partial x_i} + \frac{\partial}{\partial x_j}\left(\mu \frac{\partial v_i}{\partial x_j} - \overline{v_i v_j}\right) + g_i \quad (2)$$

where $v_j$ is the velocity component in the three perpendicular coordinate directions ($x_j$, j = 1, 2, 3), m/s; ρ is the air density, kg/m³; p is the air pressure, Pa; µ is the kinematic viscosity coefficient of air; $\overline{v_i v_j}$ is the Reynolds stress tensor. When tracking the virus-carrying particles (solid phase), the effects of the air drag force, gravity, and buoyancy were taken into account. The discrete particle model in Fluent (CFD-DPM) was used to track the individual particles [13]. The flow-governing equations of a single solid-phase particle are described by Equations (3)-(6) [14]:

$$m_p \frac{d\vec{v}_p}{dt} = \vec{F}_D + \frac{m_p\,\vec{g}\,(\rho_p - \rho)}{\rho_p} \quad (3)$$

$$\vec{F}_D = m_p\,\frac{18\mu}{\rho_p D_p^2}\,\frac{C_D Re_p}{24}\,(\vec{v} - \vec{v}_p) \quad (4)$$

$$Re_p = \frac{\rho D_p \left|\vec{v}_p - \vec{v}\right|}{\mu} \quad (5)$$

$$C_D = a_1 + \frac{a_2}{Re_p} + \frac{a_3}{Re_p^2} \quad (6)$$

where $\vec{F}_D$ is the air drag force acting on the particle, N; the second term on the right side of Equation (3) represents gravity and buoyancy; $m_p$ is the mass of the particle, kg; $\vec{v}_p$ is the velocity vector of the particle, m/s; $\rho_p$ is the particle density, kg/m³; $\vec{v}$ is the air velocity vector, m/s; $D_p$ is the particle diameter, m; µ is the dynamic viscosity, N s/m²; $a_1$, $a_2$, and $a_3$ are coefficients determined by $Re_p$.
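For the few-micron particles considered here, Equations (3)-(6) reduce approximately to a Stokes-drag relaxation of the particle velocity toward the local air velocity plus a small settling term (since $C_D Re_p/24 \approx 1$ at low $Re_p$). The following Python sketch is not part of the original study; it only illustrates, with generic air properties and the 3 µm, 600 kg/m³ particle listed in Table 2, why such particles essentially follow the airflow.

```python
import math

# Illustrative air properties (assumed values, not from the paper); the particle
# diameter and density follow Table 2 of the study.
rho_air = 1.2        # air density, kg/m^3
mu_air = 1.8e-5      # dynamic viscosity of air, N s/m^2
rho_p = 600.0        # particle density, kg/m^3
d_p = 3e-6           # particle diameter, m
g = 9.81             # gravitational acceleration, m/s^2

# Stokes relaxation time tau = rho_p * d_p^2 / (18 * mu), valid for small Re_p
tau = rho_p * d_p**2 / (18.0 * mu_air)

# Terminal settling velocity from Eq. (3) with dv_p/dt = 0 and Stokes drag
v_settle = tau * g * (rho_p - rho_air) / rho_p

print(f"relaxation time   tau      = {tau:.2e} s")        # ~1.7e-5 s
print(f"settling velocity v_settle = {v_settle:.2e} m/s")  # ~1.6e-4 m/s

# Explicit-Euler integration of Eq. (3) for a particle released at rest into a
# horizontal air stream of 2.77 m/s: it reaches the air speed within a fraction
# of a millisecond.
v_air, v_p, dt = 2.77, 0.0, 1e-6
for step in range(200):
    v_p += dt * (v_air - v_p) / tau
print(f"particle speed after {200*dt*1e3:.1f} ms: {v_p:.2f} m/s")
```

The very short relaxation time and the roughly 10⁻⁴ m/s settling speed are consistent with the later observation that particles of a few microns tend to travel with the air rather than drop out quickly.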
Validation of the CFD Model
Exhaled air of an infected person is the primary source of contagious viruses. Exhaled air comes from various respiratory-related activities, such as breathing, coughing, singing, and talking. Accurate information on the release and diffusion process of exhaled airflow and particles is, thus, critical for the precise prediction of infectious disease transmission. This study focused on the talking scenario, the main concern during dining and ballroom events, when facial masks cannot be used. Studies showed that talking may release 2600 droplets per second at a speed of 1-5 m/s [15]. The range of the total airflow rate from a mouth when speaking is about 284-759 cm³/s [16]. Using an average of 500 cm³/s and assuming a mouth opening area of 1.8 cm² [16] leads to an average talking airflow speed of 2.77 m/s, which is consistent with the particle image velocimetry (PIV) test result on the order of 3.1 m/s [17].
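As a quick check of the arithmetic behind that figure (no new data, just the flow rate and mouth area quoted above, with an assumed cm-to-m unit conversion):

```python
flow_rate_cm3_s = 500.0   # average exhaled flow rate while talking, cm^3/s
mouth_area_cm2 = 1.8      # assumed mouth opening area, cm^2

# velocity = volumetric flow rate / opening area, converted from cm/s to m/s
velocity_m_s = (flow_rate_cm3_s / mouth_area_cm2) / 100.0
print(f"{velocity_m_s:.2f} m/s")  # ~2.78 m/s, matching the ~2.77 m/s used in the study
```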
This study first conducted a test, via both physical experiment and CFD simulation, in a calm lab environment using a single-table case with four occupants, including one infected person (Manikin 1) who intermittently talks, as shown in Figures 1 and 2. Table 1 lists the test case geometries. The thermal manikins, of 70 W each, have an average head/face temperature of 27 °C and an average body (with clothes) temperature of 22 °C. The space has no active air conditioning system, and the ambient room air, wall, ceiling, floor, and table surface temperatures are kept at 19 °C.

The mock-up physical experiment simulated the virus pollutant emitted from the mouth (with an inner size of 1.37 cm) of Manikin 1 by using stage fog (water-glycerin mixture) as the tracer gas. The exit velocity of the fog from the mouth was controlled at around 2.77 m/s, representing a scenario where people are talking loudly. The supply time of the fog from the mouth is 20-25 s in the experiment and each test runs 2-5 min. A commercially available portable air purifier with a clean air delivery rate (CADR) of 166 m³/h was tested, with the most effective placement location as shown in Figures 1 and 2. The sizes of the portable air purifier are listed in Table 1. Figure 2 illustrates the created CFD model for the single-table case.
The virus particle details simulated in CFD are presented in Table 2. The number of particles released from the mouth was assumed to be 5000 to ensure that the deviation of the particle statistical results is less than 1% [18]. Particles larger than 10 micron tend to drop quickly, while smaller particles tend to flow with the air. The mean particle size during talking was 3 micron, which was thus used in the CFD model [19].

Table 2. Particle release conditions [16,20].
Open area of the talking mouth: 1.8 cm²
Airflow rate from the talking mouth: 500 cm³/s
Average talking airflow speed: 2.77 m/s
Temperature of the airflow from the talking mouth: 27 °C
Aerodynamic diameter of the particles: 3 µm
Density of the particles: 600 kg/m³
Number of particles released from the talking mouth: 5000

The CFD simulation first performed a grid sensitivity analysis to ensure the independence of the numerical results. Three numerical grids, of 410,000, 770,000, and 1,000,000 cells, were tested and compared using the normalized root mean square error (NRMSE) of the velocity at the pole shown in Figure 2 (30 points). The NRMSE of the velocity at the pole, respectively between the 410K and 1000K grids and between the 770K and 1000K grids, is shown in Figure 3a. There is generally a great improvement in the error with the 770K grid, and the computational uncertainty is overall below 15%. Balancing computational accuracy and cost, the 770,000-cell grid was thus chosen for the simulation. Among the 770,000 cells, fine grids were allocated around the mouths of the occupants, as shown in Figure 3b.
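As a concrete illustration of the grid-independence metric, the sketch below computes one common form of the NRMSE between velocity profiles sampled at the same 30 pole points on two grids. The velocity values are placeholders rather than the study's data, and the normalization by the reference-grid range is an assumption, since the paper does not spell out its exact definition.

```python
import numpy as np

def nrmse(v_coarse: np.ndarray, v_ref: np.ndarray) -> float:
    """Normalized root mean square error between two velocity profiles.

    The RMSE is normalized here by the range of the reference (finest-grid)
    profile; normalizing by its mean or maximum is also common.
    """
    rmse = np.sqrt(np.mean((v_coarse - v_ref) ** 2))
    return rmse / (v_ref.max() - v_ref.min())

# Placeholder velocity magnitudes (m/s) at 30 points along the sampling pole.
rng = np.random.default_rng(0)
v_1000k = 0.2 + 0.05 * np.sin(np.linspace(0, np.pi, 30))   # reference (finest) grid
v_410k = v_1000k + rng.normal(0, 0.006, 30)                 # coarse grid
v_770k = v_1000k + rng.normal(0, 0.002, 30)                 # medium grid

print(f"NRMSE 410K vs 1000K: {nrmse(v_410k, v_1000k):.1%}")
print(f"NRMSE 770K vs 1000K: {nrmse(v_770k, v_1000k):.1%}")
```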
The mock-up experiment exhibits clearly, in Figure 1, that the fog exhaled from Manikin 1 can easily arrive at the face of Manikin 4, who is directly across the table, and Manikin 3, who is diagonally across the table, when no air purifiers are used. When air purifiers were used, most of the fog exhaled from Manikin 1 was attracted towards the air purifiers, effectively containing the turbulent spread of the contaminant.

Similar dispersion processes of the particles were observed in the CFD simulation with and without air purifiers, as shown in Figure 4. Both the mock-up test and the CFD modeling indicate that the particle concentration at Manikin 4 (across the table from the source person) is less than those near Manikins 2 and 3, confirming the effectiveness of the air purifier placed between Manikins 1 and 4. Most of the particles can be directly removed by the purifier or blown away by the upward air from the top exit of the purifier.
The comparison between simulation and experiment is quantitatively analyzed by calculating the reduction rate of the particle numbers (M_reduction) detected at the three sensor locations (Figure 1). M_reduction can be defined as follows:

$$M_{reduction,\,i} = \frac{M_{without\ air\ purifier,\,i} - M_{with\ air\ purifier,\,i}}{M_{without\ air\ purifier,\,i}} \times 100\% \quad (7)$$

where $M_{without\ air\ purifier,\,i}$ is the number of particles sensed at the ith point (i = 1, 2, 3) for the case without air purifiers and $M_{with\ air\ purifier,\,i}$ is the number of particles sensed at the ith point (i = 1, 2, 3) for the case with air purifiers. Figure 5 compares the results obtained from the simulation and the test.

Both the qualitative and quantitative comparisons between the mock-up experiment and the CFD simulation verify that the simulation model can project reasonable results to track the virus-carrying particles' trajectories. Therefore, a sophisticated CFD model made of many similar tables and occupants should be able to predict the transport of the virus-carrying particles in a complex large built environment.
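Equation (7) is a simple per-sensor percentage reduction. The snippet below applies it to hypothetical particle counts (the actual counts behind Figure 5 are not tabulated in the text), which is how the simulated and measured reduction rates can be compared sensor by sensor.

```python
def reduction_rate(count_without: float, count_with: float) -> float:
    """Percentage reduction in particles detected at a sensor, per Eq. (7)."""
    return (count_without - count_with) / count_without * 100.0

# Hypothetical particle counts at the three sensors (illustration only).
counts_without = {"sensor 1": 820, "sensor 2": 540, "sensor 3": 610}
counts_with = {"sensor 1": 160, "sensor 2": 210, "sensor 3": 190}

for sensor in counts_without:
    m_red = reduction_rate(counts_without[sensor], counts_with[sensor])
    print(f"{sensor}: M_reduction = {m_red:.1f}%")
```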
CFD Simulation and Analysis of Large Public Spaces
This study used the validated CFD model to simulate a restaurant and a ballroom as two representative public places in order to evaluate the effect of using portable air purifiers to mitigate COVID-19 in large spaces. Steady-state air circulation conditions were simulated to determine the best location, air exchange rate requirement, and quantity of air purifiers. The restaurant and the ballroom are conditioned with a central HVAC system, which supplies clean air from ceiling inlets and exhausts room air from ceiling/upper-level outlets. To assess the actual performance of the commercial purifier in mitigating COVID-19, virus-carrying particles released from one infected occupant in the restaurant and in the ballroom were simulated and tracked, using the same settings as in the validation model. A grid sensitivity analysis was also performed to ensure the accuracy of the numerical results. The suitable total grid cells were determined to be about 650,000 and 4,470,000 for the restaurant and the ballroom, respectively.

The CFD model (Figure 6) of a restaurant (total space area 370 m²) was built in ICEMCFD (Integrated Computer Engineering and Manufacturing code for CFD), and the specifications of the restaurant are summarized in Table 3. To evaluate the effectiveness of air purifiers in reducing the potential infection risk to healthy occupants, floor-standing air purifiers (FAP) and a table air purifier (TAP) were arranged in the restaurant. As listed in Table 4, two kinds of commercial purifiers [21,22] were employed to purify the indoor air. Indoor air flows into each purifier from the side and flows out from the top. Eight FAPs are placed at both ends of the restaurant (Figure 6). The restaurant was operated at full capacity, with 177 occupants, of which one was the virus carrier, as shown in Figure 6c. One TAP was placed at the center of the dining table occupied by the infected occupant. Particle release conditions were the same as those in the validation case (Table 2). Particle trajectories were investigated under three ventilation conditions: (1) central air-conditioning system (CA); (2) central air-conditioning system with floor-standing air purifiers (CAF); and (3) central air-conditioning system with both floor-standing air purifiers and a table air purifier (CAFT), to analyze the fate of the particles exhaled from the infected occupant.
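A back-of-the-envelope way to relate such a purifier deployment to the air exchange rate requirement mentioned above is to convert the combined clean air delivery rate into an equivalent air-change rate for the space. The sketch below is illustrative only: the FAP CADR and the 3 m ceiling height are placeholders, since Table 4 and the room height are not reproduced in the text; only the 370 m² floor area, the unit count, and the 166 m³/h CADR of the validation-test purifier come from the study.

```python
floor_area_m2 = 370.0      # restaurant floor area given in the text
ceiling_height_m = 3.0     # assumed ceiling height (not stated in the text)
room_volume_m3 = floor_area_m2 * ceiling_height_m

# Placeholder clean air delivery rates (CADR), m^3/h; actual values are in Table 4.
cadr_fap = 400.0           # hypothetical floor-standing unit
cadr_tap = 166.0           # table unit CADR quoted for the validation experiment
n_fap, n_tap = 8, 1

total_cadr = n_fap * cadr_fap + n_tap * cadr_tap
equivalent_ach = total_cadr / room_volume_m3   # equivalent air changes per hour
print(f"Total CADR: {total_cadr:.0f} m^3/h -> ~{equivalent_ach:.1f} eACH")
```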
Flow Field and Particle Trajectory Analysis
The movement of the airflow determines the particle trajectory. Under the three ventilation conditions (i.e., CA, CAF, and CAFT), the velocity vector of the airflow near the infected occupant (Figure 6c) at the height of the breathing zone (Z = 1.1 m) is shown in Figure 7. Installing portable air purifiers changes the local flow directions and mitigates the cross-table airflow that may cause cross-infection. The table unit, with proper capacity, displays explicit and favorable air inflow towards the purifier. The dispersion processes of the particles under the three flow fields are shown in Figure 8. The flow paths of the viral particles exhaled from the infected occupant are different for the three ventilation conditions. The particles under the CA system can easily arrive at the breathing zone of the opposite occupant. For the CAF system, the particles move up first and are then sucked in by the floor-standing air purifier placed at the back, which purifies the air with the particles. The CAFT system presents a better purification performance, because most of the particles are directly sucked into the table purifier as well as pushed upwards by the strong upward outflow from the top air exit of the unit.

Fate Analysis of the Virus-Carrying Particles Exhaled from the Infected Occupant

Figure 9 compares quantitatively the percentage of particles that are, respectively, discharged from the central exhausts, purified by the purifiers, and deposited on different surfaces under the different ventilation conditions. Particles released from the mouth of the infector tend to spread in the horizontal direction under the central air-conditioning system (CA), especially at the height of the breathing zone. A small fraction of the particles is discharged through the exhaust outlets under CA. Most of them spread out in the indoor space and are then deposited on the diners (33%), the table (25%), the ground (25%), and the walls (11%). The floor-standing purifiers (FAP) can purify 28% of the particles while increasing the particle deposition on the walls (48%) due to the redirected air inflow and outflow around the purifiers. The particle concentration at the height of the breathing zone under the CAFT system is smaller than that under the CA system. With the CAFT system, most of the viral particles are purified by the table air purifier (TAP): about 76% of the particles can be purified by the TAP, a remarkable performance in purifying particles.
To assess the quantitative performance of the air purifiers in reducing infection risks, the reduction rate of the number of particles deposited on the surrounding occupants (N_reduction) is calculated by comparing the cases with and without air purifiers. N_reduction is defined as in Equation (8):

$$N_{reduction} = \frac{N_{without\ air\ purifiers} - N_{with\ air\ purifiers}}{N_{without\ air\ purifiers}} \times 100\% \quad (8)$$

where $N_{without\ air\ purifiers}$ is the number of particles deposited on the occupants for the case without air purifiers and $N_{with\ air\ purifiers}$ is the number of particles deposited on the occupants for the case with air purifiers. Using Equation (8) and the data in Figure 9, N_reduction is obtained for the CAF and CAFT cases, respectively, compared to the CA case, as seen in Table 5. N_reduction of CAFT is greater than that of CAF, which verifies that the table air purifier (TAP) has a superior performance in mitigating the infection risk. The optimal location of the air purifier will be in proximity to where people are seated, congregated, or in a queue. The study of Mousavi et al. [23] indicated that the best location of a single portable air purifier unit is inside the isolation room and near the patient's bed. The result of this study is, thus, consistent with the suggestion in the literature [23]. Such placement thus provides an effective means to help mitigate airborne transmission of pathogens in the restaurant. The TAP was further investigated to evaluate its application for one larger event space (i.e., a ballroom).
The CFD model of the ballroom (total space area 3200 m 2 ) built in ICEMCFD is shown in Figure 10 and the specifications of the ballroom are summarized in Table 6. The ballroom is full of diners (1320 total), with one of them infected ( Figure 10). One TAP is placed at the center of the dining table occupied by the infected person. Purifier specifications and particle release conditions, respectively, are shown in Tables 2 and 4. Particle trajectories were investigated, respectively, under two ventilation conditions: (1) central air-conditioning system (CA-Ballroom) and (2) central air-conditioning system with table air purifier (CAT), to analyze the fate of the viral particles exhaled from the infected occupant in the ballroom.
Flow Field and Particle Trajectory Analysis
The velocity vector of airflow at the height of the breathing zone (Z = 1.1 m) at the table with the infector is shown in Figure 11, respectively, for the cases under the two ventilation conditions (i.e., CA-Ballroom and CAT). The table air purifier changes the local flow directions, displaying explicit and favorable air inflow towards the purifier. The dispersion processes of virus particles under the two flow fields are shown in Figure 12. The spread range of the viral particles for the CA-Ballroom system is much larger than that for the CAT system, even reaching the neighboring tables. These particles, however, are confined around the purifier with the CAT system due to the negative pressure effect, which avoids the viruses' large-scale spread. Although the particles are not fully attracted by the table purifier, the dispersion of the virus at the horizontal plane has been reduced significantly by using the TAP.

Figure 13 compares the percentage of the particles that are, respectively, exhausted from the central air outlets, purified by the purifier, and deposited on different surfaces under the two ventilation conditions. Although only 12% of particles were purified by the TAP, which is attributed to the longer distance between the unit and the infector (the larger diameter of the dining table) compared to the restaurant case, the TAP helps reduce the deposition of particles on the occupants (potential cross-infection risk) from 29% to 11%. More particles were pushed up towards the ceiling (49%) due to the upward exit flow of the TAP, rather than distributing horizontally, which is a potential cause of cross-infection.
We obtained 60.3% of N_reduction for the CAT case compared to CA-Ballroom using Equation (8) and the data in Figure 13. While this number is lower than that in the restaurant, it is still promising, especially considering the small size and capacity of the purifier. The arrangement of a table air purifier can be as effective as more elaborate solutions installed in the HVAC system at a lower cost. Flexible solutions, such as portable air purifiers, can be redeployed throughout the property as needed.
Field Experiment and Analysis of Large Public Spaces
CFD simulation indicates that the installation of portable air purifiers can be an effective measure to reduce the infection risks of aerosol transmission in large public spaces. To verify the actual performance of the purifiers in the real operation conditions, this study further conducted the field experiment in two large public spaces (restaurant and ballroom). A commercial purifier (Blueair-411PACF105372) of 197.7 m 3 /h was installed and tested in the experiment. Stage fog (water-glycerin-mixture) was used as a tracer gas released from the mouth of the infector. The distribution of the fog was recorded as shown in Figure 14, under the central air-conditioning system with and without a purifier. Most of the fog exhaled from the infector was sucked into the table air purifier, and the distributed area of the fog with the table purifier is much more confined than that without the purifier, similar to what was found in CFD simulation.
Similar performance was observed for the test in the ballroom (Figure 15). Although most of the released fog was not directly sucked in by the table purifier, the spread of the fog to the surrounding area was slowed down due to the table purifier. It appears that the negative pressure area around the table purifier cannot completely cover the infected occupant when the unit is placed at the center of a large table. A more powerful table purifier will be required to provide a better performance.
Conclusions
COVID-19 is a test like no other. Never before have the lives of so many people around the world been affected at this scale or speed. In this study, the air purifier, a facility recommended by organizations such as the US CDC and ASHRAE, was investigated for its actual performance in reducing the risk of virus infection. The installation of air purifiers in two typical public places (i.e., a restaurant and a ballroom) was studied and compared to the use of the existing central air-conditioning systems. Validated CFD models were created and applied to explore the qualitative and quantitative performance of the portable air purifiers, which was further verified by a qualitative field experiment in the actual restaurant and ballroom.
The research outcomes reveal that the central air-conditioning system only exhausts a small fraction of the particles released from the mouth of the infector; most of the particles are spread out indoors and ultimately deposited on occupants, tables, equipment, ground, and walls. Both floor-standing and table air purifiers, with proper locations and capacities, can effectively attract room air to flow towards the purifiers and, therefore, clean the "contaminated" air. Similar behavior and performance were observed in the actual restaurant and ballroom, while the results indicate that each space is unique in geometry, layout, and system and, thus, needs to be addressed individually. Flexible solutions allow the cleaning devices to be redeployed throughout the property as needed. Portable air purifiers with HEPA filtration provide an effective means to help mitigate the airborne transmission of pathogens, and can be as effective as more elaborate solutions installed in the HVAC systems at a lower cost. | 2021-08-05T13:12:27.357Z | 2021-07-29T00:00:00.000 | {
"year": 2021,
"sha1": "9ff315a19dded2ab05738a69ccdf72a23709cd51",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-5309/11/8/329/pdf?version=1627536827",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d1457d641e84d45d77380f813294ef9e15e9f0c8",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
253447784 | pes2o/s2orc | v3-fos-license | Co-ultramicronized palmitoylethanolamide/luteolin normalizes GABAB-ergic activity and cortical plasticity in long COVID-19 syndrome
Objective Transcranial magnetic stimulation (TMS) studies showed that patients with cognitive dysfunction and fatigue after COVID-19 exhibit impaired cortical GABAB-ergic activity, as revealed by reduced long-interval intracortical inhibition (LICI). The aim of this study was to test the effects of co-ultramicronized palmitoylethanolamide/luteolin (PEA-LUT), an endocannabinoid-like mediator able to enhance GABA-ergic transmission and to reduce neuroinflammation, on LICI. Methods Thirty-nine patients (26 females, mean age 49.9 ± 11.4 years, mean time from infection 296.7 ± 112.3 days) suffering from persistent cognitive difficulties and fatigue after mild COVID-19 were randomly assigned to receive either PEA-LUT 700 mg + 70 mg or PLACEBO, administered orally bid for eight weeks. The day before (PRE) and at the end of the treatment (POST), they underwent TMS protocols to assess LICI. We further evaluated short-latency afferent inhibition (SAI) and long-term potentiation (LTP)-like cortical plasticity. Results Patients treated with PEA-LUT but not with PLACEBO showed a significant increase of LICI and LTP-like cortical plasticity. SAI remained unaffected. Conclusions Eight weeks of treatment with PEA-LUT restores GABAB activity and cortical plasticity in long Covid patients. Significance This study confirms altered physiology of the motor cortex in long COVID-19 syndrome and indicates PEA-LUT as a candidate for the treatment of this post-viral condition.
Symptoms may persist following the acute illness or may first appear after recovery from the initial disease and may fluctuate or relapse over time (Soriano et al., 2022).
Persons struggling with the effects of the so-called "post-acute COVID-19 syndrome" or "long Covid" may have noticeable troubles with attention, memory, and executive function, with a high impact on quality of life (Nalbandian et al., 2021). Cognitive symptoms are likely due to the dysfunction of frontostriatal and/or frontoparietal brain networks. An MRI study showed a reduction in grey matter thickness in the orbitofrontal cortex and parahippocampal gyrus in a large sample of patients presenting cognitive decline after mild COVID-19 (Douaud et al., 2022). 18FDG-PET/CT studies revealed hypometabolism of frontal regions, including the olfactory gyrus (Guedj et al., 2021; Sollini et al., 2021). The neurophysiological alterations at the cortical level are still partially obscure.
Using paired-pulse transcranial magnetic stimulation (TMS) protocols, we have recently demonstrated a remarkable impairment of intracortical GABA B -ergic activity, as shown by altered long-interval intracortical inhibition (LICI), in patients who developed fatigue and dysexecutive syndrome after severe COVID-19 (Ortelli et al., 2022; Versace et al., 2021). Subsequently, we were able to demonstrate similar alterations in patients with persistent fatigue and cognitive complaints after mild symptomatic COVID-19 (Ortelli et al., 2022).
Hence, we conducted a double-blind, placebo-controlled, randomized clinical trial (RCT) to investigate the effects of an 8-week oral therapy cycle with palmitoylethanolamide (PEA) co-ultramicronized with the flavonoid luteolin (PEA-LUT) in patients with cognitive complaints and fatigue after mild COVID-19. We hypothesized that PEA-LUT could restore intracortical GABA B -ergic neurotransmission measured by LICI.
Participants
The study was conducted at the 'long Covid' outpatient clinic of the Department of Neurorehabilitation (Hospital of Vipiteno, SABES-ASDAA) between September 2021 and March 2022.
Inclusion criteria were (a) previous diagnosis of SARS-CoV-2 infection confirmed through detection of virus RNA by polymerase chain reaction (PCR) testing of a nasopharyngeal swab; (b) subsequent recovery from infection as defined by two consecutive negative PCR tests separated by at least one day; (c) mild form of COVID-19 (symptoms might include fever, cough, sore throat, malaise, myalgia, anorexia, nausea, diarrhea, anosmia and ageusia) without necessitating hospital admission; (d) complaints of sense of fatigue and/or cognitive difficulties persisting after SARS-CoV-2 infection documented through the following self-administered questionnaires: Fatigue Severity Scale (FSS) and Perceived Cognitive Difficulties Scale (PCDS). FSS is a self-administered 9-item questionnaire that investigates the severity of fatigue in different situations during the previous week and ranks perceived severity on a 7-point Likert scale (1 = "strongly disagree"; 7 = "strongly agree"); the FSS sum score ranges from 7 to 63, and the cut-off for pathology and for inclusion in the study was FSS > 36 (Krupp et al., 1989). The PCDS assesses perceived cognitive difficulties (referring to one or more of the following: forgetfulness, cloudiness, difficulty in focusing, thinking and communicating) on a 4-point Likert scale: 0 = "I have no cognitive difficulties"; 1 = "I have slightly more cognitive difficulties than before COVID"; 2 = "I have moderate cognitive difficulties most of the time"; 3 = "I have persistent cognitive difficulties" (Ortelli et al., 2022). A PCDS score ≥ 1 was considered for inclusion in the study.
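As a minimal illustration of how the two screening instruments translate into the inclusion rule described above, the sketch below (not part of the study protocol; the item responses are hypothetical, and the "and/or" rule is interpreted as either criterion sufficing) sums the FSS items and applies the stated cut-offs.

```python
def fss_score(items: list[int]) -> int:
    """Fatigue Severity Scale: 9 items, each rated 1-7; sum ranges 7-63."""
    assert len(items) == 9 and all(1 <= i <= 7 for i in items)
    return sum(items)

def eligible(fss_items: list[int], pcds: int) -> bool:
    """Criterion (d): pathological fatigue (FSS > 36) and/or perceived
    cognitive difficulties (PCDS >= 1); either one is taken as sufficient."""
    return fss_score(fss_items) > 36 or pcds >= 1

# Hypothetical respondent: moderate fatigue ratings and PCDS = 2
print(eligible([5, 6, 4, 5, 6, 5, 4, 6, 5], pcds=2))  # True
```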
No restrictions were considered regarding the interval between disease onset and study participation.
Exclusion criteria were (a) prior or concurrent diagnosis of neurological, psychiatric, endocrine, metabolic or cardiopulmonary conditions; (b) clinical and/or radiological evidence of COVID-19 related pneumonia during the active phase of the disease; (c) anaemia; (d) pharmacological treatment with corticosteroids, antihistamines, antihypertensives, diuretics, antidepressants, anxiolytic or hypnotic drugs during the time of study.
Thirty-nine patients (mean age 49.9 ± 11.4 years, 26 females, mean education 13.4 ± 2.9 years, mean time from onset 296.7 ± 112.3 days) fulfilling the inclusion criteria were enrolled. Thirty-four patients were studied (five patients withdrew from the study after the pre-intervention assessment). Their demographic and clinical characteristics are shown in Table 1. All patients were right-handed.
Study design
This randomized controlled trial (RCT) investigated the neurophysiological and cognitive impact of PEA-LUT administration in patients complaining of cognitive difficulties and fatigue after mild COVID-19 (henceforth ''long Covid patients").
Patients were assigned to one of two study groups (n = 17 each), receiving either PEA-LUT (Glialia®, 700 mg + 70 mg, sublingual microgranule formulation, Epitech Group SpA, Saccolongo, Italy) or PLACEBO (sublingual inert microgranules), administered orally bid for eight weeks. Glialia® is licensed in Italy as an oral food product for special medical purposes, with anti-inflammatory and neuroprotective properties.
Group allocation was centralized and occurred in a pseudorandomized manner taking into account a balance of age, gender, education, and duration of illness (Table 1).
All participants underwent neurophysiological and neuropsychological assessment the day before beginning treatment (pretreatment evaluation, PRE) and at the end of eight weeks PEA-LUT or PLACEBO administration (post-treatment evaluation, POST).
The study was approved by the local ethics committee (Comitato Etico dell'Azienda Sanitaria dell'Alto Adige, n. 99-2021) and was in accordance with the code of ethics of the World Medical Association. Written informed consent was obtained from all participants for the use of their clinical data for scientific purposes.
Primary outcome measure for PEA-LUT effects was the intracortical GABA B -ergic neurotransmission indexed with LICI. Secondary neurophysiological outcome measures were short-latency afferent inhibition (SAI) and long-term potentiation (LTP)-like cortical plasticity. As further exploratory outcomes, we also searched for modification in cognitive performance.
Neurophysiological and cognitive assessment
We recorded motor evoked potentials (MEPs) from the first dorsal interosseus (FDI) muscle of the dominant side. TMS of the dominant primary motor cortex (M1) was performed with a high-power Magstim 200 (Magstim Co., Whitland, UK), through a 7 cm figure-of-eight coil, held over the optimum scalp position to elicit maximal MEPs for a given intensity in the contralateral FDI, with a posterior-to-anterior current flow (Rossini et al., 1994; Rossini et al., 2015). Stimulation intensities were expressed as a percentage of maximum stimulator output (% MSO). Surface electromyography signals were band-pass filtered (3-3000 Hz) and amplified with a Digitimer D440-4 amplifier (Digitimer Ltd., Welwyn Garden City, UK). Single sweeps were digitized (sampling rate 10 kHz) and recorded on a computer for post-hoc analysis using a CED 1401 A/D converter and Signal 6 software (Cambridge Electronic Design, Cambridge, UK). Resting motor threshold (RMT) was defined as previously described (Rossini et al., 2015).
Paired-pulse TMS was used to investigate LICI at interstimulus interval (ISI) 100 ms with a stimulation intensity of 130% RMT for both conditioning and test stimulus (Valls-Solé et al., 1992). The chosen ISI was the most effective in highlighting altered LICI in previous studies on long Covid patients (Ortelli et al., 2022;Versace et al., 2021). LICI is considered to be a phenomenon dependent on slow inhibitory postsynaptic potentials mediated through GABA B -receptors (Ziemann et al., 2015).
SAI was used to evaluate M1 inhibition induced by sensory afferents. SAI is a marker of inhibitory sensorimotor integration that depends mainly on the excitatory effect of cholinergic thalamocortical projections onto the inhibitory GABAergic cortical network (Tokimura et al., 2000). The conditioning stimulus was delivered to the ulnar nerve at the wrist (at an intensity just above the motor threshold for evoking a visible twitch in FDI) and preceded the TMS by an ISI corresponding to the latency of the N20 component of the ulnar nerve somatosensory evoked potential (Di Lazzaro et al., 2007). The intensity of the TMS test pulse over M1 was adjusted to elicit stable MEPs of approximately 1 mV peak-to-peak amplitude in the relaxed FDI.
For both LICI and SAI, twenty stimuli were delivered both to elicit test and conditioned MEPs in a pseudo-randomized sequence.
For all protocols, the mean amplitude of the conditioned responses was expressed as a percentage of the corresponding mean unconditioned response.
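A minimal sketch of this normalization is shown below (assumed for illustration, not the authors' analysis code; the amplitude values are hypothetical).

```python
import numpy as np

def conditioned_percent(test_meps, conditioned_meps):
    """Mean conditioned MEP expressed as % of the mean unconditioned (test) MEP;
    lower values indicate stronger inhibition (e.g., stronger LICI or SAI)."""
    return np.mean(conditioned_meps) / np.mean(test_meps) * 100.0

# Hypothetical peak-to-peak amplitudes (mV) from test and conditioned trials
test = [0.90, 1.10, 0.80, 1.00, 0.95]
cond = [0.30, 0.40, 0.25, 0.35, 0.33]
print(f"{conditioned_percent(test, cond):.0f}% of the unconditioned MEP")
```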
Finally, we used a TMS protocol assessing LTP-like cortical plasticity, consisting in potentiation of MEPs after a session of excitatory repetitive TMS (rTMS) given as intermittent theta burst stimulation (iTBS). This iTBS protocol consisted of 3 TMS pulses at 50 Hz, repeated every 200 ms (5 Hz) for 2 s. Such a 2-s train of iTBS was repeated 20 times, every 10 s, for a total of 190 s (600 pulses) (Huang et al., 2005). The intensity of iTBS was set at 70% of RMT.
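The pulse arithmetic of this iTBS protocol can be checked with a short sketch (illustrative only, not from the paper; the quoted 190 s corresponds to counting train onsets).

```python
# Quick arithmetic check of the iTBS protocol described above:
# bursts of 3 pulses at 50 Hz, bursts repeated at 5 Hz for 2 s,
# and the 2-s train repeated 20 times every 10 s.
pulses_per_burst = 3
bursts_per_train = 2 * 5            # 2 s at 5 Hz -> 10 bursts
trains = 20

total_pulses = pulses_per_burst * bursts_per_train * trains
last_train_onset_s = (trains - 1) * 10   # trains start every 10 s

print(total_pulses)        # 600 pulses, as stated in the text
print(last_train_onset_s)  # 190 s, matching the quoted protocol length
```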
To measure LTP, we considered the mean peak-to-peak amplitude of 20 MEPs in the relaxed FDI collected with single-pulse TMS at 120% RMT stimulation intensity, with an ISI of 5 s, before (T0) and 1, 10, and 20 min after iTBS (LTP T1, T10, T20).
Global cognition was assessed with the Italian version of Montreal Cognitive Assessment (MoCA) (Santangelo et al., 2015); MoCA total score ranges from 0 (worst performance) to 30 (best performance), scores below 15.5 points are indicative of cognitive decline.
Executive functions were evaluated with the Italian version of the Frontal Assessment Battery (FAB) (Appollonio et al., 2005). FAB consists of six subtests exploring specific cognitive or behavioral domains related to the frontal lobes; each subtest is scored from 0 (worst performance) to 3 (best performance), for a maximum score of 18; FAB scores lower than 13.48 are considered abnormal.
Statistics
Central tendency and dispersion of continuous variables are reported as mean and standard deviation (SD) for demographical and neuropsychological data and as mean and standard error (SE) for neurophysiological outcomes. Descriptive statistics for categorical variables are reported as number and percentage. Between-group comparisons were carried out by the Mann-Whitney U-test for continuous variables and by the Chi-square test for dichotomous variables.
The effect of treatment over time for percent change in LICI, percent change in SAI, change in MEP amplitude in the LTP-like cortical plasticity test, and neuropsychological scores (MoCA, FAB) was investigated by a two-factor analysis of variance (ANOVA), the first factor being treatment (between-group factor, two groups: PEA-LUT and PLACEBO) and the second factor being time (within-group factor, two measurements: PRE and POST), with repeated measurements in the time factor (repeated measures ANOVA). Mauchly's test was used to assess the sphericity assumption, and the Greenhouse-Geisser correction was applied if appropriate. A significant result from repeated measures ANOVA was followed up by post-hoc analysis for pairwise comparisons (Dunn-Sidak).
All tests were two-tailed. A p value < 0.05 was considered statistically significant. All statistical analyses were carried out using the
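For readers who want to reproduce this kind of two-factor mixed design, the sketch below is one possible implementation (an assumption for illustration, not the authors' code): it uses the third-party pingouin package and a hypothetical toy data frame to run a treatment × time mixed ANOVA on a LICI-like outcome. Post-hoc pairwise comparisons would then be run separately.

```python
import pandas as pd
import pingouin as pg

# Toy data: LICI expressed as conditioned MEP in % of the test MEP
# (lower = stronger inhibition). All values are invented for illustration.
df = pd.DataFrame({
    "subject":   [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "treatment": ["PEA-LUT"] * 6 + ["PLACEBO"] * 6,
    "time":      ["PRE", "POST"] * 6,
    "lici":      [85, 55, 90, 50, 88, 60,   # PEA-LUT: inhibition improves
                  86, 84, 92, 90, 89, 91],  # PLACEBO: essentially unchanged
})

aov = pg.mixed_anova(data=df, dv="lici", within="time",
                     between="treatment", subject="subject")
print(aov[["Source", "F", "p-unc"]])  # look for the treatment x time interaction
```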
Results
A total of 34 post COVID-19 patients completed the evaluation sessions. Demographic and clinical data are depicted in Table 1. An equal proportion of the female population (64.7%) was present in both groups. Patients did not differ significantly with respect to age, education, or time since onset of disease, FSS and PCDS. PEA-LUT and PLACEBO treatment was well-tolerated by all patients, and no side effects were reported.
Neurophysiological and cognitive assessment
All neurophysiological findings with repeated measures ANOVA results are reported in Table 2 and illustrated in Figs. 1 and 2.
In LICI protocol performed at PRE, mean test MEP amplitude for the PEA-LUT group and for the PLACEBO group were 0.93 ± 0.42 mV and 0.86 ± 0.43 mV, respectively.
In LICI protocol performed at POST, mean test MEP amplitude for the PEA-LUT group and for the PLACEBO group were 0.90 ± 0.43 mV and 0.82 ± 0.43 mV, respectively.
A significant interaction (treatment × time) was observed in the percent change of conditioned MEP amplitude in the LICI test. This finding indicates a different trend of this variable in PEA-LUT patients as compared to patients in the PLACEBO group. Indeed, post-hoc analysis revealed a significant increase in the amount of inhibition of the conditioned MEP from PRE to POST in the PEA-LUT group (P = 0.009) but not in the PLACEBO group (P = 0.72).
In SAI protocol performed at PRE, mean test MEP amplitude for the PEA-LUT group and for the PLACEBO group were 0.88 ± 0.36 mV and 0.84 ± 0.35 mV, respectively.
In SAI protocol performed at POST, mean test MEP amplitude for the PEA-LUT group and for the PLACEBO group were 1.01 ± 0.32 mV and 0.81 ± 0.40 mV, respectively.
No significant treatment or time effect and no interaction were found, indicating no differences in this variable in PEA-LUT and PLACEBO patients.
Repeated measures ANOVA for MEP amplitude modulation in the LTP-like cortical plasticity test revealed a significant interaction (treatment × time) 1 and 10 minutes following iTBS. Post-hoc testing revealed a significant increase of the MEP amplitude from PRE to POST in the PEA-LUT group but not in the PLACEBO group (P = 0.0009 and P = 0.01, for 1 and 10 minutes, respectively, in the PEA-LUT group; P = 0.38 and P = 0.56 for 1 and 10 minutes, respectively, in the PLACEBO group). No significant treatment or time effect and no interaction was observed 20 minutes following iTBS. RMT was 52.5 (6.1) and 52.5 (10.6) %MSO in the PEA-LUT and PLACEBO groups, respectively, at time PRE, which was not significantly different from POST: 52.4 (8.1) and 51.4 (8.5) %MSO in the PEA-LUT and PLACEBO groups, respectively.
Results of the cognitive tests are shown in Table 2. No significant interaction (treatment × time) was found in the MoCA and FAB cognitive screening tests.
Discussion
In the present RCT, oral PEA-LUT 700 + 70 mg administered bid for eight weeks increased the GABA B -ergic activity of M1 measured with the LICI protocol in patients complaining of long-term fatigue and cognitive difficulties after mild COVID-19 (long Covid patients). In parallel, we also observed an improvement of LTP-like cortical plasticity.
The pathogenesis of long Covid has not yet been elucidated; potential contributors include persistent consequences of SARS-CoV-2 interactions with the host microbiome/virome, clotting/coagulation issues, dysfunctional brainstem/vagus nerve signaling, neuro-inflammation, ongoing activity of primed immune cells, autoimmunity, dysregulation of the renin-angiotensin-aldosterone system, and endothelial cell damage (Nalbandian et al., 2021; Proal and VanElzakker, 2021). Animal and organoid model studies have shown that SARS-CoV-2 is able to reach and infect cells of the central nervous system (CNS) and to produce neuro-inflammation (Song et al., 2020; Song et al., 2021).
PEA is a saturated N-acylethanolamide belonging to the family of endocannabinoids, naturally produced in the body, and largely found in several food sources which can exert anti-inflammatory and neuroprotective effects (Petrosino and Di Marzo, 2017).
In recent years, several experimental pre-clinical and clinical studies have indicated that ultramicronized PEA (a formulation that maximizes PEA bioavailability and penetrance through the blood-brain barrier) is an effective therapeutic agent in different pathologies characterized by neurodegeneration and neuroinflammation such as Alzheimer's disease, frontotemporal dementia, Parkinson's disease, spinal cord injury, and traumatic brain injury (Assogna et al., 2020;Assogna et al., 2022;Beggiato et al., 2019;Cordaro et al., 2020, Petrosino andDi Marzo, 2017). It has been proposed that PEA-LUT prevents nervous tissue damage and counteracts the hypofunction of GABAergic interneurons by reducing the activation of mast cells, astrocytes, and microglia, and by limiting the release of pro-inflammatory mediators (Cordaro et al., 2020). Moreover, PEA is able to enhance GABA-ergic transmission down-regulating the synthesis of the endocannabinoid 2-AG, which acts retrogradely onto presynaptic CB1 cannabinoid receptors and suppresses GABA release. PEA can also control GABA transmission enhancing indirectly the levels of other endocannabinoids, through the so-called entourage effect (Kano, 2014;Musella et al., 2017).
Based on this evidence, we decided to investigate the impact of the oral administration of PEA-LUT on the cortical GABA B -ergic activity of long Covid patients with fatigue and cognitive difficulties. In fact, we have already demonstrated an impairment of GABA B -ergic neurotransmission within M1, indexed by LICI, and to a lesser extent, of central cholinergic circuits, assessed by SAI, after both severe (Versace et al., 2021) and mild COVID-19 (Ortelli et al., 2022).
LICI is a well-known marker of GABA B mediated intracortical inhibition within M1 (Ziemann et al., 2015). As demonstrated in studies of LICI, GABA B mediated inhibition is altered in various neuropsychiatric conditions such as psychotic mood disorders, epilepsy, Parkinson's disease, traumatic brain injury, and dementia (Fatih et al., 2021). GABAergic interneurons, especially those expressing the Ca 2+ -binding protein parvalbumin, inhibit M1 pyramidal cells through a negative feedback system (Sohal et al., 2009) and play a fundamental role in almost all neuronal coding and processing in the CNS.
Different cognitive abilities, mainly executive functions, are sensitive to cerebral GABA concentrations in the frontal cortex (Porges et al., 2017;Sumner et al., 2010). In particular, reduced LICI is now a recognized biomarker of fronto-temporal dementia (FTD) (Benussi et al., 2020) where it correlates with executive function deficit.
Interestingly, a 4-week treatment with PEA-LUT induced an improvement in frontal cognitive functions and a restoration of LICI in FTD patients (Assogna et al., 2020). Long Covid shares with FTD the impairment of executive functions (although to a different extent) and a comparable reduction of LICI (Benussi et al., 2020;Ortelli et al., 2022).
Moreover, degeneration of intracortical inhibitory GABAergic circuits within M1 has been reported in various affections of the central nervous system inducing fatigue (Liepert et al., 2005;Ridding et al., 1995;Vucic et al., 2011). In light of these studies, a downregulation of GABA activity and the consequent enhancement of cortical excitability could be seen as a compensatory mechanism for overcoming premature motor fatigue. On the other hand, one cannot exclude the possibility that cortical disinhibition is the cause of fatigue itself, as a system with upregulated excitability may have a lower range to further increase excitability.
SAI reflects M1 inhibition induced by sensory afferents and depends on the excitatory effect of cholinergic thalamocortical projections on inhibitory GABAergic cortical networks (Tokimura et al., 2000). Consolidated evidence points to a disrupted SAI mechanism in both Alzheimer's and Lewy-body disease patients, where SAI correlates with memory function (Di Lazzaro et al., 2002; Di Lorenzo et al., 2013; Nardone et al., 2006). Important indications on the relevance of LICI and SAI paradigms for investigating neurophysiological processes in the human cortex, as well as their relationship to pathology, come from TMS-electroencephalography (EEG) measurements. GABA B -mediated inhibition of cortical activity in M1 and the dorsolateral prefrontal cortex (DLPFC) can be obtained with LICI protocols (Daskalakis et al., 2008; Farzan et al., 2010; Premoli et al., 2014). Prefrontal LICI deficits are specific to patients with schizophrenia and other neuropsychiatric disorders (Tremblay et al., 2019). The attenuation of cortical excitability induced by SAI protocols identifies cholinergic changes in M1 and DLPFC and correlates with executive functions (Bikmullina et al., 2009; Noda et al., 2017).
We tested LICI at the ISI of 100 ms, which usually yields maximum MEP inhibition (Valls-Solé et al., 1992) and which was effective in highlighting altered LICI in long Covid patients (Ortelli et al., 2022; Versace et al., 2021). We found markedly reduced LICI in a similar range to our previous studies, in which we compared long Covid patients to a control population matched for age, sex and education (Ortelli et al., 2022; Versace et al., 2021). LICI increased significantly (i.e., the percentage ratio of mean conditioned to mean test MEP amplitude decreased) after the intervention in the PEA-LUT group, but not in the PLACEBO group.
We assessed SAI at the most effective ISI between cortical and peripheral stimulation, i.e., coinciding with the cortical somatosensory evoked potential component N20 following ulnar nerve stimulation at the wrist (Di Lazzaro et al., 2007), and found reduced SAI in line with our previous observations (Ortelli et al., 2022;Versace et al., 2021). Unlike LICI, SAI was not significantly improved by PEA-LUT therapy in the present study.
Furthermore, we investigated LTP-like cortical plasticity in M1 with the technique of rTMS given as iTBS, i.e., a stimulation with high-frequency bursts (5 Hz) at theta frequencies (50 Hz) which is able to induce homotopic plasticity (Huang et al., 2005). In rTMS studies indeed, cortical LTP-like plasticity is evidenced by the transient increase in MEP amplitude outlasting repetitive brain stimulation by seconds or minutes and reflecting activity-dependent changes in the effectiveness of synaptic transmission (Ziemann et al., 2008).
In the studied cohort of long Covid patients, iTBS failed to induce the expected potentiation of MEP amplitudes, thus indicating altered LTP-like cortical plasticity. The transient physiological MEP facilitation, however, was restored in the post-treatment evaluation only in the PEA-LUT group.
The pathophysiological cascade in patients with severe COVID-19 is related to over-stimulation of T cells and macrophages with a subsequent release of an enormous quantity of pro-inflammatory cytokines such as interleukins and chemokines that can result in multi-organ dysfunction (Yuki et al., 2020). Moreover, GABAergic neurons have a higher expression of ACE2 receptors (Chen et al., 2020; Mukerjee et al., 2019). If SARS-CoV-2 enters the brain, it has the potential to access GABAergic neurons, leading to functional impairment up to apoptosis and causing an excitatory-inhibitory imbalance (Ramani et al., 2020). Cytokine release from infected neurons and other activated microglia and astrocytes may also cause a decrease in GABA (Galic et al., 2012). GABAergic transmission is also impaired in hypoxic conditions, as also seen in severe COVID-19 patients with pneumonia (Versace et al., 2021).
The different pathways mentioned above through which PEA-LUT is able to enhance GABA-ergic transmission in the central nervous system may explain its effect on the GABA B -ergic (LICI) circuits found in the present study.
The molecular mechanism by which PEA-LUT improves LTP-like cortical plasticity is very likely related to the involvement of the cannabinoid system. PEA as an endocannabinoid anandamide congener, is known to modulate glutamatergic transmission mainly through cannabinoid CB1 receptor and the transient receptor potential vanilloid 1 and to restore LTP mechanisms (al-Ghoul et al., 1993;Basavarajappa et al., 2014;Boccella et al., 2019;Guida et al., 2015;Lutz et al., 2015;Zimmermann et al., 2019). Hippocampal PEA modulates reward and memory in mesolimbic areas through GPR55 receptors with the implication of glutamatergic projections emerging from ventral hippocampus (Kramar et al., 2017). Endocannabinoid signaling via anandamide or PEA is implicated in several neuronal functions and considered a potential therapeutic target for disorders associated with altered plasticity (Maccarrone, 2017;Zimmermann et al., 2019).
While we found that PEA-LUT had a beneficial impact on altered motor cortex physiology, we did not observe significant changes in the chosen cognitive measures.
MoCA and FAB, exploring global cognition and executive function respectively, exhibited a ceiling effect (Table 2), i.e., insufficient sensitivity to the patients' cognitive disturbances, thus preventing the possibility of observing cognitive improvement after treatment.
Future RCTs on selected groups of patients with more pronounced cognitive impairment or using more sensitive cognitive / behavioural outcome measures will be better able to assess the clinical impact of this or other treatments. Furthermore, studies with a longer observation period could evaluate the duration of the treatment effect.
In conclusion, the present RCT demonstrates that PEA-LUT is able to enhance GABA B -ergic neurotransmission and LTP-like cortical plasticity in long Covid patients.
The mechanisms of action with which PEA-LUT exerted these effects are not deducible from the current results alone and can be hypothesized on the basis of previous evidence, possibly depending on the reduction of central neuroinflammation or on the direct modulation of GABA-ergic and glutamatergic activity.

Figure legend: The graph represents the mean amplitude of motor evoked potentials (MEPs) at baseline (T0) and 1, 10 and 20 min (T1, T10, T20) after excitatory repetitive transcranial magnetic stimulation given as intermittent theta burst stimulation (iTBS) to the primary motor cortex for 190 s (600 pulses) in the co-ultramicronized palmitoylethanolamide/luteolin (PEA-LUT) (red) and PLACEBO (blue) patient groups at PRE (dashed line) and POST (solid line). A significant increase of MEP amplitude was evident 1 and 10 minutes after iTBS in the PEA-LUT group POST-treatment. ** P = 0.0009 and P = 0.01 in the PEA-LUT group for T1 and T10 in post-hoc testing, respectively. | 2022-11-11T14:10:06.900Z | 2022-11-01T00:00:00.000 | {
"year": 2022,
"sha1": "bce7af82453a415439e910ad770aee8220d55b52",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9650483",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "1357164ae2c90317aaefa45ee30039fe7b548a57",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
5916084 | pes2o/s2orc | v3-fos-license | Developmental regulation of a VEIDase caspase-like proteolytic activity in barley caryopsis
Caspases are essential in animal programmed cell death both as initiator and executioner proteases. Plants do not have close caspase homologues, but several instances of caspase-like proteolytic activity have been demonstrated in connection with programmed cell death in plants. It was asked if caspase-like proteases are involved during development of the barley caryopsis. The presence of a caspase-6-like proteolytic activity that preferentially cleaved the sequence VEID was demonstrated. A range of protease inhibitors was tested and only caspase-specific inhibitors showed major inhibitory effects. The profile of VEIDase activity in developing starchy endosperm, embryo, and whole caryopsis was measured and showed a general trend of higher activity in young, rapidly developing tissues. The VEIDase activity was localized in vivo to vesicles, shown to be autophagosomes, in randomly distributed cells of the starchy endosperm. The VEIDase activity detected in barley caryopsis is similar to activities described previously in mammals, spruce, yeast, and thale cress. In mammals, spruce, and yeast, VEIDase activity has been shown to be positively correlated with the occurrence of programmed cell death. Several manifestations of programmed cell death exist in developing barley caryopsis, indicating a connection between VEIDase activity and developmental programmed cell death in barley.
Introduction
Development of the cereal caryopsis is characterized by a number of well-defined phases (Johannsen, 1884; Olsen, 2004). The initial double fertilization results in a diploid ovule and a triploid endosperm. The endosperm undergoes rapid syncytial enlargement forming a coenocyte, a sphere-like structure surrounding a central vacuole. During the two following days, cellularization of the coenocyte occurs. The endosperm differentiates into two main tissues: a central region develops into the starchy endosperm and the peripheral cell layers form the aleurone layer. As the starchy endosperm cells fill up with starch and proteins, they die and the caryopsis desiccates. The aleurone cells remain alive until germination, fulfilling a vital role in mobilizing storage reserves in the starchy endosperm.
Programmed cell death (PCD) is an important part of caryopsis development (Young and Gallie, 2000a). Proper orchestration of PCD in various tissues may be essential for the correct development of the caryopsis. Transient tissues and cells are eliminated by PCD, enabling correct differentiation and shaping of the remaining tissues and organs. Several tissues of the caryopsis succumb to PCD at different stages of its development (Young and Gallie, 2000a). The proper onset and progression of PCD depends on the balance between various phytohormones (Young and Gallie, 2000b).
A group of cysteine proteases called caspases are central players in many forms of animal PCD. Caspases have an obligate cleavage preference for an aspartic acid immediately N-terminal of the cleavage site (Earnshaw et al., 1999). In mammalian PCD, caspases fulfil a role as integrators and propagators of cell death signals (Denault and Salvesen, 2002). Although no close homologues of caspases have been found in plants, several reports show the involvement of caspase-like activity in plant PCD (Rotari et al., 2005). Recently, two groups of proteases responsible for caspase-like activity in plants have been identified. The first encompasses subtilisin-like serine proteases, termed saspases (Coffeen and Wolpert, 2004), and the second includes vacuolar processing enzymes (VPE) belonging to the same clan of proteases as caspases (Hatsugai et al., 2004; Rojo et al., 2004).
The role of caspase-like activity in barley caryopsis development is investigated here. It was found that VEIDase is the principal caspase-like activity. This activity is developmentally regulated. Only caspase-specific inhibitors had a major inhibitory effect on the VEIDase activity. In vivo, this activity is localized to spherical vesicles randomly distributed throughout the starchy endosperm. The VEIDase-containing vesicles were shown to be autophagosomes, strengthening the link between the VEIDase activity and autophagic PCD.
Plant material
The barley (Hordeum vulgare L. cvs Barke, Midas, and Pongo) plants were grown in a climate chamber (Sun et al., 1999). Spikes were individually checked to determine the day of anthesis. Only grains from the central half of the spike were used.
Caspase-like activity assay
Starchy endosperm and embryos were isolated by hand dissection. Due to their smallness, it was not possible to isolate embryo and endosperm before 10 d post-anthesis (dpa) and 8 dpa, respectively. Isolated tissues were snap frozen in liquid nitrogen and proteins were extracted (Bozhkov et al., 2004). Caspase-like activity was measured as described previously (Bozhkov et al., 2004), with the minor modification of adding MgSO4 and CaCl2 at 1 mM to the assay buffer. All substrates used were fluorogenic with the general structure Ac-(X)XXXD-AMC (AMC, 7-amino-4-methylcoumarin; Peptide Institute Inc., Osaka, Japan), where X denotes amino acid residues at the P2 to P5 positions. All measurements were carried out at least three times on each sample.
In vivo staining of endosperm caspase-like activity

VEIDase activity can be visualized in vivo with the substrate VEIDN, included in the CyToxiLux®-PLUS kit (OncoImmunin, Inc., Gaithersburg, MD, USA). In vivo staining of intact starchy endosperm, cut into halves, was carried out at 10 dpa, and the staining of embryos was adapted from Bozhkov et al. (2004). To test the specificity, a set of samples was first incubated in 0.5× MS medium with 2% sucrose containing the VEIDase inhibitors, 50 µM Z-VEID-CHO (Peptide Institute Inc., Osaka, Japan) and 50 µM Z-VEID-FMK (FMK, fluoromethylketone; MP Biomedicals, Irvine, CA, USA), for 30 min prior to staining. VEIDase activity was visualized using a Zeiss LSM 510 confocal microscope with the argon laser 488 nm line and an LP filter of 505 nm. For detection of DAPI-stained nuclei, the diode laser 405 nm line and a detection window of 420-480 nm were used. The images were taken with a C-apochromate 40×/1.2 W objective lens.
Double staining of autophagic vesicles with MDC and CyToxiLux
Starchy endosperm, isolated as described above, was ground to a fine powder in liquid nitrogen using a mortar and pestle. The powder was mixed with an equal volume of a 1:1 mix of 100 µM monodansylcadaverine (MDC; Sigma) solution in PBS and CS solution from the CyToxiLux®-PLUS kit. The sample was mixed and incubated at ambient temperature for 1 h. An aliquot of the supernatant was applied to a microscope slide and viewed using a Zeiss LSM 510 confocal microscope with the blue diode laser 405 nm line and a BP filter of 420-480 nm for MDC, and the argon laser 488 nm line and an LP filter of 505 nm for the VEIDase CyToxiLux fluorophore. The images were taken with a C-apochromate 40×/1.2 W objective lens.
Double staining of autophagic vesicles with LysoTracker Red and CyToxiLux
Starchy endosperm were prepared and stained with CyToxiLux as described above, followed by a 30 min incubation in 5 µM LysoTracker Red DND-99 (Invitrogen, Carlsbad, CA, USA) in PBS with 2% sucrose and subsequently by three 10 min washes in PBS with 2% sucrose. The autophagic vesicles were visualized using a Zeiss LSM 510 confocal microscope with the argon laser 488 nm line and a BP filter of 505-530 nm for the VEIDase CyToxiLux fluorophore, and the HeNe laser 543 nm line and a BP filter of 560-615 nm for LysoTracker Red. The images were taken with a C-apochromate 40×/1.2 W objective lens.
In situ detection of DNA fragmentation
Terminal deoxynucleotidyl transferase (TdT)-mediated dUTP nick end labelling (TUNEL) was performed using the In Situ Cell Death Detection Kit, TMR red (Roche, Basel, Switzerland). Whole caryopses of five different ages, 4, 9, 13, 19, and 25 dpa, were fixed in 4% formaldehyde (for 12 h under vacuum), dehydrated through a graded series of ethanol and xylene, and embedded in Paraplast Plus (Sigma). The TUNEL staining was applied to thin sections (6 µm) and images were acquired using the HeNe laser 543 nm line and a detection window of 585-615 nm. The objective lenses used were plan-neofluar 10×/0.3, plan-apochromate 20×/0.75 and C-apochromate 40×/1.2 W. Positive controls (pretreating the sections with DNase before TUNEL staining) as well as negative controls (omission of TdT) were included.
Caspase-like activity in the barley endosperm
Assays for caspase-like activities in developing barley caryopses clearly indicated the presence of caspase-like proteases in the starchy endosperm. The highest activity was detected with a caspase-6 substrate, Ac-VEID-AMC (Fig. 1). The VEIDase activity was considerably higher at 10 dpa compared with 30 dpa.
In a first attempt to characterize the VEIDase caspase-like activity in barley endosperm, its sensitivity towards different protease inhibitors was tested. Only Z-VEID-CHO, a caspase-6 inhibitor, and Z-LEHD-CHO, a caspase-9 inhibitor, showed major inhibitory effects (Fig. 2A). The detected inhibition by the Complete Protease Inhibitor Cocktail with EDTA can be attributed to the presence of EDTA, indicating the involvement of a metal ion cofactor. Z-VEID-CHO, Z-YVAD-CHO, and calpain inhibitor I were further tested at different concentrations to determine their inhibitory strength (Fig. 2B). Based on those measurements, it was possible to interpolate the IC50 for Z-VEID-CHO to 0.34 µM and for calpain inhibitor I to 61 µM. The substrate and inhibitor specificity exhibited by the barley VEIDase activity is in agreement with other plant caspase-like and animal caspase activities, indicating the presence of a true caspase-like activity during barley caryopsis development.
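As an illustration of the interpolation step used to obtain such IC50 values, the following sketch (not the original analysis; the dose-response points are hypothetical) finds the 50% remaining-activity crossing point on a log-concentration scale.

```python
import numpy as np

def ic50(concs_uM, remaining_pct):
    """Inhibitor concentration at which remaining activity crosses 50%,
    by linear interpolation on a log10-concentration scale."""
    log_c = np.log10(np.asarray(concs_uM, dtype=float))
    # np.interp needs increasing x, so sort by remaining activity
    order = np.argsort(remaining_pct)
    return 10 ** np.interp(50.0, np.asarray(remaining_pct)[order], log_c[order])

concs = [0.01, 0.1, 1.0, 10.0]        # µM, hypothetical inhibitor dilutions
activity = [95.0, 70.0, 30.0, 10.0]   # % remaining activity, hypothetical
print(f"IC50 ≈ {ic50(concs, activity):.2f} µM")  # ~0.32 µM for these toy data
```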
VEIDase activity in barley endosperm, embryo, and whole caryopses
The profile of VEIDase activity in different tissues of developing caryopses was investigated next. The cleavage activity in the endosperm was highest in the early stages and then fell rapidly and remained at a low level throughout development (Fig. 3). The embryos exhibited high activity levels at the earliest developmental stages followed by a decrease to a steady-state level between 14 and 18 dpa, before it again increased at the end of embryonic development. In whole caryopses the activity peaked at 6 dpa and then decreased. The rise after 20 dpa might be attributed to the increase in embryonic VEIDase activity.
In situ localization of VEIDase activity in the barley caryopses
Having proven the existence of a VEIDase caspase-like activity in barley endosperm, the cell-permeable fluorescent marker CyToxiLux, containing the substrate VEIDN, was used to investigate if the activity could be localized in vivo. In young endosperm tissue, 10 dpa, brightly fluorescent foci were clearly visible after the addition of substrates (Fig. 4A). The foci appeared to be spherical with a diameter of up to 5 µm. Cells containing these fluorescent spheres were randomly distributed throughout the starchy endosperm with no evident pattern. In endosperm that had been pretreated with caspase inhibitors almost no fluorescent signals could be seen (Fig. 4B).
Double staining of VEIDase activity and autophagic vesicles
The presence of the VEIDase activity in vesicular structures within the endosperm cells could be indicative of an autophagic type of PCD, which is executed through gradual disassembly of the cells by the growing lytic vacuoles (Baehrecke, 2002; Bozhkov et al., 2005a). This type of PCD is a principal cell suicide strategy in plants, which are devoid of both specialized phagocytic cells and most of the molecular components of the apoptotic machinery. In plant PCD, lytic vacuoles arise from double-membrane vesicles called autophagosomes, which in turn originate from Golgi, proplastids or endoplasmic reticulum (ER; for a review, see Bozhkov et al., 2005a). Due to their acidic nature and membrane lipid composition, protease-containing autophagosomes can be specifically stained with monodansylcadaverine (MDC; Biederbick et al., 1995; Contento et al., 2005). In order to determine if the VEIDase-containing vesicles in the starchy endosperm were indeed autophagosomes, dual staining of cell contents with CyToxiLux and MDC was performed. The results clearly illustrate the co-localization of VEIDase activity and autophagosomes (Fig. 5A-C). Similar results were obtained by in vivo staining using LysoTracker Red DND-99, a red-fluorescent dye that stains acidic compartments of the lytic pathways in living cells (Fig. 5D-F). Taken together, these data demonstrate that the observed VEIDase activity in the barley endosperm exists within autophagosomes, which are an integral part of the plant PCD process.
Nuclear DNA fragmentation in barley caryopses at different developmental stages
As caspase-like activities have been linked to PCD in several plant systems, nuclear DNA fragmentation, as another hallmark of PCD, was investigated in starchy endosperm by subjecting sections of barley caryopses to TUNEL staining. Interestingly, no TUNEL-positive nuclei could be detected in the endosperm from samples at 4, 9 or 13 dpa, i.e. stages with high VEIDase activity (Fig. 6A, B, and data not shown). As expected, TUNEL-positive nuclei could be seen in the nucellus layer at 13 dpa (Fig. 6B) as well as at 4 dpa and 9 dpa (data not shown). In samples of 19 dpa, a small number of TUNEL-positive nuclei could be seen in the starchy endosperm (Fig. 6C, D) and the number increased even further in the 25 dpa sample (data not shown). Positive controls treated with DNase prior to TUNEL labelling revealed a large number of stained nuclei, demonstrating the validity of the procedure, whereas the negative controls, without TdT, showed no staining (data not shown).
Discussion
VEIDase, a proteolytic activity evolutionarily conserved across wide phylogenetic distances

It has been shown here that VEIDase is a principal caspase-like activity in developing barley caryopses. There are several reports in the literature of VEIDase activity in non-mammalian systems. Caspase-like activity detected during embryogenesis in spruce (Bozhkov et al., 2004), in A. thaliana seedlings (Rotari et al., 2005), and in yeast during oxidative stress- and senescence-induced cell death (Madeo et al., 2002) all show a similar pattern of relative cleavage rates of different caspase substrates to the caspase-like activity reported in this work. The substrate preference profiles of these caspase-like activities are also close to the one specified for mammalian caspase-6 (Talanian et al., 1997). Only in mammalian systems has the protease responsible for the VEIDase activity been isolated and shown to be a caspase-6. In yeast (Madeo et al., 2002) and spruce (Suarez et al., 2004), the VEIDase activity has been shown to correlate with metacaspase expression. However, metacaspases have a different substrate specificity than canonical caspases (Vercammen et al., 2004; Bozhkov et al., 2005b), indicating that the VEIDase activity detected in vivo is caused by proteases acting downstream of metacaspases in the cell death pathway. It is interesting to note that the four examples of VEIDase activity are separated by long phylogenetic distances. It thus appears that proteases involved in PCD that are very similar with regard to activity exist in angiosperms, both in dicots (Rotari et al., 2005) and monocots (this study), and in gymnosperms (Bozhkov et al., 2004), as well as in yeast (Madeo et al., 2002) and animals (Foley et al., 2004), indicating a striking case of convergent evolution.

Further studies are required to identify the protease(s) responsible for the VEIDase activity. Biotinylated caspase inhibitors have previously been used to isolate proteases with caspase-like activity (Faleiro et al., 1997). Several proteins in the starchy endosperm interact with the general caspase inhibitor, Biotin-VAD-FMK (M Borén, unpublished data). One of those proteins does not bind after pretreating the sample with the Ac-VEID-CHO inhibitor, indicating a VEIDase-specific binding pattern. Unfortunately, this protein has so far eluded all attempts at identification.

VEIDase activity was localized in vivo to spherical bodies throughout the starchy endosperm (Fig. 4). There are several reports of lytic organelles with a similar appearance in different plants during PCD. Those organelles have been shown to arise from ER (Hayashi et al., 2001; Schmid et al., 2001), Golgi or proplastids (Filonova et al., 2000). As has been shown, using double staining with known markers for lytic organelles (Fig. 5), the VEIDase-containing spherical shapes detected in the barley starchy endosperm can be classified as autophagosomes. There is growing evidence linking the protease-containing lytic organelles to PCD in plants (for a review see Hatsugai et al., 2006). It was recently shown that VPE possesses caspase-1-like activity (YVADase) as well as being localized to lytic vacuoles (Hatsugai et al., 2004; Rojo et al., 2004), thus providing a mechanistic link between caspase-like activity and lytic compartments during the execution of plant PCD. Isolation of plant proteases with VEIDase activity will be important for a better understanding of the mechanisms of autophagic PCD in plants.
Role of PCD during cereal caryopsis development.
Development of the barley caryopsis from fertilization to ripening involves several instances of PCD (Cejudo et al., 2002), and has been divided into four stages: (I) syncytial, (II) cellularization, (III) differentiation, and (IV) maturation (Bosnes et al., 1992). A fifth stage, (V) germination (G), needs to be added to cover all aspects of PCD during caryopsis development. To sketch an overview of PCD in the developing cereal caryopsis, available data from barley, maize, rice, and wheat have been superimposed onto the developmental timeline of the barley caryopsis (Fig. 7). Programmed cell death is first observed in the maternal nucellar and pericarp tissues as early as 5 dpa in wheat (Dominguez et al., 2001). The maternal tissues are progressively degraded, to supply nutrients and provide space, until the last specialized transfer cells in the nucellar projection finally denucleate around 18 dpa in wheat (Dominguez et al., 2001) and 24 dpa in maize (Kladnik et al., 2004). The high VEIDase activity correlates well with high levels of PCD in the young caryopses (Figs 3, 7). The first signs of PCD in the starchy endosperm, as indicated by the bar labelled a in Fig. 7, are manifested by denucleation starting as soon as cellularization is completed in rice (Lan et al., 2004) and by loss of membrane integrity as detected by electron spin resonance (ESR) in wheat (Golovina et al., 2000). In endosperm, the highest levels of VEIDase activity correspond with this early period of cell death (Figs 3, 7). This is interpreted to mean that VEIDase activity and PCD participate in early endosperm differentiation. Since PCD at this stage was not indicated by TUNEL staining, it appears that the VEIDase activity detected is uncoupled from DNA fragmentation during early seed development. PCD, as indicated by TUNEL and Evans blue staining, represented by the bar labelled b in Fig. 7, can be seen from 12 dpa in rice (Lan et al., 2004) and 16 dpa in wheat and maize (Young and Gallie, 1999). The suspensor cells are the first cells of the embryo committed to PCD (Bozhkov et al., 2005a), starting at 14 dpa in maize. A VEIDase activity has been shown to be essential for suspensor cell PCD during spruce embryo development (Bozhkov et al., 2004). In late embryonic development, c. 27 dpa in maize, a few cells in the scutellar node are still TUNEL positive as a result of vascular differentiation (Giuliani et al., 2002). In the present study, the two waves of embryonic PCD are reflected by the two peaks of VEIDase activity in the barley embryo (Figs 3, 7). A few days after germination the aleurone cells enter PCD (Fath et al., 2000). All the times indicated in Fig. 6 should be considered approximate. Based on available evidence it is our belief that all cereals have a conserved sequence of PCD during caryopsis development. It differs in details and timing, but the overall pattern and sequence of PCD events appear conserved between cereals. Based on the results, in combination with the available literature, we would like to suggest that the VEIDase activity found in barley starchy endosperm and in the embryo plays an important role during caryopsis development.
Fig. 1. Caspase-like activity in barley starchy endosperm. Cleavage activity against different caspase substrates is expressed as a percentage of the highest activity (VEIDase). Error bars denote SEM.
Fig. 2. Effect of protease inhibitors on VEIDase activity. (A) A range of protease inhibitors was tested for their ability to inhibit the VEIDase activity in vitro. The concentrations are given in the Material and methods section. Remaining activity is expressed as a percentage of activity without inhibitor. Error bars denote SEM. (B) Inhibitory effect of three inhibitors at various concentrations. Remaining activity is expressed as a percentage of activity without inhibitor. Error bars denote SEM.
Fig. 3. VEIDase activity during barley caryopsis development. VEIDase activity at various time points during development in starchy endosperm, embryo and whole caryopsis. Error bars denote SEM.
Fig. 4. In vivo localization and inhibition of VEIDase activity in central starchy endosperm as visualized by confocal microscopy after cleavage of the VEIDN substrate (green fluorescence) included in the CyToxiLux®-PLUS kit. Bars = 50 µm. (A) VEIDase activity and nuclei counter-stained with DAPI (violet fluorescence), overlaid by a full-spectrum light microscopic image. (B) VEIDase and nuclei counter-stained with DAPI (violet fluorescence). (C) Inhibition of VEIDase activity after incubation with a mixture of VEID-CHO and VEID-FMK. Nuclei were visualized with DAPI (violet fluorescence).
Fig. 5. Double staining of VEIDase activity and autophagosomes in the central starchy endosperm as visualized by confocal microscopy after cleavage of the VEIDN substrate (green fluorescence) included in the CyToxiLux®-PLUS kit. Bars = 10 µm. CyToxiLux-stained endosperm viewed with excitation and filter settings for LysoTracker Red and MDC, respectively, displayed no fluorescence (not shown). (A) Confocal microscopic image of in vitro VEIDase activity. Scale bar = 10 µm. (B) Confocal microscopic image of autophagosomes using monodansylcadaverine (purple fluorescence). Same area as in (A). (C) Composite confocal microscopic image of (A) and (B). Double-stained areas shown in light green colour. (D) Confocal microscopic image of in vivo VEIDase activity. Scale bar = 10 µm. (E) Confocal microscopic image of autophagosomes using LysoTracker Red DND-99 (red fluorescence). Same area as in (D). (F) Composite confocal microscopic image of (D) and (E). Double-stained areas shown in yellow colour.
Fig. 6. Nuclear DNA fragmentation in barley caryopsis slices. (A, B) Images showing TUNEL-stained sections of barley caryopsis at 13 dpa. Note the lack of DNA fragmentation in the starchy endosperm. Shown by arrows in (B) are TUNEL positive nuclei in the nucellus. se, starchy endosperm; a, aleurone; n, nucellus; p, pericarp. Scale bar = 100 µm. (C) Image of a TUNEL-stained section of barley caryopsis at 19 dpa. A few nuclei stain TUNEL positive (red), mostly in the region closest to the aleurone layer but also deeper into the starchy endosperm. Scale bar = 100 µm. (D) Image of a TUNEL-stained section of barley caryopsis at 19 dpa under higher magnification. Scale bar = 50 µm.
Fig. 7. Instances of programmed cell death during cereal caryopsis development. Black bars denote periods of cell death in the various tissues during the four stages (I-IV) of caryopsis development, representing, respectively, the syncytial, cellularization, differentiation, and maturation phase, and during germination (G; stage V). Periods of VEIDase activity in each tissue are denoted by filled stars. Numbers on the x-axis denote dpa. a, the first sign of PCD in the starchy endosperm; b, PCD as defined by TUNEL and Evans blue staining. | 2014-10-01T00:00:00.000Z | 2006-11-01T00:00:00.000 | {
"year": 2006,
"sha1": "6aaf9a27fc0c313ed9b23f528914011a017a8b2f",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/jxb/article-pdf/57/14/3747/1573148/erl136.pdf",
"oa_status": "HYBRID",
"pdf_src": "CiteSeerX",
"pdf_hash": "6aaf9a27fc0c313ed9b23f528914011a017a8b2f",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
254617280 | pes2o/s2orc | v3-fos-license | Parametric MRI Detects Aristolochic Acid Induced Acute Kidney Injury
Exposure to aristolochic acid (AA) is of increased concern due to carcinogenic and nephrotoxic effects, and incidence of aristolochic acid nephropathy (AAN) is increasing. This study characterizes renal alterations during the acute phase of AAN using parametric magnetic resonance imaging (MRI). An AAN and a control group of male Wistar rats received administration of aristolochic acid I (AAI) and polyethylene glycol (PEG), respectively, for six days. Both groups underwent MRI before and 2, 4 and 6 days after AAI or PEG administration. T2 relaxation times and apparent diffusion coefficients (ADCs) were determined for four renal layers. Serum creatinine levels (sCr) and blood urea nitrogen (BUN) were measured. Tubular injury scores (TIS) were evaluated based on histologic findings. Increased T2 values were detected since day 2 in the AAN group, but decreased ADCs and increased sCr levels and BUN were not detected until day 4. Significant linear correlations were observed between T2 of the cortex and the outer stripe of outer medulla and TIS. Our results demonstrate that parametric MRI facilitates early detection of renal injury induced by AAI in a rat model. T2 mapping may be a valuable tool for assessing kidney injury during the acute phase of AAN.
Introduction
The exposure to aristolochic acid (AA) has become an increasing concern due to its mutagenic, carcinogenic and nephrotoxic effects [1][2][3]. AAs are a group of compounds found in plants of the family Aristolochiaceae. These plants have long been used to treat various conditions, such as arthritis, abscesses, inflammation and chronic pain [2,4]. Millions of people have been affected by aristolochic acid nephropathy (AAN), which is recognized as a toxic interstitial nephropathy caused by ingestion of plants containing AA [5]. Despite measures that regulate AA consumption in many countries and regions, cases of AAN continue to occur, especially in areas where traditional medicine is popular [6]. It is estimated that more than 100 million people are at risk of developing AAN worldwide [7]. AAN has been shown to follow a biphasic evolution of renal injury, with the first, acute phase characterized by signs of proximal tubular cell swelling and necrosis [8][9][10][11][12]. The early diagnosis of acute kidney injury (AKI) induced by AA is of great clinical importance for disease management and effective therapeutic intervention [13,14]. Despite their well-known shortcomings, serum creatinine (sCr), a surrogate marker for glomerular filtration rate (GFR), and sCr-based estimated GFR (eGFR) are still used as the standard point-of-care metrics for the diagnosis of AKI in the vast majority of medical settings, in accordance with current clinical practice guidelines [15,16]. Their diagnostic utility in the case of AAN is, moreover, limited, as they are surrogate markers for GFR rather than markers for injury of the proximal tubules. Histologic examination, which is invasive and susceptible to sampling error, remains the mainstay for the diagnosis of AAN [4,13]. Therefore, a reliable non-invasive biomarker for early diagnosis, disease progression monitoring and prognosis of AAN is of urgent clinical need [4]. Magnetic resonance imaging (MRI) plays an important role in kidney disease research, not only providing excellent anatomical images but also probing renal pathophysiology with functional renal MRI techniques, such as blood oxygenation level-dependent (BOLD) image contrast, water diffusion weighted imaging (DWI) and magnetic resonance relaxometry [17][18][19][20]. Quantitative mapping of the relaxation time T2 is a well-established technique for (pre)clinical research. T2 is sensitive to both the free water content in tissue and the amount of paramagnetic deoxygenated hemoglobin (deoxyHb) per tissue volume (voxel). Therefore, T2 mapping allows the assessment of the degree of tissue edema as well as probing of renal BOLD [21][22][23][24][25][26][27][28][29]. DWI, which probes the tissue microstructure through the displacement of water molecules, has been shown to provide specific information on renal tissue and is considered a biomarker of renal functional MR imaging [30][31][32][33].
Previous studies demonstrated that changes of renal T 2 mirror characteristic pathophysiologic changes in the course of AKI development and its possible progression to chronic kidney disease (CKD). These include tissue hypoperfusion and hypoxia as observed, e.g., in the initial reperfusion phase following renal ischemia and upon administration of an x-ray contrast agent, edema formation and inflammatory infiltration, and finally, fibrosis [32,34,35]. In a study of kidney transplantation, the abrogation of the physiological gradient of T 2 across the kidney reflected the loss of tubular function and identified kidney grafts with acute rejection [36]. Restricted water diffusion, which manifests itself in a reduction of the apparent diffusion coefficient (ADC), has been reported in AKI when the interstitial space was narrowed due to infiltration of inflammatory cells or fibrosis [37][38][39].
Recognizing the clinical need for diagnosis, disease management and therapy of AAN, this study examines the applicability of quantitative MRI of T 2 and ADC for the assessment of early renal microstructural alterations and proximal tubular injury induced by AA.
Experiment Protocols
All animal experiments were performed under a protocol approved by the local Animal Ethics Committee (IACUC No.: L2018132). Twenty-eight male Wistar rats with an age of 5 to 6 weeks and an initial weight ranging from 140-160 g were obtained from the Animal Center of our institute. All rats were kept in cages, housed in an animal room with a stable environment (i.e., temperature, humidity and a 12 h light cycle) and provided with drinking water and rat chow ad libitum. Figure 1 shows the schematic representation of the experiment protocols performed in rats of the AAN group and the control group. Weight-matched rats were randomly assigned to the AAN group (n = 22) and the control group (n = 6). A previously described acute AAN model was used to induce AKI [11]. The AAN group received daily intraperitoneal injections of 40 mg/kg AAI (Sigma-Aldrich, St. Louis, MO, USA) dissolved in polyethylene glycol (PEG) for six days. The control group received daily intraperitoneal injections of PEG at the same dosage and schedule as the AAN group.
The rats were fasted for 12 h before the MRI examination, which was performed (at a set time of the day) for six rats from each group before, and at day 2, day 4 and day 6 after, AAI or PEG administration. Kidney and blood samples were collected from four rats after each MRI examination for the AAN group and after the last MRI examination for the control group.
Data Acquisition
MRI was conducted on a 7 Tesla animal scanner (Pharma Scan, Bruker Biospin, Ettlingen, Germany) using a volume RF coil for transmission and a 4-channel rat surface RF coil for signal reception. Rats were placed in supine position to ensure that the kidneys were close to the RF coil and to reduce kidney movement caused by respiration. Anesthesia was conducted by means of inhalation of isoflurane, 5% for induction and 2% for maintenance. Respiration was monitored and kept at a rate of 60 respiration cycles per minute using a ventilator. A circulating water blanket maintained at 38 °C was placed on the belly. High spatial resolution T2-weighted 3D Fast Spin Echo (FSE) imaging (TR/TE = 3000/66 ms; RARE factor = 20; FOV = 40 × 55 × 4 mm³; matrix = 200 × 275 × 20; voxel size = 0.2 × 0.2 × 0.2 mm³; number of averages = 1; scan duration = 13 min) was first performed for anatomical imaging.
MR Image Analysis
T 2 maps were calculated by a pixel-wise fitting using a mono-exponential fitting algorithm based on auto-regression on linear operation (ARLO) of multi-echo data [40] with Matlab 2014 software (MathWorks, Natick, MA, USA).
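The study performed this fit with an ARLO-based Matlab routine. Purely as an illustration of the underlying signal model, the sketch below shows a conventional pixel-wise mono-exponential least-squares T2 fit in Python rather than the ARLO algorithm itself; the echo times, array shapes, background threshold and initial guess are assumptions made for the example.

import numpy as np
from scipy.optimize import curve_fit

def mono_exp(te, s0, t2):
    # Mono-exponential decay S(TE) = S0 * exp(-TE / T2)
    return s0 * np.exp(-te / t2)

def fit_t2_map(echo_images, echo_times, noise_floor=0.0):
    # echo_images: (n_echoes, ny, nx) magnitude images; echo_times in ms
    n_echoes, ny, nx = echo_images.shape
    t2_map = np.full((ny, nx), np.nan)
    for y in range(ny):
        for x in range(nx):
            signal = echo_images[:, y, x]
            if signal[0] <= noise_floor:          # skip background voxels
                continue
            try:
                popt, _ = curve_fit(mono_exp, echo_times, signal,
                                    p0=(signal[0], 40.0), maxfev=2000)
                t2_map[y, x] = popt[1]            # T2 in ms
            except RuntimeError:
                pass                              # leave non-converged voxels as NaN
    return t2_map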
The principal component analysis denoising method was used for noise suppression on the original DWI images [41]. ADC maps were obtained by pixel-wise fitting of the diffusion-sensitized images to Equation (1) using a non-linear least squares method:

Sb = S0 · exp(−b · ADC)   (1)

where Sb and S0 denote the MR signals with and without diffusion weighting, and b is the diffusion weighting strength. Regions of interest (ROIs) were defined for the renal cortex (CO), the outer stripe of the outer medulla (OSOM), the inner stripe of the outer medulla (ISOM) and the inner medulla (IM). The ROIs were manually drawn on the right kidney. Mean values of T2 and ADC were estimated for each renal layer.
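For illustration only, a minimal Python sketch of the Equation (1) fit and the subsequent ROI averaging is given below; the original analysis was performed in Matlab after PCA denoising, and the b-values, ROI masks and function names here are placeholders rather than the study's actual settings.

import numpy as np
from scipy.optimize import curve_fit

def dwi_signal(b, s0, adc):
    # Equation (1): S_b = S_0 * exp(-b * ADC)
    return s0 * np.exp(-b * adc)

def fit_adc_map(dwi, b_values):
    # dwi: (n_b, ny, nx) diffusion-weighted images; b_values in s/mm^2
    n_b, ny, nx = dwi.shape
    adc_map = np.full((ny, nx), np.nan)
    for y in range(ny):
        for x in range(nx):
            sig = dwi[:, y, x]
            if sig[0] <= 0:
                continue
            popt, _ = curve_fit(dwi_signal, b_values, sig,
                                p0=(sig[0], 1.5e-3), maxfev=2000)
            adc_map[y, x] = popt[1]               # ADC in mm^2/s
    return adc_map

def roi_means(param_map, roi_masks):
    # roi_masks: dict mapping layer name ('CO', 'OSOM', 'ISOM', 'IM') to a boolean mask
    return {layer: float(np.nanmean(param_map[mask])) for layer, mask in roi_masks.items()}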
Biochemical Measurements and Histologic Evaluations
Blood samples were collected from the aorta and centrifuged at 3000 r/min for 10 min at 4 • C. The serum was stored at −80 • C for sCr and blood urea nitrogen concentration (BUN) measurements. sCr and BUN levels were measured using an automatic biochemical analyzer (AU480; Beckman Coulter, Pasadena, CA, USA).
Kidneys were harvested and fixed in formalin after being perfused with saline via the left ventricle while rats were under general anesthesia. Formalin-fixed tissues were embedded in paraffin. Longitudinal sections were cut at a slice thickness of 2 µm and stained with hematoxylin and eosin to evaluate glomerular and tubular injury. Semiquantitative determination of the tubular injury score (TIS) was carried out by methods reported previously [42,43]. Briefly, the tubular injury included the following categories: no injury (0), tubular epithelial cell flattening (1), brush border loss (1), cell membrane bleb formation (1-2), cytoplasmic vacuolization (1), cell necrosis (1-2), tubular lumen obstruction (1-2). Scores for all categories were accumulated for final tubular injury scoring, with the maximum score being nine.
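As a minimal sketch of how the semiquantitative score is assembled, the snippet below sums per-category scores within the ranges listed above and caps the total at nine; the category keys are illustrative shorthand, and assigning each category's value remains a histologic judgment.

TIS_RANGES = {
    'epithelial_flattening':     (0, 1),
    'brush_border_loss':         (0, 1),
    'membrane_blebs':            (0, 2),
    'cytoplasmic_vacuolization': (0, 1),
    'cell_necrosis':             (0, 2),
    'lumen_obstruction':         (0, 2),
}

def tubular_injury_score(category_scores):
    # category_scores: dict mapping category name to the assigned score
    total = 0
    for name, score in category_scores.items():
        lo, hi = TIS_RANGES[name]
        if not lo <= score <= hi:
            raise ValueError(f"{name} score {score} outside allowed range {lo}-{hi}")
        total += score
    return min(total, 9)   # maximum score is nine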
Statistical Analysis
Statistical analyses were carried out using SPSS 22.0 (IBM, New York, NY, USA). Data are presented as mean ± standard error of the mean (SEM). Longitudinal changes in T 2 and ADC after AAI injection were assessed by using one-way analysis of variance (ANOVA) for repeated measurements followed by post hoc multiple comparisons. Differences in T 2 and ADC between the AAN group and the control group were evaluated using unpaired t test. Correlations between TIS and MR parameters were determined by calculating the Spearman correlation coefficient. A p value lower than 0.05 was considered to be significant.
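The analyses were run in SPSS; purely as an illustration, the same comparisons could be set up in Python roughly as below. The long-format table layout, the column names and the use of statsmodels' AnovaRM are assumptions for the sketch, not a description of the study's actual workflow.

import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# df: long-format DataFrame with columns rat_id, group ('AAN'/'control'),
# timepoint ('baseline', 'day2', 'day4', 'day6'), layer ('CO', 'OSOM', 'ISOM', 'IM'), T2, ADC

def longitudinal_anova(df, layer, value='T2'):
    # Repeated-measures ANOVA over timepoints within the AAN group for one renal layer
    sub = df[(df.group == 'AAN') & (df.layer == layer)]
    return AnovaRM(sub, depvar=value, subject='rat_id', within=['timepoint']).fit()

def group_comparison(df, layer, timepoint, value='T2'):
    # Unpaired t-test between the AAN and control groups at one timepoint
    sel = (df.layer == layer) & (df.timepoint == timepoint)
    aan = df[sel & (df.group == 'AAN')][value]
    ctl = df[sel & (df.group == 'control')][value]
    return stats.ttest_ind(aan, ctl)

def tis_correlation(tis, mr_values):
    # Spearman correlation between tubular injury score and an MR parameter
    return stats.spearmanr(tis, mr_values)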
Results
High spatial resolution 3D T2-weighted MRI revealed structural changes of AAI-treated kidneys at different time points (Figure 2). Four renal layers can be distinguished with good tissue contrast at baseline. The contours are outlined with different colors in the zoomed image. The isotropic resolution of 0.2 mm even facilitated the visualization of the collecting ducts in the outer medulla (arrow). After the administration of AAI, the corticomedullary T2-weighted contrast differentiation (CMD) diminished at day 2 and day 4. The collecting ducts in the medulla could not be observed either at day 4. At day 6, the OSOM showed marked hyperintensity compared to CO and ISOM. No distinct changes of IM were observed from T2-weighted images.
Representative T2 maps of the AAN (upper row) and the control group (lower row) are illustrated in Figure 3. For kidneys without AAI, T2 showed an increasing gradient from CO to IM. For the AAN group, T2 of CO and OSOM increased up to day 6. The outcome of the statistical analysis of T2 of the AAN group is shown in Figure 4A. The T2 increase of OSOM was already detected at day 2 (baseline vs. day 2 p = 0.005), while that of CO was not detected until day 4 (baseline vs. day 4 p < 0.006). T2 of both layers was increased remarkably at day 6 (CO: baseline vs. day 6 p < 0.001; OSOM: baseline vs. day 6 p < 0.001). OSOM presented the most pronounced T2 change: T2,day2 = 53.2 ms to T2,day6 = 69.4 ms. T2 of ISOM and IM increased until day 4, followed by a sudden decline observed at day 6, with T2 relaxation times smaller than at baseline.
The comparison between T2 derived from the AAN and the control group is shown in Figure 4B. For the control group, T2 remained stable from baseline to day 6 for all four renal layers. T2 of CO and OSOM was significantly higher in AAN than in the control group at day 4 and day 6 (CO: day 4 p = 0.002 and day 6 p < 0.001; OSOM: day 4 p = 0.009 and day 6 p < 0.001). ISOM showed a T2 reduction (p = 0.013) only at day 6 if benchmarked against the control group. For IM, no significant T2 difference was detected between the AAN group and the control group. The T2 relaxation times of the AAN and the control group are summarized in Table 1. The ADCs obtained for the AAN and the control group are summarized in Table 2. The results derived from the statistical analysis of the ADCs of the AAN group are shown in Figure 5A. ADC of OSOM was significantly reduced at day 6, while ADC of ISOM was significantly reduced from day 4. The time course of CO and IM showed no significant changes. The ADC comparison between the AAN and the control group is illustrated in Figure 5B. The control group showed no ADC change from baseline through day 6. The AAN group yielded ADCs lower than those of the control group, particularly in OSOM and ISOM.
The results of sCr levels and BUN are shown in Figure 6. No difference in sCr was observed between the control group and the baseline of the AAN group. For the AAN group, an increased sCr level was only detected at day 4 (control vs. day 4 p = 0.009, AAN baseline vs. day 4 p = 0.005). The increase of BUN was not detected until day 4 (control vs. day 4 p = 0.012, control vs. day 6 p = 0.008, AAN baseline vs. day 4 p = 0.012, AAN baseline vs. day 6 p = 0.008). Renal tissue injuries of the AAN and the control group were investigated with hematoxylin-eosin staining (Figure 7). At baseline, examination of all kidney samples showed no significant abnormalities before treatment (Figure 7A). For the AAN group, the histologic lesions were mainly presented in proximal tubules. At day 2, the changes of the proximal tubular epithelial cells were characterized by vacuolization and swelling (Figure 7B, ∆). Inflammatory cell infiltration was detected (long arrow). At day 4, tubular necrosis (arrow head), loss of brush borders (*) and fragments of tubular epithelial cells (#) were observed (Figure 7C). The above tubular damages became diffuse at day 6 (Figure 7D). No signs of regeneration or interstitial fibrosis were observed. No histologic changes were found in the control group at day 6 (Figure 7E). The glomeruli remained intact in both groups. The AAN group revealed an averaged TISbaseline = 0, TISday2 = 3, TISday4 = 6 and TISday6 = 8. For the control group, TIS = 0 was found.
The correlations between TIS and MR parameters are illustrated in Figure 8. Significant linear correlations were observed between TIS and T2 of CO (r = 0.996, p = 0.004) and OSOM (r = 0.999, p = 0.008). No correlation was observed between TIS and ADC.
Discussion
This study examined the feasibility of parametric MRI for the assessment of renal injury in an AAI-treated rat model using the quantitative MRI metrics T 2 and ADC. T 2 -weighted MRI was able to separate four renal layers from each other with good tissue contrast and showed microstructural changes of different renal layers at each time point. Severe renal injury was detected with ADC mapping. T 2 of OSOM and CO presents changes earlier than the standard clinical metrics (sCr and BUN) for diagnosing AKI, and was significantly correlated with TIS.
T 2 mirrors free water content in tissue and the amount of deoxyHb per tissue volume. Thus, an increase of T 2 may result from increased free water, e.g., due to edema formation, a decrease in deoxyHb, or a combination hereof. Lowered blood oxygenation (i.e., decreased O 2 saturation of Hb) and a decrease in the blood volume fraction result in decreased deoxyHb per tissue volume. The latter can be induced by active vasoconstriction and by passive compression of intrarenal vessels. It is most probable that the observed changes in T 2 represent the sum of all these effects. In the face of the relatively rigid renal capsule, edema formation and influx of immune cells will result in an 'intrarenal compartment syndrome': as intrarenal pressure increases, intrarenal blood vessels become compressed [29,[44][45][46]. Studies on AAN revealed that the intoxication results in intrarenal vasoconstriction due to acute deficiency of vasodilatory nitric oxide (NO) [47]. Besides its effect on T 2 , passive compression and active vasoconstriction diminish O 2 supply, thereby resulting in tissue hypoxia. Renal tissue hypoperfusion and hypoxia are a pivotal early element in the pathophysiology of AKI of various origins as well as for the promotion from AKI to CKD [48][49][50][51][52][53].
In order to improve the delineation of the rat kidney, an organ with a size about 10 × 4 × 3 mm, 3D fast spin echo T 2 -weighted imaging with 0.2 mm isotropic spatial resolution was employed. At day 2 and day 4, the obscurity of the distinguishable CMD can be attributed to changes in T 2 that were more pronounced in the OSOM versus the CO. This could rely on a pronounced edema formation and/or pronounced decrease in deoxyHb in OSOM. It must be noted that the outer medulla (OM) is the area at risk in AKI, due to its sparse O 2 delivery compared to the high O 2 demands for active solute transports in the thick ascending limb of the loop of Henle and in the pars recta of the proximal tubule [48,49,53]. Therefore, the vicious cycle of hypoxia, cell damage, edema formation, and vessel compression in OM is pronounced versus the other layers. The marked hyperintensity obtained for OSOM at day 6 probably mirrors the further progression of these pathophysiologic events. In the current study, T 2 -weighted imaging with high spatial resolution and sufficient SNR has shown potential for the assessment of structural and blood oxygenation changes induced by AA.
The contrast changes presented on T 2 -weighted images of different renal layers could be further clarified and confirmed by the changes derived from T 2 mapping. Among the four renal layers, T 2 time course of OSOM showed the earliest change, followed by the T 2 time course of CO. T 2 of OSOM and CO were increased monotonically afterwards. In particular, T 2 of OSOM showed the most pronounced changes. It was reported in previous studies that the proximal tubule was the main target of AA, while the glomerulus was rarely found injured at early stages of AAN [9,10,48]. Our histologic analysis revealed that injury to the proximal tubules was mainly observed for OSOM, with some spatial extension to CO. This finding suggests that the rise of T 2 in OSOM and CO is in part attributable to proximal tubular epithelial cell swelling and inflammatory cells observed in histologic analysis. In addition, T 2 obtained for OSOM and CO revealed significant correlations with TIS. This indicates that T 2 mapping might be a useful technique for the assessment of AA-induced renal injury.
Similar to previous studies, in which T 2 found for OSOM (or OM) showed the most pronounced changes after ischemia/reperfusion injury (I/R) [32,34,35], our study demonstrated the pronounced T 2 changes upon AA intoxication for OSOM. While part of the underlying mechanisms differ, e.g., the primary toxic effect in AAN, our observations confirm that OM is the most vulnerable layer of the kidney in various forms of AKI. Our T 2 time courses of AAI treated kidneys showed no signs of decline within the observation period. This finding is in line with clinical observations: in many patients, AAN turns out to be a rapidly progressing disease [54].
The ADCs obtained for the AAN group were significantly lower than those of the control group, especially for OSOM and ISOM. Although the ADCs of the AAN group showed a decrease over time, significant differences between the time points were barely detected. ADC reduction obtained for OSOM at day 6 and ISOM at day 4 constitute an exception. Significant ADC reduction and T 2 prolongation suggest severe renal tissue damage at these time points. In previous studies, ADC reduction after AKI was reported to be associated with cell swelling and infiltration of inflammatory cells into the interstitial space [32,36,55]. Our results demonstrated ADC changes of OSOM only at day 6 after the intoxication of AAI. This indicates a decrease in the interstitial water fraction only at this stage of AAI intoxication. No correlation between ADC and TIS was observed. We hypothesize that several factors may have contributed to the insensitivity of ADC during the first 4 days. First, cell density may need to increase to a certain level to be detected by ADC changes. Second, besides apparent water diffusion, ADC also reflects blood flow in the microcirculation and tubular fluid flow in the kidney [38,39]. Hence, more sophisticated DWI models that deal with water diffusion, microcirculation and tubular flow separately such as IVIM or less constrained, data-driven non-negative least squares (NNLS) continuum approaches for renal DWI analysis should be employed in future studies [56][57][58][59].
In accordance with current clinical practice guidelines, in the vast majority of medical settings, the diagnosis of AKI is still based on the surrogate parameters for GFR, sCr and BUN. The elevation of these metrics was not detected until day 4 (sCr at day 4 and BUN at day 6) when tubular necrosis was shown. T 2 changes occurred earlier and showed significant elevation at day 2 when vacuolization and swelling of proximal tubular epithelia cells were observed. This observation is consistent with previous reports concluding that the assessment of kidney injuries with blood tests is feasible when kidney function was impaired severely [60][61][62][63]. Unlike sCr and BUN results, T 2 obtained for OSOM differed significantly among all time points. The well-known shortcomings of sCr and BUN for the detection of AKI of all etiologies include their low sensitivity, their exceptionally sluggish response related to their large volume of distribution, and the fact that they do not mirror early pivotal events in the pathophysiology of AKI [64,65]. Thus, T 2 mapping is conceptually appealing for monitoring progression of AA-induced disease in its initial phase. Our study demonstrated another advantage of T 2 mapping over sCr and BUN because T 2 mapping enables layer-specific and even voxel-based evaluation of kidney alterations.
T 2 showed a significant positive correlation with TIS, of which T 2 of OSOM revealed the highest correlation coefficient. This finding deems T 2 a suitable imaging marker candidate for the assessment of tubular injury during the acute phase of AAN.
A recognized limitation of our study is that the MRI examination covered only day 2 to day 6 after AAI administration. Recognizing the T 2 alterations in OSOM observed at day 2 already, we advise to use an earlier time point for further validation of the efficacy of renal T 2 mapping for the earliest possible detection of renal injury of AAN. Likewise, time points beyond day 6 should be studied in order to elucidate the further pathophysiologic events that lead to fibrosis and, ultimately, to CKD. It is an additional recognized limitation of our study that the sample size of the biochemical and histologic evaluations is relatively small.
In summary, parametric MRI facilitates assessment of microstructural and functional renal alterations induced by AA. MRI-derived parameters, in particular T 2 , enables detection of AAN earlier than the blood-based metrics used in today's standard AKI diagnostics. Our results, furthermore demonstrate that high spatial resolution morphological T 2 -weighted MRI supports visualization of early structural changes. T 2 mapping is more sensitive than DWI for the monitoring of early alterations and progression of kidney injury following AAI intoxication. This finding underlines the potential of T 2 mapping as a | 2022-12-14T16:08:33.500Z | 2022-12-01T00:00:00.000 | {
"year": 2022,
"sha1": "27f17e685fbcf932d209fae0d9bd48829f5bf092",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2379-139X/8/6/243/pdf?version=1670672449",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "575b71dcc7c9eefd767d2d050270e644a9b427d9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
189046281 | pes2o/s2orc | v3-fos-license | Research on Pushing the Individualized Online Learning Resources for College Students
: Education has ushered in the third great change in human history, and meeting the individual learning needs of each learner has become the main content of this educational change. Personalized learning promotes students to maximize their individual potential and meets the needs of current college personnel training. With the development of information technology, the internationalization of online learning content in China is becoming more and more obvious. It is the mission of online education in the information age to use the Internet and digital innovative technology to customize each student's own personalized learning resources. However, students 'personalized online learning faces many problems: the contradiction between massive online resources and college students' indiscriminate blind search; The inefficient use of these resources results in a waste of resources; It does not really realize the vision of customization, interactive learning, and so on. Based on the relevant theoretical foundation, we build a personalized online learning resource push model for college students.
Introduction
With the increasing globalization of educational technology, the internationalization of online learning content in China has become more and more obvious. Many Chinese students will participate in online learning with students from around the world and experience the advantages of online learning in terms of interactivity, participation and richness. In the context of globalization, the use of information technology to promote education reform is a necessary road, and information learning is a process based on the Internet to promote the combination of high-quality resources at home and abroad to learners.
In the vision of modern education, everyone is different, with their own unique interests, hobbies and learning needs. Personalized learning is to pay attention to and explore each student's unique endowment. Education in the Internet era breaks through the limitations of schools and classrooms. Through online learning, students can adopt unique learning paths, learning methods and learning strategies according to their own characteristics and needs, thus effectively promoting individual development and self-realization. This is the real picture of lifelong education and learning society that is emerging in the Internet era. With the popularization of network technology, the cost effectiveness of network hardware equipment and mobile terminal equipment is becoming more and more high, providing material guarantee for the development of personalized learning. In short, online resources are an important channel for college students to acquire knowledge and expand their horizons. It is the general trend of university teaching reform to apply various methods to carry out individualized learning.
Online learning resources
In the context of the rapid development of information technology, the ways and means of knowledge acquisition have undergone great changes. The traditional classroom teaching environment has gradually integrated into the network environment. We use various computer teaching software to present and beautify the teaching content, and make, publish, and share rich teaching resources. With the rapid development of a new generation of information technology such as computer, electronic technology and network communication technology, a new type of electronic learning resource, namely online learning resources, has emerged in the online learning environment. In general, online learning resources refer to various information resources for teaching and learning activities that can be operated in a multimedia computer or network environment after digital processing. Online learning resources are based on traditional learning resources, but they also give new characteristics, expand the connotation and extension of knowledge, improve teaching methods, improve teaching results, and are conducive to cultivating learners 'ability to learn independently.
Resource push model
Resource push model is a process flowchart that shows resource demand, resource push and resource feedback. The resource push model of this paper is to meet the requirements of university students to learn diversity, use large data, cloud computing and other technologies, effectively integrate a huge information database, and use related information technology means to build information resource push mode that is easy for college students to find and use. Based on the analysis of learners 'characteristics and resources' characteristics, resource push is carried out to provide learners with individualized learning resources based on the principle of association clustering. The characteristics of the resource push model in this study are: the resource is well-targeted, and the push method is diversified.
Connectivism Learning Theory
The communicative learning concept holds that the ability to do things and the method of learning are more important than knowledge itself. Pipeline is far more important than content. Connectivism learning must meet two basic conditions: First, learners have received education and have the confidence and ability to use the Internet to learn. Secondly, learners can judge the authenticity and correctness of information. Learners need to master the methods and ways of choosing, processing, filtering knowledge, and picking key points in order to obtain the knowledge content they need.
In the construction and application of a resource push model for college students, a two-feature "learner-resource" model is built on the principle of association clustering: learners are connected to other learners, resources are linked to other resources, and learners are linked to resources. In these one-to-many relationships, individual network resources and other social network resources interact with each other. Individuals can upload resources to share them with others, and each learner can also actively download resources. In a complex information network environment, effective resource push must be targeted: the students' majors, interests and learning needs have to be analyzed in detail, and the current situation and needs of online resource delivery for college students have to be understood through research, so that different types of resources can be pushed to different users in different ways. Only such personalized push enables learners to learn efficiently.
Multiple Intelligence Theory
The theory of multiple intelligence holds that human intelligence is diverse and consists of nine kinds of intelligence such as language, logic, space, sports, music, interpersonal, introspection, natural exploration, and survival. There are differences in the intellectual composition of different individuals. In the process of teaching, it is advocated that "student-centered" should be used to carry out teaching activities. Gardner emphasizes the plasticity, flexibility, and diversity of intelligence. He believes that intelligence depends on the potential or tendency that has been recognized or not yet recognized in the individual's cultural background. This potential or tendency is the ability to solve problems or produce products in a specific cultural or social environment. This kind of intelligence is formed by the individual's growth, development and mature construction of internal cognition and external cognition. The theory of multiple intelligence believes that evaluating students should be evaluated in one aspect. Because of the differences in individual intelligence and characteristics, their learning methods are different. Because of the diversity of intelligence, the evaluation of student learning should be participated by many people, and the content and method of evaluation should also reflect diversity. The personalized online resource delivery for college students emphasizes the importance of individual students, multiple intelligence, and independent selection. Therefore, the courses selected for students should involve a wide range, rich content, and diverse types. Students can consciously explore, discover, and develop their own intellectual potential according to their own interests. In the process of online resource delivery, attention should be paid to the design of the learning environment, interactive communication, evaluation feedback, etc., the understanding and extension of knowledge, and the conscious cultivation of students 'independent thinking ability and the ability to solve problems in life practice.
Humanism Theory
Humanistic theory emerged in the United States in the 1950s and 1960s. The theory holds that in the process of education and teaching, the learner's own position and meaning should be used to describe learning, not the position of others. Research on how to create a good environment for learners to perceive the world from their own perspective, understand the world, understand the world, and reach the highest level of self-realization. The purpose of humanism is to pay attention not only to the development of cognition in teaching, but also to the development of the emotions, interests, and motives of middle school students in teaching, and to pay attention to the understanding of the students 'inner psychological world in order to meet the students' interests, needs, experiences, and personality differences. To develop students 'potential, stimulate the interaction between cognition and emotion, and attach importance to the psychological aspects of creativity, cognition, motivation, and emotion to restrict behavior. Humanism learning theory pays attention to the main position of people in learning activities, putting respect for people, understanding people, and believing people first in education. In the design of personalized resource delivery for college students, we must pay attention to the study of the college students as a group, because all resources, environments, methods, technologies, etc. are for learners. In the design of resource push, we should give full consideration to the current situation and needs of college students 'personalized online learning. Humanistic learning theory emphasizes the all-round development of human beings and emphasizes self-development in the learning process. In the personalized online resource delivery for college students, we should focus on the "center" of students to elaborate the design of each link. Ensure the initiative and enthusiasm in the process of discussion and communication so that college students can realize their sense of self-existence.
The Design Concept of the Model
The design principles of the model are as follows. Relevance: association mining uncovers the interrelationships between data, while clustering, similar to classification, divides data according to their similarities and differences, so that data in the same category are highly similar. In this study, after the learner analysis and resource analysis, learners with common interests and learning needs are grouped by association clustering, and similar, related learning resources are pushed to them to improve learning efficiency and effectiveness (a minimal code sketch of this grouping-and-push idea is given after the design principles below).
Ease of use: The push model in this study embodies a learner-centered design idea. It actively creates an effective learning environment and atmosphere, attaches importance to guidance and help during online learning, designs an environment that mobilizes learners' enthusiasm for autonomous learning, and pays attention to cultivating learners' cooperative learning ability. The online learning environment, the choice of online learning technology, the design of teaching and learning resources, and the organization and implementation of activities all need to be easy to use, so that users can apply them quickly and conveniently. All of this reflects a learner-centered design concept for online resource push.
Utility: A product's user experience can directly determine its success or failure, and successful online resources must likewise offer a successful user experience. To know whether the user experience of the online resource push model is good, a user experience analysis needs to be established.
Precision: Online learning plays an increasingly important role in the reform of higher education, and online resources are widely used as one of the means by which college students obtain information. Based on the analysis of learner characteristics and resource characteristics, the online resource push model built in this study realizes personalized resource push for different college students, embodying the precision and demand-orientation of online resource push. Learners with different subject backgrounds can be grouped and sent the content they are interested in, achieving personalized, customized and precise point-to-point content push.
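As referenced under the Relevance principle above, the sketch below illustrates one possible reading of the association-clustering push: learners are grouped by their interest/need profiles and each cluster is sent the resources its members use most. The paper does not prescribe a concrete algorithm, so k-means, the feature encoding and the popularity heuristic are assumptions made purely for illustration.

import numpy as np
from sklearn.cluster import KMeans

def cluster_learners(interest_matrix, n_clusters=5):
    # interest_matrix: (n_learners, n_topics) array of interest/need weights
    model = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return model.fit_predict(interest_matrix)

def push_resources(labels, interactions, top_k=10):
    # interactions: (n_learners, n_resources) usage/rating matrix (numpy array)
    # Returns, per cluster, the indices of the resources most used by that cluster
    recommendations = {}
    for cluster in np.unique(labels):
        members = interactions[labels == cluster]
        popularity = members.sum(axis=0)
        recommendations[int(cluster)] = np.argsort(popularity)[::-1][:top_k]
    return recommendations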
Research Model Construction
The construction of this research model is based on the aforementioned theories of connectivism, multiple intelligence and humanism. Drawing on the current situation and needs of college students, university teachers, and online resource production and management personnel, on detailed statistical analysis, and on the design principles, bases and concepts described above, and with reference to relevant existing models, a personalized online resource push model for college students was finally built. The model is a hybrid, learner-centered online learning resource push model that uses the knowledge point as its unit, supported by big data and cloud computing and based on the association clustering principle. First, learner analysis is carried out using cloud computing, big data and related techniques; then a personalized, intelligent learning environment is constructed based on the characteristics of learners and resources. The personalized learning resource push for college students is divided into four types: intelligent push, proactive push, manual push and mixed push. The process of each of these four push types is analyzed in detail by means of semantic analysis. The model comprises nine major elements: learner analysis; personalized learning environment construction; analysis of resource characteristics; resource organization; resource push strategy; resource delivery platform; resource push type; resource delivery technology; and guarantee mechanism. The model adopts the principle of association clustering and takes college students as its center.
The related clustering principle of the model is embodied in: related clustering among various online learning resources; The related clustering among teachers, experts and learners; The related clustering between learners and other learners at home and abroad; Related clustering between resource platforms; Related clustering among online learning resources, learners, and online resource platforms.
The model is centered on learners: the ultimate goal of building the model is also to provide learners with the quality resources they really need to improve the online learning effect.
Conclusions and Perspectives
Personalized learning is based on the differences of learners, and with the support of advanced technology, flexible resource push methods are adopted to meet learners 'learning needs. This paper uses big data and cloud-related technologies, adopts the principle of related clustering, and constructs a two-feature analysis model based on the characteristics analysis of learners and resource characteristics analysis. And with the knowledge point as the unit to carry on the online learning resources personalized push, push the object for college students. The specific studies are as follows: This paper makes a detailed investigation and analysis on the push status and demand of individual online learning resources for college students. This study introduces the present situation and needs of online learning resource delivery for college students, and makes a questionnaire and analyses the reliability of the questionnaire questions. A detailed questionnaire survey was conducted for college students, university teachers, resource production staff, resource management personnel, and resource delivery personnel. The investigation method combines network investigation with field investigation. The results of the survey are statistically analyzed, and finally the results of the current situation of online learning resource push and the needs of college students for online learning resources are obtained. It will lay the foundation for the next research on model element analysis and model construction.
This study adopts the principles of related clustering, using the theory of associative learning theory, multiple intelligence theory, personalized learning theory, humanistic learning theory, etc. as the theoretical guidance, using related technologies such as big data and cloud computing. A dual feature analysis model based on learner feature analysis and resource feature analysis is constructed. This model uses a variety of resource push methods, including intelligent push, active push, manual push and mixed push. Among them, artificial push is divided into three types: teacher push, expert push and peer push. The resource form of the push is based on the knowledge point to realize the personalized learning resource push for college students.
This study constructs a personalized online resource push model for college students, which includes nine elements. They are learner feature analysis, dynamic resource feature analysis, personalized, intelligent online learning environment design, resource organization form, resource push strategy, resource push platform, resource push type, resource push key technology, and resource push guarantee mechanism. This paper expounds the nine elements of the model in detail and provides a theoretical framework for the research of personalized online resource push. | 2019-06-13T13:20:39.560Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "e392c52b9a6d97c32290ddf804cdacc72cff4cc0",
"oa_license": "CCBYNC",
"oa_url": "https://download.atlantis-press.com/article/25906374.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "67d506b65a3cd4fd8587e450e826693769946f04",
"s2fieldsofstudy": [
"Education",
"Computer Science"
],
"extfieldsofstudy": [
"Psychology"
]
} |
257879985 | pes2o/s2orc | v3-fos-license | Investigating Native Metal Ion Binding Sites in Mammalian Histidine-Rich Glycoprotein
Mammalian histidine-rich glycoprotein (HRG) is a highly versatile and abundant blood plasma glycoprotein with a diverse range of ligands that is involved in regulating many essential biological processes, including coagulation, cell adhesion, and angiogenesis. Despite its biomedical importance, structural information on the multi-domain protein is sparse, not least due to intrinsically disordered regions that elude high-resolution structural characterization. Binding of divalent metal ions, particularly ZnII, to multiple sites within the HRG protein is of critical functional importance and exerts a regulatory role. However, characterization of the ZnII binding sites of HRG is a challenge; their number and composition as well as their affinities and stoichiometries of binding are currently not fully understood. In this study, we explored modern electron paramagnetic resonance (EPR) spectroscopy methods supported by protein secondary and tertiary structure prediction to assemble a holistic picture of native HRG and its interaction with metal ions. To the best of our knowledge, this is the first time that this suite of EPR techniques has been applied to count and characterize endogenous metal ion binding sites in a native mammalian protein of unknown structure.
1) Experimental procedures
Protein purification
HRG was purified from rabbit serum (Sigma Aldrich, Poole UK) using immobilised Ni 2+ -affinity chromatography (HisTrap column; Cytiva, Little Chalfont, UK). Prior to purification the serum was centrifuged at 4,000 × g for 30 min at 4°C, filtered through a 0.45 µm filter (Sartorius, Epsom, UK) and imidazole was added to a final concentration of 5 mM. The resultant fractions containing HRG were further subjected to anion exchange (HiTrap DEAE FF column, Cytiva) and gel filtration (HiLoad Superdex-75 column, Cytiva) chromatography. All purification steps were carried out using an ÄKTA Purifier (Cytiva). Prior to any experiment, the purified protein was dialysed in the appropriate buffer.
Prior to titration, rabbit HRG and CuCl2 (Sigma Aldrich, Poole UK) were dissolved separately in ITC buffer (140 mM NaCl, 50 mM Tris, pH 7.4) to final concentrations of 10 µM and 1.5 mM, respectively, at 25°C and pH was re-adjusted to 7.4 to avoid mismatch conditions. The titration involved a single injection of 0.4 µl of CuCl2 over 0.8 s followed by 18 injections of 2 µl CuCl2 over 5 s with a 150 s interval between injections to allow full equilibration. The stirring speed was set to 750 rpm throughout.
Additionally, a control experiment was carried out whereby the titrant was injected into buffer and the resultant heats were deducted from those obtained in the main experiment.
Heparin binding assay
Zinc ion binding to HRG is known to increase the affinity of this protein to unfractionated heparin (UFH). In order to assess whether Cu II ions elicit a similar effect, binding of HRG to immobilised UFH in the presence and absence of Zn II and Cu II was performed. For this Costar Carbohydrate Binding Plates (Sigma-Aldrich, Poole, UK) were used. UFH (Acros Organics, Loughborough, UK) was activated with 15 mM sodium periodate according to the manufacturer's protocol. Purified rabbit HRG (rHRG) was biotinylated using a Biotinylation Kit (Stratech, Newmarket, UK) and then dialysed overnight at room temperature against 50 mM HEPES, 150 mM NaCl, 0.2 % Tween 20, pH 7.4. The activated UFH was dialysed twice for 1 h at room temperature against 0.1 M sodium acetate, pH 5.5. The activated UFH (10 µg/well) was then bound to the plate overnight. Wells were washed with 50 mM HEPES, 150 mM NaCl, 0.2 % Tween 20, pH 7.4 and then blocked with 50 mM Tris buffer pH 8.2, 0.2 % fish gelatine for 1 h at room temperature. Biotinylated rHRG was added to the wells (0-3 µM) for 2 h at 37 °C in the absence and presence of 50 µM ZnCl2 or 50 µM CuCl2. After washing with 50 mM HEPES, 150 mM NaCl, 0.2 % Tween 20, pH 7.4, binding was detected with alkaline phosphatase-linked streptavidin (1:10,000 dilution, ThermoFisher Scientific, Loughborough, UK) and p-nitrophenol phosphate substrate (200 µg/well, Sigma-Aldrich). The reaction was stopped with 3 M NaOH and the absorbance was read at 405 nm using a Dynex MRX spectrophotometer (Dynex Technologies, West Sussex, UK).
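The excerpt does not state how the plate-binding curves were analysed; a generic one-site binding fit of absorbance against biotinylated rHRG concentration might look as follows. The readings, the functional form, and the resulting apparent Kd are purely illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(conc, bmax, kd, bg):
    """Simple one-site binding isotherm: A = Bmax*[L]/(Kd + [L]) + background."""
    return bmax * conc / (kd + conc) + bg

# Synthetic A405 readings vs. biotinylated rHRG concentration (uM), 0-3 uM range.
conc = np.array([0.0, 0.1, 0.25, 0.5, 1.0, 1.5, 2.0, 3.0])
a405 = np.array([0.05, 0.22, 0.40, 0.62, 0.85, 0.97, 1.03, 1.12])

popt, _ = curve_fit(one_site, conc, a405, p0=(1.2, 0.5, 0.05))
print(f"Apparent Kd = {popt[1]:.2f} uM, Bmax = {popt[0]:.2f}, background = {popt[2]:.2f}")
```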
EPR sample preparation
Pseudo-titration samples were prepared with a protein concentration of 250 µM in buffer containing 50 mM Tris and 140 mM NaCl at pH 7.4 and Cu II from 1 to 20 molar equivalents (from 250 µM to 5.0 mM). The control sample contained no protein and 250 µM Cu II . Two batches of protein were prepared, the first one yielded the control sample and pseudo-titration samples from 1 to 10 molar equivalents of Cu II ; the second batch yielded pseudo-titration samples at 5 and 10 molar equivalents Cu II (repeat samples for reproducibility), and the 12, 15, and 20 molar equivalent samples. Samples were mixed with equal amounts of ethylene glycol for cryoprotection, resulting in a final protein concentration of 125 µM and Cu II concentration of 125 µM to 2.5 mM, transferred to 3 mm (70 µL sample volume) quartz EPR tubes, and immediately frozen in liquid nitrogen.
To determine the maximum binding capacity of HRG, four further samples were prepared at a final Cu II concentration of 2.5 mM, thus corresponding to the Cu II concentration in the 20 molar equivalents sample of the pseudo-titration. Samples had final protein concentrations of 50, 25, 12.5, and 6.25 µM, corresponding to 50, 100, 200, and 400 molar equivalents of added Cu II , respectively.
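As a quick consistency check on these compositions, the molar equivalents follow from the ratio of total Cu II to protein concentration; the short sketch below simply reproduces the numbers quoted above.

```python
# Cu(II) molar equivalents = [Cu] / [protein]; values from the sample description above.
cu_total_uM = 2500.0                      # fixed Cu(II) concentration, 2.5 mM
protein_uM = [50.0, 25.0, 12.5, 6.25]     # final HRG concentrations in uM

for p in protein_uM:
    print(f"{p:6.2f} uM HRG -> {cu_total_uM / p:5.0f} molar equivalents Cu(II)")
# Expected: 50, 100, 200, 400 equivalents, matching the text.
```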
Note that Tris buffer shows weak complexation of free copper ions, leading to 14 N hyperfine interactions. However, other commonly used buffers such as PBS buffer, lead to precipitation resulting in reduced Cu II -loading. 1 Indeed, there are very few (if any) options for buffers that do not complex free copper ions. This is highlighted by a review on the use of pH buffers and their interaction with metal ions, where all of the buffers examined (out of the 31 buffers listed) exhibited the capacity to undergo complexation with copper ions. 2 Therefore, for consistency and comparability ITC buffer (140 mM NaCl, 50 mM Tris, pH 7.4, see above) was used for EPR sample preparation.
Continuous wave (CW) EPR
CW EPR spectra were obtained at 120 K with a Bruker EMX 10/12 spectrometer running Xenon software and equipped with an ELEXSYS Super Hi-Q resonator at an operating frequency of ~9.5 GHz with 100 kHz modulation. Temperature was controlled with an ER4141 VTM Nitrogen VT unit (Bruker) operated with liquid nitrogen. CW spectra were recorded using a 160 mT field sweep centred at 310 mT, a time constant of 40.96 ms, a conversion time of 6.67 ms, and 8000 points resolution. An attenuation of 10.0 dB (20 mW power) and a modulation amplitude of 0.2 mT were used. CW spectra were phase-and background-corrected and the double integral was obtained using the Xenon software. Spectra were field-corrected using DPPH as a standard.
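Phase correction, background correction, and double integration were carried out in the Xenon software; a minimal stand-alone sketch of the double-integration step, applied to a synthetic first-derivative line, might look as follows (all values illustrative).

```python
import numpy as np

# Synthetic first-derivative EPR line (derivative of a Gaussian absorption) vs. field in mT.
B = np.linspace(230, 390, 8000)            # 160 mT sweep centred at 310 mT, 8000 points
absorption = np.exp(-0.5 * ((B - 310.0) / 5.0) ** 2)
spectrum = np.gradient(absorption, B)      # CW EPR records the field derivative

# Simple offset (baseline) correction using the spectrum edges.
edges = np.r_[spectrum[:200], spectrum[-200:]]
spectrum = spectrum - edges.mean()

# First integral -> absorption line; second integral -> double integral (∝ spin count).
dB = float(np.diff(B).mean())
first_integral = np.cumsum(spectrum) * dB
double_integral = float(np.sum(first_integral) * dB)
print(f"Double integral = {double_integral:.3f} (arbitrary units)")
```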
Pulse experiments
Pulse experiments were performed at X- (9.5 GHz) and at Q-band (34 GHz) both operating on a Bruker ELEXSYS E580 spectrometer, with probe-heads supporting a split ring resonator (4118X-MS3) for Xband and a 3 mm cylindrical resonator (ER 5106QT-2w in TE012 mode) for Q-band, respectively. Pulses were amplified by pulse travelling wave tube (TWT) amplifiers (Applied Systems Engineering) with nominal output of 1 kW and 150 W at X-and Q-band, respectively. Temperature was controlled via cryogen free variable temperature cryostats (Cryogenic Ltd) operating in the 3.5-300 K temperature range.
Temperature optimisation for PELDOR experiments was performed between 10 K and 50 K using the HRG sample with 2 equivalents of Cu II . Here, T2 (or Tm) was determined from a 2-pulse decay experiment using a stretched exponential decay for fitting, and T1 was determined from an inversion recovery experiment. The determined relaxation times were then used to assess the relative sensitivity per temperature as described previously. 3 T2 experiments were further performed on selected samples to investigate the change in relaxation behaviour between 1 and 400 molar equivalents of Cu II .
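A stretched-exponential fit of a two-pulse echo decay of the kind used to extract Tm could be sketched as below; the decay trace, parametrisation, and fitted values are illustrative assumptions rather than the actual analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, v0, tm, x):
    """Stretched-exponential echo decay V(t) = V0 * exp(-(t/Tm)**x)."""
    return v0 * np.exp(-(t / tm) ** x)

# Synthetic two-pulse echo decay (time axis = 2*tau, in microseconds) with noise.
rng = np.random.default_rng(0)
t = np.linspace(0.2, 8.0, 120)
data = stretched_exp(t, 1.0, 2.5, 1.3) + rng.normal(0, 0.01, t.size)

popt, _ = curve_fit(stretched_exp, t, data, p0=(1.0, 1.0, 1.0))
v0_fit, tm_fit, x_fit = popt
print(f"Tm = {tm_fit:.2f} us, stretch exponent x = {x_fit:.2f}")
```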
PELDOR experiments were performed using the four-pulse DEER [4-6] pulse sequence π/2(A) - τ1 - π(A) - (τ1 + t) - π(B) - (τ2 - t) - π(A) - τ2 - echo at 30 K as described previously, 7 with a frequency offset (pump - detection frequency) of +80 MHz (~3 mT). Shot repetition times were set to 300 µs; τ1 was set to 380 ns, and τ2 to 3300 ns. Pulse lengths used were 16 and 32 ns for π/2 and π detection, and 14 ns for the ELDOR π pump pulse. The pump pulse was placed on the resonance frequency of the resonator and ~1.7 mT away from the maximum of the spectrum to lower field, to allow placing pump and detection positions symmetrically around the maximum. PELDOR data were analysed using DeerAnalysis2015. 8 Raw experimental PELDOR data were cut at 2500 ns to remove artefacts at the end of the time trace, and background-corrected using a mono-exponential decay function with the background start point set to 1250 ns, before subjecting the trace to Tikhonov regularisation. A regularisation parameter α of 100,000 was chosen by visual inspection for all traces. Further statistical analysis of PELDOR data was performed as described previously, 9 using the validation tool of DeerAnalysis 2015. 8 Briefly, the background start time was varied from 5% to 80% of the dipolar evolution time in 16 trials and 50% random noise was added (level 1.50) with 50 trials, yielding 800 trials per trace. Trials were pruned, keeping only those where the root mean square deviation (rmsd) was within 15% of the rmsd of the best fit; these trials were further used for determination of the mean modulation depth and its error.
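Background correction was done within DeerAnalysis; a rough stand-alone sketch of the mono-exponential background-division step on a synthetic PELDOR trace is given below, with the 1250 ns background start point taken from the text and everything else assumed.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, a, k):
    """Intermolecular background B(t) = a * exp(-k * t)."""
    return a * np.exp(-k * t)

# Synthetic PELDOR trace: modulated foreground on a decaying background (t in ns).
t = np.arange(0, 2500, 8)
background = np.exp(-1.2e-4 * t)
foreground = 1 - 0.05 * (1 - np.cos(2 * np.pi * t / 800) * np.exp(-t / 1200))
trace = background * foreground

# Fit the background only beyond a chosen start point (here 1250 ns, as in the text).
mask = t >= 1250
popt, _ = curve_fit(mono_exp, t[mask], trace[mask], p0=(1.0, 1e-4))

# Divide out the background; the residual modulation carries the distance information.
form_factor = trace / mono_exp(t, *popt)
print(f"Apparent modulation depth ~ {1 - form_factor.min():.3f}")
```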
Hyperfine spectroscopy
3-pulse electron spin echo envelope modulation (ESEEM) 10-12 spectroscopy was performed at 30 K at X-band frequencies (~9.5 GHz) on the maximum of the field-swept spectrum, with a pulse length of 16 ns for π/2 detection and the inter-pulse delay τ set at the blind spot of the proton (~216 ns). The delay T, set at 280-300 ns, was incremented with a dwell time of 8 ns, and a 4-step phase cycle was used. Four τ dimensions were recorded, whereby the inter-pulse delay τ was incremented by half a proton modulation period; the first was selected for further processing and for setting up the HYSCORE experiment (see below). Data were analysed by fitting an exponential decay background function to the (phase-corrected) raw data, subtracting the background function from the raw data, and then dividing the difference by the background function, thus retaining amplitude information after fast Fourier transformation (FFT), similar to that described previously. 13 The resulting trace was further subjected to a Hamming window, zero-filling and FFT, before obtaining the absolute (or magnitude) spectrum.
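The 3-pulse ESEEM processing chain described above (background fit and division, Hamming window, zero-filling, FFT, magnitude spectrum) can be sketched on a synthetic modulation trace as follows; the decay constant and modulation frequencies are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic 3-pulse ESEEM time trace: exponential decay times a weak nuclear modulation.
dt = 8e-9                                   # 8 ns dwell time
t = np.arange(512) * dt
decay = np.exp(-t / 3e-6)
signal = decay * (1 + 0.08 * np.cos(2 * np.pi * 1.5e6 * t)
                    + 0.05 * np.cos(2 * np.pi * 4.0e6 * t))

# Fit and remove the exponential background, keeping relative modulation amplitudes.
exp_bg = lambda t, a, k: a * np.exp(-k * t)
popt, _ = curve_fit(exp_bg, t, signal, p0=(1.0, 1e5))
bg = exp_bg(t, *popt)
modulation = (signal - bg) / bg

# Apodise (Hamming), zero-fill, FFT, and take the magnitude spectrum.
windowed = modulation * np.hamming(modulation.size)
padded = np.pad(windowed, (0, 3 * windowed.size))      # zero-fill to 4x length
spectrum = np.abs(np.fft.rfft(padded))
freq_MHz = np.fft.rfftfreq(padded.size, d=dt) / 1e6
print(f"Strongest ESEEM peak at ~{freq_MHz[spectrum.argmax()]:.2f} MHz")
```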
Hyperfine sublevel correlation (HYSCORE) 14 spectroscopy was performed at 15 K at X-band frequencies (~9.5 GHz) and on the maximum of the field-swept spectrum with a pulse length of 16 and 32 ns for π/2 and π, respectively, set at the blind spot of the proton (~216 ns) as chosen from the 3-pulse ESEEM, t1 = t2 = 56 ns, and a 4-step phase cycle. Data were processed and analysed using the Matlab plugin HYSCOREAN, 15 employing Hamming apodization, zero-filling, 3 rd order polynomial background correction, and diagonal and anti-diagonal spectral symmetrisation, keeping a similar amount of noise for each spectrum by adjusting the minimum contour level percentage accordingly.
EDNMR 16-17 measurements were conducted by using an additional second frequency option (E580-400U) jointly with the Q-band setup described above at 30 K. Measurements were taken at low field (1.0240 -1.0630 T) and high field (1.1674 -1.1833 T) on a Hahn echo with an ELDOR HTA (high turning angle) pulse length of 8 µs and an ELDOR attenuation of 30 dB. Only low-field spectra are shown as they offer increased resolution. Note that directly coordinated nitrogen atoms visible in EDNMR would not contribute to the ESEEM signal under our experimental conditions (pulse lengths of π/2 = 16 ns at X-band).
Simulations
Representative numerical simulations of 3-pulse ESEEM data were performed with EasySpin, 18 using the implemented simulation function "saffron". Prior to simulation, data was background corrected, apodised with a Hamming window, zero-filled and Fourier transformed. The 3-pulse ESEEM simulations were conducted in the frequency domain. EDNMR simulations for the control and HRG with 1 equivalent of Cu II were performed as described previously. 19 The central blind spot was modelled by a sum of Gaussian and Lorentzian lines and subtracted from both raw data sets.
JNet version: 2.3.1; UniRef90 release: 2014_07, 09-Jul-2014.
Figure S2. AF2 prediction for rabbit HRG (UniProt Q28640). Colours indicate pLDDT confidence scores between 0 and 100; regions with very low pLDDT scores (below 50, orange) are likely to be unstructured or intrinsically disordered; note that the prediction does not inform on the relative likelihood of different conformations of such regions.
Figure S6. Stacked overlays of CW EPR spectra comparing four molar equivalents (1, 5, 10, and 20) of Cu II to illustrate the gradual decrease of SHF splittings. The right plot is zoomed into the SHF region for better visualisation.
3) Continuous wave (CW) EPR data
Decreasing resolution of SHF upon addition of Cu II was in agreement with earlier data. 20 While this could in principle be caused either by broadening from introduction of dipolarly coupled close-by Cu II or by (un-)specific binding of Cu II to non-histidine sites, the latter is unlikely as the resolution of the signal was lost more quickly from 10 molar equivalents of Cu II onwards than a mere dilution of one type of sites would explain. Thus, either there is dipolar broadening or existing binding sites change.
Considering an additive spectrum, where an additional component was added to the existing spectrum exhibiting 14 N SHF couplings, one could not explain our observations. Here, while the amplitude of the resolved SHF couplings seemed roughly halved going from 5 to 10 molar equivalents of Cu II , instead of being halved again going from 10 to 20 equivalents they have almost completely vanished at this Cu II loading. This suggested that the situation was more complicated and that spectra could not be simulated simply by assuming 10 binding sites with SHF and 10 binding sites without.
Instead, there was either spectral broadening involved or the SHF of the initial binding sites changed upon occupation of additional sites. This situation rendered simulations based on simply adding new species with added Cu II unfeasible.
Table S1. Measured values for AII and gII. DPPH was used as a reference.
EDNMR
Effect of buffer coordination: The control sample containing no protein showed peaks arising from the interaction of Cu II with nitrogen atoms of the Tris buffer (Figure S7).
Figure S7. EDNMR simulations for the control (Cu II in Tris buffer; left) and HRG with 1 molar equivalent of Cu II (right). Only the right-hand side of the EDNMR spectra is shown and the central blind spot has been subtracted from both raw data sets. Parameters used to simulate the EDNMR spectrum of Cu II in Tris buffer are given in Table S2.
A good agreement between experiment and simulation of both the control EDNMR spectrum and the spectrum in the presence of HRG could be obtained, not considering the peak at the proton Larmor frequency (as indicated in Figure S7). Details of the hyperfine parameters used for the simulations are given in Table S2. In the presence of HRG, EDNMR spectra showed additional peaks, with one coinciding with the proton Larmor frequency contribution, as well as a broad background feature. Based on the simulations, the positions of the defined new peaks confirmed direct coordination of at least two imino nitrogen nuclei of imidazole rings to Cu II with a mainly isotropic hyperfine coupling caused by significant electron spin density delocalisation onto these nuclei that could not be attributed to histidine residues forming the Cu II binding sites.
These hyperfine couplings were apparent already in the presence of 1 molar equivalent of Cu II , and EDNMR spectra remained virtually unchanged with increasing Cu II concentration up until 15-20 equivalents. This indicated that, within this metal ion ratio, all populated histidine binding sites showed very similar binding geometries. Speculatively, the broad background feature may be attributed to heterogeneity in the coordination of Cu II to the high affinity binding sites; however, it is important to note that no other specific couplings (which would indicate specific sites) were resolved.
Further increasing the relative amount of Cu II (50 and more molar equivalents of Cu II ) led to the loss of the histidine-associated peaks, suggesting that non-histidine binding sites became dominant. In agreement with the SHF observed in the CW EPR spectra, the histidine-associated peaks have been lost at 50 molar equivalents of Cu II and above. If one assumed a mere additive behaviour for the composition of the spectrum, then 40% of histidine-associated peaks should remain visible at 50 molar equivalents of Cu II .
These data led to the hypothesis that different metal ion binding sites have different relaxation behaviour, where Cu II bound to high-affinity sites relaxes faster and is thus contributing significantly less to echo detected experiments once lower-affinity sites are being occupied, as otherwise one would expect the hyperfine coupling to be recoverable even from the broadened spectra. Quantitative analysis of ESEEM peak amplitudes after Fourier transformation (FFT) and peak integration provided further support for our EDNMR data:
3-pulse ESEEM
At up to 20 molar equivalents of Cu II , the FFT amplitudes were relatively constant, while at higher molar equivalents (≥50) they were substantially reduced. The intensity of the DQ peak (or the DQ peak integral) depends on the number of histidine residues involved in the binding site. 21 DQ peak integral values were fairly stable between 5 to 15 molar equivalents of Cu II but decreased at 20 molar equivalents, suggesting that less histidine residues per Cu II were available for binding from this point.
At up to 15 molar equivalents, quantitation of the DQ peak suggested coordination of the Cu II by at least two imidazole rings, which was also in good agreement with simulations (see below). 13, 21 At 1 molar equivalent of Cu II the NQI and DQ peak integrals were slightly lower than between 5 and 15 equivalents, again suggesting binding to fewer than two histidine residues; the reason for this is not clear but might be due to some competition with the Tris buffer.
[Figure caption fragment: spectra shown on the right for HRG with 20 molar equivalents of Cu II , with simulations (red) demonstrating that a good fit to the experimental data (black) could be obtained assuming two remote histidine nitrogen atoms (see Table S3 for simulation parameters); cf. Figure 4C in the manuscript.]
In an attempt to also visualise the weak two-nitrogen double quantum transitions (visible in the ESEEM spectra at ~8 MHz), [22-23] we re-processed the HYSCORE spectra for 5 to 20 molar equivalents of Cu II allowing more noise. This processing revealed weak but visible peaks at the expected positions, shown below in the projection contour plots (panels: HYSCORE with 5 molar equivalents of Cu II ; HYSCORE with 10 molar equivalents of Cu II ), with the strongest peaks observed at 15 molar equivalents of Cu II , in line with our other observations.
Figure S13. Two-pulse echo decay for selected samples. Note the marked step from 20 to 50 molar equivalents of Cu II , with an additional component appearing in the decay.
Figure S14. PELDOR temperature optimisation. All optimisation experiments were performed with HRG + 2 molar equivalents of Cu II and 50% ethylene glycol for cryoprotection. T1 is measured with a 3-pulse inversion recovery experiment (top left), Tm with a 2-pulse decay experiment (top right). The optimum temperature was derived from the relative sensitivity per temperature (bottom row), taking into account the Boltzmann factor, the temperature-dependent Tm (or T2; i.e., how fast coherence is lost), and averaging (i.e., how fast populations re-equilibrate) as described previously. 3 An optimum temperature of 30 K was determined for PELDOR measurements from these relaxation data.
Figure S17. Exemplary PELDOR data for HRG with 1, 5, 10, and 15 molar equivalents of Cu II added. Raw and background-corrected data are shown in the left and middle graphs, respectively. Corresponding y-stacked distance distributions are given to the right. Shown are the 95% confidence estimates (± 2σ) of the distance distributions as obtained by statistical analysis. Colour bars represent reliability ranges as described in the DeerAnalysis 8 manual (green: shape reliable; yellow: mean and width reliable; orange: mean reliable; red: no quantification possible).
As shown above, PELDOR experiments yielded very broad distance distributions that we refrained from quantifying and that did not change significantly within confidence intervals between 1 and 20 equivalents of Cu II added.
Constructing a multi-site binding polynomial
We assume a speciation model with two different classes of metal ion binding sites for HRG, each with a different affinity, reflecting high- and lower-affinity sites. Simulation of the empirical modulation depths observed in the HRG + Cu II PELDOR pseudo-titration series requires a mathematical description of the protein-ligand binding equilibria of each species in solution, which can be achieved via a general multi-site binding polynomial. 24 The fractional population of each species, the macroscopic speciation vector f, is a function of the following parameters: the total protein concentration [P]0, the total ligand concentration [L]0, the dissociation constants KD1 and KD2, and the number of high affinity sites, n, and low affinity sites, m. These fractional macroscopically-bound populations are significant in the simulation of PELDOR modulation depths because each species contributes to the observed modulation depth with a weighting proportional to the product of its relative population and the number of spins present in that species. Here f is as defined above, λ is the inversion efficiency of the pump pulse, and N is the total number of spins in the system. The unmodulated echo contributions (1 − λ)^(N−1) are averaged over all species, with N ranging from 1 up to the maximum number of Cu II ions bound, taking into account the increase in Cu II signal and normalising by the contribution of each species to the signal at zero time. 9 PELDOR modulation depths were simulated using MATLAB, with the mean square error used as a metric for simulation quality.
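To make the speciation and modulation-depth calculation concrete, the sketch below evaluates a simple two-class independent-sites binding model and the spin-count-weighted modulation depth. It is a schematic re-implementation under stated assumptions (independent sites, free ligand solved from mass balance), not the authors' MATLAB code, and the parameter values are placeholders loosely based on the numbers quoted in the text.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import binom

def speciation(P0, L0, n, KD1, m, KD2):
    """Independent-sites model: solve for free ligand, then return the probability
    distribution of the number of bound Cu(II) ions per protein molecule."""
    def mass_balance(L):
        return L + P0 * (n * L / (L + KD1) + m * L / (L + KD2)) - L0
    L = brentq(mass_balance, 0.0, L0)            # free Cu(II) concentration
    th1, th2 = L / (L + KD1), L / (L + KD2)      # occupancy of each site class
    # Convolve the two binomial site-occupancy distributions -> P(N bound).
    p1 = binom.pmf(np.arange(n + 1), n, th1)
    p2 = binom.pmf(np.arange(m + 1), m, th2)
    return np.convolve(p1, p2)                   # index = number of bound ions N

def modulation_depth(pN, lam):
    """Spin-count-weighted PELDOR modulation depth over species with N spins."""
    N = np.arange(pN.size)
    weights = N * pN
    if weights.sum() == 0:
        return 0.0
    weights = weights / weights.sum()            # contribution to the echo at zero time
    return float(np.sum(weights * (1 - (1 - lam) ** np.clip(N - 1, 0, None))))

# Placeholder parameters loosely following the text: 125 uM protein, lambda = 0.015,
# n = 12 high-affinity sites (KD1 = 5e-8 M), m = 10 low-affinity sites (KD2 = 1e-5 M).
P0, lam = 125e-6, 0.015
for eq in (1, 5, 10, 15, 20):
    pN = speciation(P0, eq * P0, n=12, KD1=5e-8, m=10, KD2=1e-5)
    print(f"{eq:2d} eq Cu(II): simulated modulation depth = {modulation_depth(pN, lam):.4f}")
```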
Exploratory simulations of PELDOR modulation depths
Cu II -Cu II Q-band PELDOR measurements were performed on rHRG at 125 μM protein concentration, and with varying equivalents of Cu II , ranging in concentration from 125-2500 μM. Previous literature and empirical ITC data suggested there are 12 high-affinity ligand-binding sites (n = 12), with a KD (at 235 K) of 5 × 10^-8. The extrapolation of KD to low temperature was facilitated by application of the van't Hoff equation and is made necessary because the binding kinetics are fast with respect to the flash-freezing, and so EPR measurements reflect the binding equilibrium at the freezing-point temperature.
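The van't Hoff extrapolation of KD to the freezing temperature can be sketched as follows; the room-temperature KD and the binding enthalpy used here are placeholder values chosen only so that the extrapolated result lands near the 5 × 10^-8 quoted above.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def kd_at_temperature(kd_ref, T_ref, T, dH):
    """van't Hoff: ln K_A(T) = ln K_A(T_ref) - (dH/R)(1/T - 1/T_ref), with K_A = 1/K_D."""
    lnKA_ref = -np.log(kd_ref)
    lnKA = lnKA_ref - (dH / R) * (1.0 / T - 1.0 / T_ref)
    return np.exp(-lnKA)

# Placeholders: KD = 1.3e-6 M at 298.15 K and an assumed binding enthalpy of -30 kJ/mol,
# extrapolated to an effective freezing temperature of 235 K.
print(f"KD(235 K) ~ {kd_at_temperature(1.3e-6, 298.15, 235.0, -30e3):.1e} M")
```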
The inversion efficiency (λ) of a 16 ns rectangular pump pulse respective of Cu II spectra at Q-band frequency, and with an offset of ~150 MHz was demonstrated to be 1-2%. 25 For the subsequent simulations, inversion efficiency was approximated as 1.5% (λ = 0.015), unless otherwise stated. It should be noted that for the KD and number of low affinity sites, there was little literature or empirical data to corroborate accurate values.
Since approximate values are available for all other parameters, we first investigated the influence of the number and KD of the low affinity sites. The experimental modulation depths and 95% confidence intervals are given in Table S7, with the corresponding Cu II concentrations. Figure S18 shows error surfaces for value pairs of (m, KD2) when comparing simulated values to the empirical data, and clearly demonstrates the relative insensitivity of the simulation to the number and affinity of the second class of sites. Table S8 gives the error minima for value pairs of (m, KD2), and indicates that as the number of low-affinity sites increases, the affinity correspondingly decreases, as would be expected for a constant modulation depth. While the global error minimum is at (m = 3, KD2 = 1.0 × 10^-5), the error minimum is not particularly pronounced, and the experimental data are also reasonably described by m = 5, 10 and 15.
Table S8. The minimum mean square error of each trace shown in Figure S18 (left) and the corresponding optimum value of KD.
For the first 12 equivalents of Cu II added to the HRG, the modulation depth increases approximately linearly, implying that binding is quantitative (i.e., n ≥ 12 and KD1 << 1.25 × 10^-4). Since Δ continues to increase for 15 and 20 molar equivalents of Cu II , this implies that HRG continues to bind Cu II , and that the low affinity sites are not fully saturated in the regime of Cu II concentrations initially investigated. Therefore, this suggests that m ≥ 8 (m = 8 presupposes that binding is quantitative, which is unlikely given that the optimal low-affinity KD values predicted by the simulation (for m ≥ 5) are > 1.25 × 10^-4).
It is more probable that the true value of m is significantly higher than 8, since this would account for the continuously increasing Δ and the non-linearity of the increase for 15 and 20 equivalents of Cu II (many sites being sparsely populated), however one cannot use the simulations to make precise inferences about the number and KD of the low affinity sites.
Contrarily, it can be shown that to satisfactorily describe the observed data, particularly for the first 12 equivalents, n ≥ 12, as illustrated in Figure 5 in the main text.
This is highly consistent with the previous literature and empirical ITC data, and values greater than n = 12 lead to a simulated modulation depth profile which begins to plateau at higher concentrations of Cu II and therefore overestimates Δ at 20 equivalents. It should also be noted that regardless of the KD1 chosen for the simulation, the error function is always minimized for n = 12. In light of CW EPR measurements that indicated sustained Cu II binding up to 100 equivalents of Cu II with respect to HRG protein, further simulation was performed using 100 low-affinity binding sites (m = 100), with a dissociation constant of ~5.0 × 10^-2. Results are shown below. | 2023-04-02T06:17:22.802Z | 2023-03-31T00:00:00.000 | {
"year": 2023,
"sha1": "382cfc3f2d873ded78daed1f7b1c5500bb100de0",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1021/jacs.3c00587",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "89fb6d981517ffed750f893cfce949cec3315c62",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119172608 | pes2o/s2orc | v3-fos-license | Topological effects on transition temperatures and response functions in three-dimensional Fermi superfluids
We investigate the effects of topological order on the transition temperature, $T_c$, and response functions in fermionic superfluids with Rashba spin-orbit coupling and a transverse Zeeman field in three dimensions. Our calculations, relevant to the ultracold atomic Fermi gases, include fluctuations beyond mean-field theory and are compatible with $f$-sum rules. Reminiscent of the $p_x + i p_y$ superfluid, the topological phase is stabilized when driven away from the Bose-Einstein condensation and towards the BCS limit. Accordingly, while experimentally accessible, $T_c$ is significantly suppressed in a topological superfluid. Above $T_c$, the spin and density response functions provide signatures of topological phases via the recombination or amplification of frequency dependent peaks.
This leads to the goals of this paper. Here we aim to determine (in the case of intrinsic pairing) how a transition from a trivial to a topological phase is reflected in the superfluid transition temperature T c . Additionally, we show how and when a transition in the bandstructure can be experimentally detected via studies of the finite temperature density-density and spin-spin correlation functions. The ultracold atomic Fermi gases are ideally suited for tuning between trivial and topological phases. As in the p x + ip y superfluid, from the perspective of T c , we find these intrinsically paired superfluids self-consistently adjust to stabilize topological phases in the BCS regime. This occurs despite the fact that moderate SOC reinforces BEC behavior through enhanced pairing [17][18][19][20], even in the normal state [14,21,22].
Experiments require the consideration of non-zero temperature T . Although studies of the ground state have been the focus [12], finite T effects have been included in the literature [13,23] at the mean-field level. A major weakness of this approach is that computing T c in this manner does not reflect the topological band-structure, which depends importantly on the existence of a normal state pairing gap. Here we remedy this inadequacy through the inclusion of fluctuations [21,24], and also establish that T c is experimentally accessible.
There are proposals in the literature which suggest that the topological phase might be observed in atomic Fermi gases through the compressibility κ [13,14] or via radio frequency (RF) based probes [12]. However, changes in κ appear to reflect topology only in the limit of small SOC [13,14]. RF experiments in principle measure the electronic dispersion, but resolution and finite temperature broadening effects are not yet [25] well controlled.
Here we introduce an alternative probe: the frequency dependent density-density or spin-spin correlation functions [26] at temperatures T > T c . The position or threshold of peaks in these responses, importantly, reflects band-structure. In the topological phase we find that a peak in the density response is significantly amplified due to a saddle point Van Hove singularity, often seen in correlated superfluids [27,28]. In the trivial phase the spin response exhibits two distinct peaks, which merge into a single peak in the topological phase.
The many-body Hamiltonian is of the Bogoliubov-de Gennes (BdG) form, where ∆ is a pairing gap and H̄_0(k) = σ_y [H_0^*(−k)] σ_y is the time-reversed single-particle (hole) Hamiltonian. There are four branches in the BdG eigenvalue spectrum, ηE_{αk} for α, η = ±1, with positive energy dispersion E_{αk}. A three-dimensional superfluid described by the above BdG Hamiltonian belongs to one of three distinct topological phases. The topological phase diagram is specified by inequalities derived from solving E_−(k_z, k_⊥ = 0) = 0. No nodes appear when b_z < ∆, corresponding to a non-topological or "trivial" superfluid. If µ > 0 and (µ² + ∆²) > b_z² > ∆², the topological superfluid has four nodes (4-Weyl points), which emerge on the k_z axis. Finally, for arbitrary µ, the system is a topological superfluid with two nodes (2-Weyl points) when b_z² > (µ² + ∆²) [13,15,23]. For Rashba SOC, the dispersion around the nodes is linear in momentum, and is described by a Weyl Hamiltonian with topologically protected nodes.
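The three regimes can be checked numerically. The sketch below assumes the standard quasiparticle dispersion for a Rashba + transverse-Zeeman + s-wave BdG Hamiltonian (the explicit formula is not reproduced in this excerpt) and evaluates the gap of the lower branch along the k_z axis, together with the phase assignment from the inequalities quoted above; parameter values and units are illustrative.

```python
import numpy as np

def E_minus(kz, kperp, mu, delta, bz, lam, m=0.5):
    """Lower BdG quasiparticle branch for Rashba SOC + transverse Zeeman field +
    s-wave pairing (standard form, assumed here since the excerpt omits the formula)."""
    xi = (kz**2 + kperp**2) / (2.0 * m) - mu
    h2 = bz**2 + (lam * kperp)**2
    inner = np.sqrt(xi**2 * h2 + delta**2 * bz**2)
    return np.sqrt(np.maximum(xi**2 + delta**2 + h2 - 2.0 * inner, 0.0))

def classify(mu, delta, bz):
    """Topological phase assignment from the inequalities quoted in the text."""
    if bz**2 > mu**2 + delta**2:
        return "topological (2 Weyl nodes)"
    if mu > 0 and mu**2 + delta**2 > bz**2 > delta**2:
        return "topological (4 Weyl nodes)"
    return "trivial (fully gapped)"

# Illustrative parameters (energies in units of EF, momenta in units of kF, m = 1/2).
kz = np.linspace(0.0, 2.5, 4001)
for mu, delta, bz in [(0.5, 0.4, 0.2), (0.5, 0.2, 0.4), (0.1, 0.2, 0.6)]:
    gap = E_minus(kz, 0.0, mu, delta, bz, lam=1.0).min()
    print(f"mu={mu}, Delta={delta}, bz={bz}: min E_-(kz, kperp=0) = {gap:.3f} "
          f"-> {classify(mu, delta, bz)}")
```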
To compute the transition temperature T_c, we build on the well-established mean-field theory [13,15,23] and incorporate fluctuation effects in a consistent fashion [24,29]. We write the mean-field gap equation [12,13,23] as Eq. (3), where f(x) is a Fermi distribution and g < 0 is an attractive interaction. Where relevant, we regularize integrals by introducing a scattering length, defined in the standard way [30]. The coherence factor v_{ηαα}(k, k) (and its generalization, v_{ηαα}(k, k − q)) is presented in the Supplemental Material [31]. Their specific form is irrelevant for the present discussion. One has to distinguish T_c from the lowest temperature, denoted T*, at which the mean-field gap equation satisfies ∆(T*) = 0. Such an analysis requires a natural extension [21,24] of Eq. (3) to finite Q ≡ (iω, q) (where iω is a Matsubara frequency), given in Eq. (4). From the structure of Eq. (3) it is apparent that Γ(0) depends on both the full energy spectrum E_{αk} as well as the bare energy ξ_{αk}. Thus, one might expect (as implemented in Eq. (4)) that the fluctuation corrections should depend on an asymmetric combination of bare and dressed Green's functions [32]. The quantity Γ(Q) has been of interest [17,23] for computing the binding energy and mass of the pairs associated with the isolated two-body physics. We emphasize that to describe pairs which are intrinsic to the many-body system, one should not set ∆ and µ to zero, as is usually done [17,23]; the many-body state is not simply a gas constructed from entities of the two-body problem. In 3D there are metastable or resonant pairs for all parameters below T*, whereas the two-body bound states only exist for positive scattering length. Furthermore, for sufficiently large b_z, the effective mass in the two-body problem diverges near unitarity [23], and is not defined at negative scattering length.
To characterize the mass of the many-body system, consider the vertex in Eq. (4) expanded at small momenta; M_∥ (M_⊥) is the effective pair mass for the component of momentum parallel (perpendicular) to the SOC vector. While it is sometimes possible to calculate the effective pair masses M_⊥, M_∥ analytically, in general this is not necessary. Rather, it suffices to calculate numerically the second-order derivative at small q [33].
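The numerical route to the pair mass mentioned here amounts to a finite-difference second derivative of the inverse vertex at small q; a generic sketch with a quadratic toy stand-in for Γ^{-1}(0, q) (the true vertex is not reproduced in this excerpt) is shown below.

```python
import numpy as np

def second_derivative(f, x0, h=1e-3):
    """Central finite-difference estimate of f''(x0)."""
    return (f(x0 + h) - 2.0 * f(x0) + f(x0 - h)) / h**2

# Toy stand-in for the small-q expansion of the inverse vertex along one direction:
# Gamma_inv(q) ~ Gamma_inv(0) + q^2 / (2*M_pair).  Here M_pair = 1.7 is an assumed value.
M_pair_true = 1.7
gamma_inv = lambda q: 0.05 + q**2 / (2.0 * M_pair_true) + 0.01 * q**4

curvature = second_derivative(gamma_inv, 0.0)        # = 1 / M_pair at q = 0
print(f"Extracted pair mass M = {1.0 / curvature:.3f} (true value {M_pair_true})")
```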
Notice that the vertex function Γ(Q) reflects, up to a constant, the non-interacting Green's function of a thermal Bose gas with pair dispersion ω_q, below the condensation temperature (Γ^−1(0) = 0). We interpret this Q = 0 contribution by considering the quantity in Eq. (5), proportional to a Bose occupation number n_B. This characterizes the excitation gap in the limit in which all pairs are non-condensed, but in which T_c is approached from below. The Thouless condition (describing the instability of the normal state), or divergence of Γ(0), requires that the above expression for ∆² lies on the mean-field curve determined from Eq. (3). The condition for T_c is then simply obtained [21,24,29] by equating the constraint on ∆(T_c) via Eq. (5) with that obtained from the mean-field gap equation in Eq. (3). Importantly, all these arguments can be generalized so that the computation of T_c as a fluctuation correction to any BCS-BEC mean-field theory (including LOFF-like phases) is now accessible, based on writing Γ(Q) as a natural extension of the appropriate mean-field vertex function (e.g., Eq. (4)). This approach is distinguished from other BCS-BEC theories [34][35][36] by the fact that the transition temperature is dependent on a normal state excitation gap; in this way T_c will reflect changes in band-structure associated with the transition from trivial to topological phases. It is also distinguished by the fact that the present approach avoids the unphysical first order transition found in all other BCS-BEC theories [37]. We will assume throughout that, above T_c, the mean-field gap represents a reasonable approximation [38] for the normal state ∆.
Phase diagram.− To understand the effects of SOC and the Zeeman field on condensation and pairing, we numerically compute T* and T_c, varying 1/k_F a, b_z, and λ. Where relevant, we measure quantities in terms of the Fermi momentum (k_F), energy (E_F), or temperature (T_F). It is convenient to define a shifted chemical potential, δµ = µ − µ_0, where µ_0 = −max{E_so(1 + b_z²/E_so²)/2, b_z} and E_so = λ²/m is the SOC energy. In this way, a necessary (but not sufficient) condition for a topological phase is that δµ > 0. The plots in Fig. 1 are consistent with earlier work [14]. A close analogy between varying λ and varying 1/k_F a is seen in Fig. 1(a) and Fig. 1(b). We define "weak" or "enhanced" pairing relative to b_z = 0. The former is associated with small λ or negative 1/k_F a, while the latter corresponds to either large λ or large positive 1/k_F a. Thus, Fig. 1(c) is characteristic of the generic weak pairing regime while Fig. 1(d) is characteristic of the strong pairing case produced by either large 1/k_F a or large λ. We analyze the top two figures by focusing on a decreasing abscissa, which effects a transition from a trivial to a topological phase (shown as shaded). In Fig. 1(a), corresponding to 1/k_F a = 0, b_z = 2.5E_F, this transition is driven by varying the SOC strength λ. In Fig. 1(b) it is driven directly by varying the scattering length 1/k_F a; somewhat after the point δµ > 0 is crossed, a further decrease in 1/k_F a (towards the BCS limit) allows the system to reach a topological phase. Here we see a series of two transitions from topologically trivial to 4-Weyl and then to 2-Weyl superfluids. While there is some initial decline in T_c with diminishing 1/k_F a, the most significant decrease in T_c occurs in the 2-Weyl case.
The next two panels contrast the regime of weak pairing ( Fig. 1(c)) with that of enhanced pairing ( Fig. 1(d)). In the first case, the system is BCS-like everywhere. Increasing b z gradually suppresses T c and there is no clear signature in T c of the change from a trivial to a topological phase (shown as shaded in the figure). As shown in Fig. 1(d), when the pairing is enhanced, T c becomes insensitive to variations in the Zeeman field until δµ = 0.
Shortly thereafter, the topological phase transition is crossed and T c rapidly declines.
We can see from the last figure, in particular, that the satisfaction of the topological inequality and the δµ = 0 condition importantly define a transition (often quite sharp, as in Fig. 1(d)) between a superfluid characterized by a larger gap and a larger pair mass, M_⊥ ∼ 2m (i.e., more "BEC-like"), and a superfluid with a small gap, ∆/E_F ≪ 1, and a small pair mass M_⊥, which is "BCS-like". The resulting behavior of T_c arises in the topological phase because there is a competition between the effects of a decreasing pair mass and a decreasing mean-field pairing gap as b_z increases. The net effect is a lowering of T_c in the topological phase. This can, in turn, be viewed as a form of BEC-BCS transition. The details are presented in the Supplemental Material [31].
One can inquire as to why the topological transition becomes more apparent (as reflected in T c ) on the strong pairing side ( Fig. 1(d)), whereas it is less evident (from the perspective of T c ) when in the weak pairing limit ( Fig. 1(c)). These differences are reflected in the evolution of the band-structure via a Van Hove singularity as the topological transition is crossed. To address this, Fig. 2 presents a constant energy contour plot for the band +E −1,k . The two axes correspond to the in-plane (k ⊥ ) and out-of-plane (k z ) momenta. For definiteness, we have chosen 1/k F a = 0 and µ(T ), ∆(T ) are determined for a temperature just above T c . Local extrema in this figure reflect Van Hove singularities, either at isolated points or extended in a ring-like structure. Each of the three panels in a given row corresponds to increasing values of b z with only the left-most figures in the trivial phase. The top three figures are in the weak pairing regime whereas the bottom three figures are in the regime of enhanced pairing.
A key observation from these figures is that in the weak pairing limit there is a smooth evolution from a trivial to topological phase, whereas for enhanced pairing the band-structure evolves rather dramatically from a trivial and BEC-like phase to a topological and BCS-like phase. Indeed, the topological transition in the lower panel is roughly correlated with the appearance of additional Van Hove singularities (as indicated). This is in contrast to the upper panel where Van Hove singularities of the trivial and topological phases are relatively unchanged. These figures help interpret the behavior observed in Fig. 1(c) and Fig. 1(d).
Frequency dependent spin and density response functions.− As in previous work [21], we write the correlation functions (above T_c) as in Eq. (6); the case i = j = 0 corresponds to the density-density correlation function χ_ρρ(Q), whereas i, j ∈ {x, y, z} gives the corresponding spin-spin correlation function. The differences between the density and spin responses lie in the coherence factors w_{αα′,ηη′}(k, k + q), which are rather complicated and are presented in the Supplemental Material. As a numerical check on these calculations, the f-sum rule for the density response and related sum rules [21] for the spin response hold for all q.
Quite generally, the correlation functions for a paired normal state can be decomposed into two parts: one involving the difference, E^(2,−)(k, q) = |E_{−1,k} − E_{±1,k+q}|, which enters as a thermal contribution (vanishing at T = 0), and the other involving the sum, E^(2,+)(k, q) = |E_{−1,k} + E_{±1,k+q}|, which we call the multiparticle contribution. We address the q = 0 spin response, χ_{SiSj}(ω, 0) (where i, j are x or y), so that inter-band terms dominate. Thus, for the ±1 subscript in the density response, the −1 band label yields the main contribution, whereas in the spin response the +1 band label is most important. Figure 3(a) shows χ_{SxSy}(ω, 0) for both the trivial and topological phases. In the trivial phase there are two clearly resolvable peaks; the first peak is associated with the thermal contribution and the second with the multiparticle contribution. By contrast, there is only one peak in the topological phase. A related signature for the Hall conductivity (in 2D) at T = 0, rather than, as here, above T_c, was suggested earlier [16].
Importantly, this provides a means of distinguishing between the trivial and topological phases. We can analytically identify the position of the maximum in the first (thermal) peak, which is due to a flat band in E^(2,−)(k, 0), and appears at precisely 2b_z. The threshold for the second peak is ω_1 ≡ min_k E^(2,+)(k, 0).
Figure 3. Contrast between topological (solid, red) and trivial (dashed, black) phases of the frequency dependent spin-spin (Fig. 3(a)) and density-density (Fig. 3(b)) correlation functions. Both response functions are calculated at 1/k_F a = 0 and λ/k_F = 1, with respective wave-vectors of q = 0 and q = 0.5 k_F ẑ for the spin and charge responses. The inset in Fig. 3(b) shows the energy contours of E^(2,+)(k, q)/E_F in the topological phase, with k_⊥ and k_z in units of k_F. The dashed lines highlight the saddle point Van Hove singularity whose magnitude determines the frequency location of the peak response in Fig. 3(b).
We now focus on the density-density correlation function χ_ρρ(ω, q), which is only non-zero when q ≠ 0. This is shown in Fig. 3(b) for the case of unitarity, 1/k_F a = 0, and we can again compare the trivial and topological phases. Here λ/k_F = 1 and we plot the imaginary part of the response function, χ″_ρρ(q, ω) [21,31], deep in the topological phase (b_z/E_F = 2) and in the trivial phase (b_z/E_F = 1.2) at q = 0.5 k_F ẑ and T = 0.21 T_F (just above T_c).
In the trivial phase there are two peaks, one associated with thermal contributions involving E (2,−) (k, q) and the second with the multiparticle component involving E (2,+) (k, q). In the topological phase, there is a large peak at ω/E F = 0.6, which arises from a (2D) saddle point Van Hove singularity contribution in E (2,+) (k, q). This is associated with ∇ k E (2,+) (k, q) = 0, which (via the density of states) enters as a denominator in the response functions. These saddle point Van Hove singularity effects are well known [27,28] and are illustrated in the inset on the right and discussed in the Supplemental Material. Importantly, here we observe that as the system enters the topological phase they amplify the peaks in the density-density correlation function, thus helping to distinguish between the trivial and topological phases.
Conclusions.− This paper addresses how an intrinsically produced condensation temperature varies across a topological transition, induced by varying SOC, Zeeman coupling, or the scattering length. Importantly, the introduction of fluctuations necessarily introduces a feedback of the topological band-structure into T c . The passage from the trivial to the topological phase is accompanied by a transition in which the system is driven towards a low T c , more BCS-like phase with smaller pair mass and smaller gap. Nevertheless, there is a range of b z in the topological phase where T c ∼ 0.1T F , which is experimentally accessible [30].
We also present methods of detecting the topological band-structure above T c , exploiting frequency dependent peaks in the density and spin responses. The topological transition appears in the spin response as a recombination of two peaks, which are separate in the trivial phase. In the topological superfluid, the dynamical density response exhibits a greatly amplified peak associated with a (2D) saddle point Van Hove singularity. In both the response functions and T c we find that the topological transition appears quite smooth in the weak pairing and much more abrupt and apparent in the strong pairing regime.
Note added.− Recently, we became aware of a complementary paper that considers fluctuation effects in spin-orbit coupled superfluids with fixed relative population density using a closely related formalism [39].
Here we present derivations of the vertex function Γ(Q), where Q ≡ (iω, q), along with the coherence factor v_{ηαα}(k, k − q), which appear in Eq. (4) of the main text. We begin by writing the non-interacting Green's function in terms of projectors, where K ≡ (iν, k) and P^0_α(k) = ½ U_k (1 + ασ_z) U_k^† is a projector onto the single-particle band α = ±1.
Similarly, the Nambu Green's function G(K) for a superfluid can be written in terms of projectors, where we have used the inverse of the BdG Hamiltonian, H_BdG, to define the normal and anomalous Green's functions G(K) and F(K), along with their time-reversed counterparts Ḡ(K) = iσ_y [G(−K)]^T iσ_y and F̄(K) = σ_y [F(−K)]^T σ_y. The projectors P_{ηα} = ψ_{ηα} ψ_{ηα}^† are constructed from the BdG eigenvectors, where U_k = diag{U_k, V_k} rotates the particle (hole) sector to the spin-orbit basis with a unitary matrix U_k (V_k), and we have defined θ = cos^−1(b_z/|h|), E_{0k} = √(ξ_k² + ∆² cos²θ), and ζ_{αk} = E_{0k} + α|h|. Note that, up to a sign, ζ_{αk} limits to ξ_{αk} as ∆ → 0 or b_z → 0, and to E_{αk} as λ → 0.
For convenience, the 4 × 4 projector matrices can be expressed in terms of four 2 × 2 sub-matrices, and the Green's function G(K) is found from the appropriate 2 × 2 sub-matrix with the corresponding projector. We can now define a quantity χ(Q), known as the pair susceptibility, which has been introduced in previous papers [S1]; it is a natural extension of χ(0) and appears in Γ(Q). Here Ḡ_0(K) = iσ_y [G_0(−K)]^T iσ_y is the time-reversed, or hole, non-interacting Green's function.
Substituting the above definitions and performing the summation over Matsubara frequencies reduces the pair susceptibility to the form given in Eq. (S8), which also defines the coherence factor appearing in Eq. (3) and Eq. (4) of the main text. The vertex function can now be defined by Γ(Q) ≡ [χ(Q) + g^−1]^−1. Using the expression for the susceptibility in Eq. (S8), we obtain the vertex function Γ(Q) as given in Eq. (4) of the main text. The familiar gap equation can then also be obtained from the Thouless criterion: χ(0) + g^−1 = 0. Figure S1 shows the major findings concerning the topological phase diagram in the enhanced pairing limit. In Fig. S1(a) the transition temperature T_c and the pairing onset or mean-field transition temperature T* are plotted as functions of the Zeeman field b_z. Here we consider the limit of enhanced pairing through large SOC strength λ, which leads to a more BEC-like behavior at b_z = 0. (We could also have considered small λ with sufficiently large 1/k_F a without qualitatively affecting the result.) In Fig. S1(b) the pair mass, which is an important component of T_c, is plotted as a function of b_z. Finally, in Fig. S1(c) we present a plot of the zero temperature mean-field gap as a function of this Zeeman field. Notable here is that for a large range of b_z, the pair mass M_⊥(T = 0), [S2] the mean-field gap ∆(T = 0), and the transition temperature T_c are only weakly dependent on b_z. Nevertheless, Fig. S1(a) shows a striking contrast between the two temperatures T_c and T*. Whereas T* decreases continuously with Zeeman field, T_c tends to be relatively constant until the point δµ = 0 is crossed, after which the topological phase transition is encountered and T_c starts to decline rapidly. This demonstrates that T* is completely independent of topology, whereas T_c clearly reflects the topological phase transition. Now we give insight into the behavior of T_c, which depends on the solutions to the mean-field gap and number equations and on the behavior of the pair mass. From the perspective of T_c, we find that the trivial to topological transition reflects a BEC-BCS transition. This is reminiscent of other topological superfluids, such as a p_x + ip_y superfluid, where the topologically non-trivial phase occurs on the BCS side of the BCS-BEC crossover. Ultimately we will show that the central physics arises from the behavior of Γ(Q) and Γ(0), which, through the coherence factors, implicitly contain the BdG wave-function. In this way the change in topology is correlated to changes in T_c.
A. Dependence of the transition temperature Tc on the Zeeman field bz
Our aim now is to understand how T c depends on b z . While mean-field theory would predict a phase transition at T * , T c is crucially dependent on the pair mass. To make this discussion more concrete, we refer to Fig. S2 which indicates how the transition temperature is deduced. The procedure is as follows. First we define the quantity ∆ 2 pg (T ), which is commonly referred to as the pseudogap. From Eq. (5) in the main text, we see that T c is the point at which this function coincides with the mean-field gap ∆ 2 (T ). Figure S2(a) presents a plot of the mean-field gap vs. temperature for two different values of b z which are associated with the topological phase. For these two values of b z , Fig. S1 shows that both the pair mass and the transition temperature are decreasing rather rapidly with increasing b z . The dotted lines which intersect these curves are plots of ∆ 2 pg (T ) and the points of intersection represent T c for each value of b z . Also labeled is T * for each value of b z . It is also clear that the slope of the lines is greater for the lower b z case. This is because the pair mass is larger. Consequently the fluctuation suppression of T c relative to T * is, thereby, greater. This leads to a larger separation between T * and T c at lower b z . Nevertheless, because T * is so much larger here, T c is as well. Indeed, Fig. S2(a) shows that even in the narrow range of b z , the transition temperature rapidly drops, as observed in Fig. S1(a). Figure S2(b) is a similar representation of our transition temperature calculations for two values of b z in the trivial phase. This regime is associated with the BEC limit where one can see that both the pair mass and T c are very nearly independent of Zeeman field. This should be contrasted with the behavior of T * for these two values of b z , which shows a substantial decline with increasing b z .
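The graphical construction described here, T_c as the intersection of ∆²_pg(T) with the mean-field ∆²(T), can be mimicked with toy temperature dependences: an ideal-Bose T^(3/2) form for the pseudogap (cf. the expression quoted further below) and a BCS-like mean-field curve. All prefactors are placeholders; the sketch only illustrates why a larger pair mass (steeper pseudogap curve) pushes T_c further below T*.

```python
import numpy as np
from scipy.optimize import brentq

# Toy temperature dependences (arbitrary units):
T_star = 0.30                                            # mean-field pairing onset T*
delta2_mf = lambda T: np.maximum(1.0 - T / T_star, 0.0)  # BCS-like Delta^2(T)
delta2_pg = lambda T, slope: slope * T**1.5              # ideal-Bose pseudogap form

# Tc is the intersection Delta^2_pg(Tc) = Delta^2_mf(Tc); a larger pair mass
# corresponds to a larger prefactor (steeper pseudogap curve).
for slope, label in [(5.0, "large pair mass"), (1.0, "small pair mass")]:
    Tc = brentq(lambda T: delta2_pg(T, slope) - delta2_mf(T), 1e-6, T_star)
    print(f"{label:16s}: Tc = {Tc:.3f}  (T* = {T_star})")
```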
Of the four values of b z illustrated in Fig. S2(a) and Fig. S2(b), the higher b z = 7.6E F is clearly in a BCS-like limit where the pair mass is small and T c ≈ T * . The lower two values of b z (b z /E F = 2, 5) are in the BEC regime, where T c and the pair mass are relatively constant. Finally, the case b z = 7.3E F is in an intermediate regime between BCS and BEC-like behavior.
We can understand these BCS and BEC endpoints semi-quantitatively. From the definition of Γ(Q) in the text it is straightforward to show that a_0 ∆²_pg(T) = (M̄T/2π)^(3/2) ζ(3/2), where M̄ = (M_∥ M_⊥²)^(1/3) is the geometric mean of the pair mass and ζ(z) is the Riemann zeta function. In the extreme BEC limit (where ∆(T_c) ≈ ∆(0) and M̄(T_c) ≈ M̄(0)), it follows that T_c ∼ ∆²(0)/M̄(0), i.e., the transition temperature scales with the inverse pair mass. This is to be associated with the trivial phase. In the extreme BCS limit we have ∆²(T_c) = c(1 − T_c/T*)∆²(0) (with dimensionless constant c). This yields (1 − T_c/T*) = (T_c/T*)^(3/2) ∆²_pg(T*)/[c∆²(0)]. Solving for T_c in this small pseudogap (BCS) limit implies that T_c ∼ T*, as expected. This describes the situation within the topological phase. Note that this BCS expression is independent of both the pair mass and the gap.
B. General structure of pair mass
The previous figures indicate that the pair mass is an important component of the present theory. The pair mass can be written as where v ηαα (k, k − q) is the coherence factor defined earlier. By symmetry, the off diagonal elements (i = j) of 1/M ij vanish, and furthermore 1/M xx = 1/M yy ≡ 1/M ⊥ , and we also define 1/M zz ≡ 1/M . The most significant features of the pair mass, as shown in Fig. S1(b), can be understood from this equation. We start by considering the pole structure. Of the eight denominators of the form ηE αk + ξ α k−q at q → 0, only the term with α = α = η = −1 can be zero. The critical value, b c , for which this denominator can vanish is found to be µ < 0 with b c = 1 2 µ 2 + ∆ 2 + |µ| . Note that this condition occurs half-way between the topological phase transition and the δµ = 0 transition. For b z < b c , the system is topologically trivial. This regime can be understood if we note that no denominators in the inverse pair mass can be zero. Therefore, the integral is dominated by the coherence factors at small momentum, which will be approximately constant. Furthermore, as we will show, in this same limit the mean-field solution is also strongly insensitive to b z . This combines to produce the constant curve for the topologically trivial phase as observed in the pair mass, and consequently T c .
Shortly after the topological phase transition is crossed, b z > b c , a line of singularities in the integrand emerges defined implicitly by the curve b 2 z − ∆ 2 /4 = ξ k |h (k)|. After performing the q derivatives in Eq. (S11), a term will appear with the pole (−E −,k + ξ −,k ) −3 . At larger b z , this third order singularity will dominate all other terms in the integrand, and the integral is well approximated by only this contribution. This will result in a rapid growth of 1/M ij at large b z , which leads to the suppression in the pair mass that was observed numerically.
The crossover between these two regimes is characterized by a small increase in the pair mass. This can be further understood by noting that the lower-order poles contribute with a negative sign. This reduces the inverse pair mass for a small region before the third order pole begins to dominate. Thus, there is a small increase in the pair mass after the condition b z = b c is crossed.
C. Dependence of the mean-field equations on the topological phase transition
The mean-field behavior of the gap function is also an important ingredient in these calculations of T_c. We now demonstrate the sensitivity of the mean-field solutions to the condition for the topological phase transition. This should be contrasted with the behavior of T*, which is defined as the smallest T that solves the gap equation in the ∆ = 0 limit. Since ∆ does not enter into that equation, the topological condition is not relevant for T*; this is in contrast to the behavior of T_c. We now address the mean-field gap (I_G) and number (I_N) equations and limit our discussion here mainly to the ground state. This is largely for simplicity and because the topological condition is most directly reflected here. Moreover, it is relatively straightforward to generalize to finite T. We begin by defining functions I_G and I_N from which the self-consistent mean-field equations are obtained. Differentiating these equations with respect to b_z, and applying the chain rule, one can find the change in ∆ and µ with b_z, given in Eq. (S13). We can now understand the dependence of the mean-field gap and chemical potential on the Zeeman field b_z. Specifically, at zero temperature and in the enhanced pairing limit, the gap and chemical potential are approximately independent of the strength of the Zeeman field until we cross into a topologically non-trivial phase. After this point, the gap begins to drop rapidly with Zeeman field, whereas the shifted chemical potential (δµ) increases. This demonstrates that these changes can be associated with a BEC to BCS transition. Analyzing this behavior is most tractable in the limit that λ → 0. We consider I_G, but a similar argument applies for I_N. In this limit, the gap equation involves the unit step function θ(x), with θ(x) = 1 for x ≥ 0 and θ(x) = 0 for x < 0. These expressions show that the system is completely insensitive to b_z, provided that min_k √(ξ_k² + ∆²) ≥ b_z. This condition is precisely the condition for a topologically trivial superfluid. As b_z is increased, the system eventually crosses into a topologically non-trivial phase. In this phase, the step function vanishes for some range of k, allowing the gap equation to be satisfied with smaller ∆. Thus, the system becomes more BCS-like with increasing b_z.
If we consider the limit with λ > 0, we see that similar arguments hold. For example, the coherence factor in I_G contains, in part, a term whose α = −1 contribution can switch sign for some parameters. Specifically, the minimal condition for this term to change sign occurs at k_⊥ = 0. Taking this limit gives a condition that is only satisfied in a topologically non-trivial phase.
In addition to the change in sign once the topological condition is crossed, if λ ≠ 0 there is an additional variation with b_z that is not due to the suppression of the integrand in I_G and I_N. However, this small shift is not responsible for the more qualitative effect of an evolution from BEC-like to BCS-like physics as the topological phase transition is crossed.
III. RESPONSE FUNCTIONS IN THE NORMAL PHASE
In this section we derive the density-density and spin-spin correlation functions in the presence of SOC and a Zeeman field. Here we focus on the normal phase (T > T_c), which does not require collective-mode contributions. The density-density or spin-spin correlation functions, as in the main text, can be written as χ_{S_iS_j}(Q) ≡ ∫ dτ e^{iωτ} ⟨T_τ S_{qi}(τ) S_{−qj}(0)⟩, for a many-body density or spin operator S_{qi} = Σ_{ss′,k} c†_{ks} (σ_i)_{ss′} c_{k+q,s′}. Here i = j = 0, with σ_0 = 𝟙/2, corresponds to the density-density correlation function χ_{ρρ}(Q), whereas i, j ∈ {x, y, z} gives the corresponding spin-spin correlation function. We emphasize that in the normal state there is no anomalous Green's function component, but the existence of normal-state pairs allows one to write the correlation functions as the sum of two terms, Eq. (S16). Here we associate F̃(K) ≡ (∆/∆*) F(K) with a pseudogap vertex contribution, which leads to an expression in which we have introduced the coherence factor w_{αα′,ηη′}(k, k + q) = Tr[σ_i P_{ηα}(k) σ_j P_{η′α′}(k + q)] + Tr[σ_i Q_{ηα}(k) σ_j R_{η′α′}(k + q)]. (S18)
Upon performing the summation over Matsubara frequencies, we obtain the expression for the density-density or spin-spin correlation function given in Eq. (6) of the main text:

χ_{S_iS_j}(Q) = Σ_{k,αα′,ηη′} [f(ηE_{αk}) − f(η′E_{α′,k+q})] / (ηE_{αk} − η′E_{α′,k+q} + iω) · w_{αα′,ηη′}(k, k + q).  (S19)

In any theory of correlation functions, it is important to ensure they satisfy the appropriate sum rules, which may or may not arise due to a conservation law. For the density response, the f-sum rule is a consequence of charge conservation, which holds even in the presence of SOC and a Zeeman field. The explicit form of the f-sum rule is

∫_{−∞}^{∞} (dω/π) (−ω) χ″_{ρρ}(ω, q) = nq²/m.  (S20)

Here, and below, we analytically continue the response to real frequency, χ_{S_iS_j}(ω, q) = lim_{δ→0} χ_{S_iS_j}(iω, q)|_{iω = ω + iδ}, and then define the dissipative part χ″_{S_iS_j}(ω, q) of the correlation function. Note that for i = j the dissipative part is equivalent to the imaginary part of the response function, but for i ≠ j it may be complex in general [S1]. As can be shown [S1], using charge conservation in the form of the Ward-Takahashi identity, the above density-density correlation function manifestly satisfies the f-sum rule. The associated sum rule for the χ_{S_xS_y}(ω, q) correlation function can be shown [S1] to be

∫_{−∞}^{∞} (dω/π) (−ω χ″_{S_xS_y}(ω, q)) = 0.
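As a hedged illustration of how a Lindhard-like expression such as Eq. (S19) can be evaluated numerically, the sketch below sums the kernel over a discretized Brillouin zone. The dispersions and coherence factors are placeholders (simple quadratic bands and unit weights w = 1), not the model of the paper; only the loop structure, mirroring the sums over k, the band indices α, α′ and the particle/hole indices η, η′, is the point being shown.

import numpy as np

def fermi(E, T):
    """Fermi-Dirac occupation f(E) at temperature T (k_B = 1), written in a
    numerically stable form."""
    return 0.5 * (1.0 - np.tanh(E / (2.0 * T)))

def chi_lindhard_sketch(q, omega, T=0.1, mu=1.0, delta=1e-3, nk=64):
    """Hedged numerical sketch of an Eq.(S19)-type sum:
    chi = sum_{k,a,a',e,e'} [f(e*E_a(k)) - f(e'*E_a'(k+q))]
          / (e*E_a(k) - e'*E_a'(k+q) + i*omega) * w,  with w = 1 (placeholder)."""
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    KX, KY = np.meshgrid(ks, ks, indexing="ij")

    def bands(kx, ky):
        # Placeholder: two positive-energy bands built from a quadratic dispersion.
        xi = kx**2 + ky**2 - mu
        return np.stack([np.abs(xi) + 0.5, np.abs(xi)])  # "upper" and "lower" band

    Ek = bands(KX, KY)                  # shape (2, nk, nk)
    Ekq = bands(KX + q[0], KY + q[1])   # bands at k + q

    chi = 0.0 + 0.0j
    for a in range(2):
        for ap in range(2):
            for eta in (+1, -1):
                for etap in (+1, -1):
                    num = fermi(eta * Ek[a], T) - fermi(etap * Ekq[ap], T)
                    den = eta * Ek[a] - etap * Ekq[ap] + 1j * omega + 1j * delta
                    chi += np.sum(num / den)   # coherence factor w = 1 (placeholder)
    return chi / ks.size**2

# Example: static, long-wavelength response of the placeholder model.
print(chi_lindhard_sketch(q=(0.1, 0.0), omega=0.0))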
Since χ″_{S_xS_y}(ω, q) is even in frequency, this sum rule is trivially satisfied. An additional sum rule can be derived, however, which is given by a different moment of the spin-spin correlation function; in that sum rule, ⟨σ_z⟩ = Σ_K Tr[σ_z G(K)] is the spin polarization in the ẑ direction. In all our numerical calculations the spin sum rule is satisfied to within one percent, and similarly the f-sum rule for density is found to be accurate to within five percent.
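The paper reports checking the f-sum rule numerically to within a few percent. A minimal sketch of such a check is given below; it assumes one already has the dissipative response χ″_ρρ(ω, q) on a frequency grid together with n, m, and q. The toy single-mode spectrum, the tolerance, and the function names are illustrative, not taken from the source.

import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal integration, kept explicit to avoid version-specific APIs."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def lorentzian(x, eta):
    """Normalized Lorentzian used as a broadened delta function."""
    return (eta / np.pi) / (x**2 + eta**2)

def f_sum_rule_check(omega, chi2_rho, n, m, q, tol=0.05):
    """Test the f-sum rule  ∫ dω/π (−ω) χ″_ρρ(ω,q) = n q²/m.
    Returns the relative error and whether it is within `tol` (illustrative)."""
    lhs = trapezoid(-omega * chi2_rho / np.pi, omega)
    rhs = n * q**2 / m
    rel_err = abs(lhs - rhs) / abs(rhs)
    return rel_err, rel_err < tol

# Toy spectrum: a single mode at ±w0 whose weight is chosen to saturate the rule.
n, m, q, w0, eta = 1.0, 1.0, 0.3, 2.0, 0.05
omega = np.linspace(-40.0, 40.0, 400001)
A = n * q**2 / (2.0 * m * w0)
chi2 = -np.pi * A * (lorentzian(omega - w0, eta) - lorentzian(omega + w0, eta))
print(f_sum_rule_check(omega, chi2, n, m, q))   # small relative error, True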
A. Van Hove singularities and indications of the topological phase
In the main text we have noted that these two-body probes are to be contrasted with the more frequently studied (one-body) spectral functions, as measured in photoemission or radio-frequency experiments. We consider peaks in the response functions as a function of the frequency ω for a fixed wave-vector q. These are associated with thermal contributions involving E^(2,−)(k, q) and multiparticle contributions involving E^(2,+)(k, q), where we define E^(2,±)(k, q) = |E_{−1,k} ± E_{±1,k+q}|. We find that the −1 sign is most relevant for the density response and the +1 sign is most relevant for the q = 0 spin response. Van Hove singularities lead to important enhancements in the response functions because the density of states enters as a denominator. These singularities correspond to ∇_k E^(2,±)(k, q) = 0, and they are associated with minima, maxima, or saddle points. Importantly, for a system with rotational symmetry, k_⊥ and k_z form an effectively two-dimensional integrand. In such systems, it is the saddle-point Van Hove singularity which leads to the most prominent peaks. We can now understand features of Fig. 3(b) in the main text. The enhanced peak of the topological phase is associated with a saddle-point Van Hove singularity. The deeper one goes into the topological regime, the more pronounced the saddle-point effects are. Figure S3 presents a contour plot of these constant-energy surfaces for E^(2,+)(k, q), which enters into the density response. The different panels indicate how the system evolves from the trivial to the topological phase (left to right). The top (a)-(c) and bottom (d)-(f) panels correspond to weak and enhanced pairing, respectively. The contours | 2015-07-09T15:35:05.000Z | 2015-07-09T00:00:00.000 | {
"year": 2015,
"sha1": "644e0180ae14a50c925dde75eee79d0e7b3734df",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevB.92.134523",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "644e0180ae14a50c925dde75eee79d0e7b3734df",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
245605345 | pes2o/s2orc | v3-fos-license | Enhanced all-optical Y-shaped plasmonic OR, NOR and NAND gate models, analyses, and simulation for high speed computations
In this digital era, all-optical logic gates (OLGs) have proved their effectiveness in executing high-speed computations. A unique construction of all-optical OR, NOR, and NAND gates based on the notion of a power combiner, employing a metal–insulator–metal (MIM) waveguide in a Y-shape with a minimal footprint of 6.2 µm × 3 µm, is presented, and the structure is evaluated by the finite-difference time-domain (FDTD) technique. The insertion loss (IL) and extinction ratio (ER) for the proposed model are 6 dB and 27.76 dB for the NAND gate, 2 dB and 20.35 dB for the NOR gate, and 6 dB and 24.10 dB for the OR gate, respectively. The simplified model can be used in the construction of complex circuits to achieve greater efficiency, which contributes to the emergence of a new technique for designing plasmonic integrated circuits.
Introduction
The semiconductor industry has achieved tremendous advancement over the last several decades, despite the limitations of restricted bandwidth, high input power, short switching time, high power dissipation, and communication delays. Optical communication was developed as a solution to address these issues, since it substitutes photons for electrons as the means of transmitting information (Swarnakar et al. 2021a;Hussain et al. 2018). Optics surpasses electronic systems in terms of operating bandwidth, minimal electromagnetic interference, speed, and large data-transfer capacity. Since optical circuits may offer potential solutions for signal transmission and processing, they are considered bright prospects (Rao et al. 2021).
Modern technology aims for rapid communication devices that can surpass the diffraction limit of photonic devices, and plasmonics has shown itself to be a viable option in this regard (Pal et al. 2021;Cotter et al. 1999;Rezaei et al. 2018). It is the science of generating, detecting, and processing optical signals at a metal-dielectric interface (Sorger et al. 2012). Plasmonic waveguides may be used to address the main disadvantages of a plasmonic circuit, such as restricted propagation length, rise in temperature, and difficulty in changing the signal direction. In recent years, researchers have examined different plasmonic waveguides including MIM, IMI, and dielectric-loaded surface plasmon polariton (DLSPP) waveguides (Moradi et al. 2019). Compared to insulating waveguides, plasmonic MIM waveguides enable greater confinement and have therefore been presented as a potential option for nanoscale optical circuits (Chen et al. 2009;Singh et al. 2017;Dolatabady and Granpayeh 2012;Sadeghi et al. 2019;Sharma et al. 2020). Noble metals like silver and gold support surface plasmons at visible and near-infrared wavelengths. For optical devices, plasmonic nanoparticles are excellent candidates because of their unique features such as absorption, coupling, and scattering (Nozhat et al. 2017). Logic gates are essential components in all-optical circuit design, and numerous optical components, such as directional couplers (DC), Mach-Zehnder interferometers (MZI), power couplers, and power dividers, are utilized to realise logic gates such as AND (Kumar and Singh 2016;Ghomashi et al. 2021), OR (Sharma et al. 2020;Ghomashi et al. 2021), XOR (Nozhat et al. 2017;Raja et al. 2021), NOT (Fakhruldeen and Mansour 2018;Raja et al. 2021;Wu et al. 2013) and XNOR (Rao et al. 2020a), along with universal gates such as NAND (Nozhat et al. 2017;Kumar and Singh 2016;Ghomashi et al. 2021;Fakhruldeen and Mansour 2018;Raja et al. 2021;Wu et al. 2013;Rao et al. 2020a) and NOR (Rao et al. 2020a, 2020b). As a result, these all-optical logic gates may be utilised to construct any combinational circuit, including multiplexers, demultiplexers, parity generators, adders and subtractors, and code translators (Meymand et al. 2020;Kumar et al. 2017;Swarnakar et al. 2020). Conventional OLGs suffer from cut-off states and interference effects, but plasmonic logic gates greatly reduce size and signal losses; they can also lower the signal thresholds of logic operations and offer rapid switching in optical devices (D'souza and Mathew 2016;Singh et al. 2019;Anguluri et al. 2021;Swarnakar et al. 2021b).
Two-dimensional (2D) MIM waveguides are selected for this logic device implementation because of their simple configuration, ability to confine light at the nanoscale, reasonable propagation distances, and low crosstalk, making them ideal candidates for various ultra-compact devices (Al-Musawi et al. 2020;Dolatabady and Granpayeh 2017;Li et al. 2020;Safinezhad et al. 2021). Many logic operations may be performed with the MIM device without changing the phase of input signals in subsequent implementations (Singh et al. 2021;Choudhary and Kumar 2021;Jasim et al. 2021).
This paper presents a simplified all-optical OR, NOR, and NAND gate design based on the power-coupling concept, employing a Y-shaped waveguide and analysed using the FDTD method. The Y-power-coupler plasmonic OR, NOR, and NAND gate design is reported in Sect. 2. Section 3 presents the simulation findings. Section 4 contains the result analysis, in which the current study is also compared with previously published studies, and Sect. 5 concludes the paper.
The all-optical plasmonic OR, NOR, and NAND gate structure employing Y-power combiner
A miniaturized all-optical plasmonic OR, NOR, and NAND gate is modelled by arranging S-bend and linear waveguides of equal width (W) using the power-combiner concept. The presented design is obtained in a wafer size of 6.2 μm × 3 μm by arranging two parallel S-bend waveguides of equal length along the XZ axis in a Y shape, separated by a distance D and joined to one end of a linear waveguide; the structure is shown in Fig. 1 (schematic of the Y-shaped OR, NOR, and NAND logic gate using a plasmonic waveguide). An external change in phase controls the inputs supplied to both ends of the power combiner. The final minimized structure is achieved by varying the Y-combiner parameters, namely the length of the S-bend waveguide (L_s), the separation gap between inputs (D), and the length of the linear waveguide (L). By altering L_s while maintaining D at 2.5 µm, quantities such as the peak output power in the ON and OFF states and the ER may be determined, as shown in Table 1. The ER is estimated by comparing the peak output power in the ON state (P_out|ON) with that in the OFF state (P_out|OFF),

ER = 10 log10(P_out|ON / P_out|OFF),  (1)

whereas the insertion loss (IL) is defined from the ratio of the total input power (P_in) to the total output power (P_out), expressed in dB. The output findings are used to determine the performance metrics IL and ER. It is observed that for L_s of 3.6 µm, the obtained ER value is 27.76 dB, which is higher than the rest. The L_s versus ER plot is depicted in Fig. 2.
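A small numerical sketch of the two figures of merit follows. The ER formula is Eq. (1) above; for the IL the conventional decibel form 10·log10(P_in/P_out) is assumed here, because the paper's explicit IL equation did not survive extraction. The sample power values are illustrative only, not taken from the paper's tables.

import math

def extinction_ratio_db(p_out_on, p_out_off):
    """Eq. (1): ER = 10 * log10(P_out|ON / P_out|OFF), in dB."""
    return 10.0 * math.log10(p_out_on / p_out_off)

def insertion_loss_db(p_in_total, p_out_total):
    """Conventional definition IL = 10 * log10(P_in / P_out), in dB (assumed form)."""
    return 10.0 * math.log10(p_in_total / p_out_total)

# Illustrative values only:
print(extinction_ratio_db(p_out_on=5.97e8, p_out_off=1.0e6))   # ~27.8 dB
print(insertion_loss_db(p_in_total=3.7e9, p_out_total=9.3e8))  # ~6.0 dB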
Similarly, all the indicated parameters are computed by changing the value of D while maintaining L_s at 3.6 µm; the results are tabulated in Table 2. It is observed that the ER is highest for a waveguide separation of 2.6 µm, reaching 27.76 dB. The separation between waveguides versus ER is plotted in Fig. 3.
L_s and D are kept constant at 3.6 μm and 2.6 μm, respectively, whereas L is adjusted to obtain the highest ER. Table 3 presents the ER for various linear waveguide lengths. The highest ER of 26.87 dB is reported for L of 2.5 μm. The L versus ER plot is depicted in Fig. 4, and the final dimensions of the Y-power-coupler plasmonic OR, NOR, and NAND gate with the highest ER are tabulated in Table 4.
From Tables 1, 2, and 3 it is observed that L_s of 3.6 µm, with D of 2.6 µm and L of 2.8 µm, gives the best ER. So, the footprint of the designed structure is fixed at 6.2 µm × 3.0 µm. Using this minimized footprint, the OR and NOR gates are also verified.
The Y-power combiner plasmonic gate design and FDTD simulation results
The optimized all-optical OR, NOR, and NAND gate design contains a plasmonic waveguide of refractive index (n) 2.1 using boron nitride material, with a continuous-wave (CW) source in transverse electric (TE) mode at a wavelength (λ) of 1.55 µm provided at both inputs. The input power is 0.7e9 W/m and 3e9 W/m for low- and high-intensity optical signals, respectively. According to the theory of wave optics, constructive interference occurs when the phase difference between two optical waves is 2nπ, where n = 0, 1, 2, …; the output then has a very high power level, which is consistent with the logic state "1". If the phase difference is (2n + 1)π, destructive interference occurs, resulting in logic zero at the output port (D'souza and Mathew 2016). All the input states of the two-input OR, NOR, and NAND gates are provided by changing the phase of the inputs to either 0° or 180° to satisfy the gate's truth table. Table 4 shows the simulation parameters for the proposed design.
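To illustrate the interference logic described above, the sketch below coherently adds two input fields (amplitude proportional to the square root of the power) with given phases and thresholds the combined power. The combining efficiency, threshold value, and the specific per-pattern phase assignments are illustrative assumptions, not parameters of the paper; the point is only that same-phase inputs add constructively (logic 1) and opposite-phase inputs cancel (logic 0).

import numpy as np

P_LOW, P_HIGH = 0.7e9, 3e9      # input powers for logic 0 / logic 1 (W/m), as in the text

def combined_power(p1, phi1_deg, p2, phi2_deg):
    """Coherently combine two inputs: field amplitude ~ sqrt(power), then |E1 + E2|^2."""
    e1 = np.sqrt(p1) * np.exp(1j * np.deg2rad(phi1_deg))
    e2 = np.sqrt(p2) * np.exp(1j * np.deg2rad(phi2_deg))
    return abs(e1 + e2) ** 2

def logic_out(p_out, threshold=2.0e9):
    """Threshold the output power into a logic level (threshold is illustrative)."""
    return 1 if p_out > threshold else 0

# OR-gate-style phase assignments (illustrative): equal phases for the constructive
# cases, opposite phases (0° vs 180°) when both inputs are logic 0 so the residual cancels.
cases = [
    ("A=0, B=0", P_LOW,  180.0, P_LOW,  0.0),
    ("A=0, B=1", P_LOW,    0.0, P_HIGH, 0.0),
    ("A=1, B=0", P_HIGH,   0.0, P_LOW,  0.0),
    ("A=1, B=1", P_HIGH,   0.0, P_HIGH, 0.0),
]
for label, p1, f1, p2, f2 in cases:
    p_out = combined_power(p1, f1, p2, f2)
    print(f"{label}: P_out = {p_out:.2e} W/m -> Y = {logic_out(p_out)}")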
OR gate
Based on the truth table of the OR logic gate, when both inputs are low the output is low; otherwise it generates a high output. The explanation that follows details the different inputs and phase changes applied to the Y-shape combiner to make it reproduce the OR outputs. In the first instance, both Y-combiner inputs are provided a signal of low power intensity, 0.7e9 W/m. According to the OR gate truth table, the output power should be low. Different phases of 180° and 0° are given to the two input signals, resulting in destructive interference, and the OR gate output is observed to be low (Y = 0), as shown in Fig. 5(a). For the next case, the upper end of the combiner is given low power intensity (0.7e9 W/m), while the lower end is provided a signal of higher power intensity (3e9 W/m). The same phase of 180° or 0° is applied to both inputs, resulting in constructive interference, and the OR gate's output is high (Y = 1), as illustrated in Fig. 5(b). In contrast to the preceding instance, the upper end of the combiner is given the higher intensity (3e9 W/m) whereas the lower end is provided the lower-intensity signal (0.7e9 W/m). Due to constructive interference, a high OR gate output (Y = 1) is achieved, as depicted in Fig. 5(c). Finally, the two inputs of the power coupler are both provided a high-intensity power signal (3e9 W/m) with the same phase of 180° or 0°, resulting in constructive interference and a high output (Y = 1), as shown in Fig. 5(d).
Fig. 6 Propagation of light across the suggested Y-shaped plasmonic NOR gate power combiner for various instances using the FDTD method. Fig. 7 Propagation of light across the suggested Y-shaped plasmonic NAND gate power combiner for various instances using the FDTD method.
NOR logic gate
According to the truth table of the NOR logic gate, when both inputs are low the gate produces a high output, and a low output otherwise. The explanation that follows details the various inputs and phase shifts used by the Y-shape combiner to meet the NOR output's requirements. In the first instance, both Y-combiner inputs are provided a signal of low power intensity, 0.7e9 W/m. The same phase of 180° or 0° is given to both input signals, resulting in constructive interference, and the NOR gate output is observed to be high (Y = 1), as shown in Fig. 6a. In the next case, the upper end of the combiner is given low power intensity (0.7e9 W/m) while the lower end is provided a signal of higher power intensity (3e9 W/m). Here destructive interference arises, reducing the strength of the output signal; as a result, the NOR gate's output is low (Y = 0), as illustrated in Fig. 6b. In contrast to the preceding case, the upper end of the combiner is given the higher intensity (3e9 W/m) whereas the lower end is provided the lower-intensity signal (0.7e9 W/m). Destructive interference again results in a low NOR output (Y = 0), as depicted in Fig. 6c. Finally, the two inputs of the power coupler are provided a high-intensity power signal (3e9 W/m); the phase for one input is 180° and the other is 0°, resulting in destructive interference and a low output intensity (Y = 0), as shown in Fig. 6d.
NAND logic gate
When both inputs are high in a NAND logic gate, the output is low; otherwise, the output is high. An explanation of how the NAND output is realised through the phases in the Y-shape combiner follows. Figure 7 depicts the propagation of light along the proposed NAND gate for the specified input signal pairings using the FDTD method. In the initial state, both Y-combiner inputs are provided a signal of low power intensity, 0.7e9 W/m. According to the NAND gate truth table, the output power should be high. The same phase of 180° or 0° is given to both input signals, resulting in constructive interference, and the NAND gate output is observed to be high (Y = 1), as shown in Fig. 7a. In the next case, the upper end of the combiner is given low power intensity (0.7e9 W/m) while the lower end is provided a signal of higher power intensity (3e9 W/m). The same phase of 180° or 0° is applied to both inputs, so constructive interference arises and the NAND gate's output is high (Y = 1), as illustrated in Fig. 7b. In contrast to the preceding instance, the upper end of the combiner is given the higher intensity (3e9 W/m) whereas the lower end is provided the lower-intensity signal (0.7e9 W/m). Again, due to the same phase, constructive interference arises, resulting in a high NAND output (Y = 1), as depicted in Fig. 7c. Finally, the two inputs of the power coupler are provided a high-intensity power signal (3e9 W/m); the phase for one input is 180° and the other is 0°, resulting in destructive interference and a low output intensity (Y = 0), as shown in Fig. 7d.
Performance analyses
The design has a minimal footprint of 6.2 μm × 3 μm compared with the structures in previous works (Raja et al. 2021;Singh et al. 2019, 2021;Anguluri et al. 2021;Swarnakar et al. 2021b;Al-Musawi et al. 2020;Dolatabady and Granpayeh 2017;Li et al. 2020;Safinezhad et al. 2021). The performance of the proposed design has been examined using simulated parameters such as ER, IL, response time, and speed. The proposed design obtained an IL of 6 dB and an ER of 27.76 dB; a response time of 32 fs and a bit rate of 31.25 THz are also observed. Table 5 displays the P_out observations for various input power intensities and phases for the Y-shaped MIM waveguide. A comparison of the proposed structure with previously published works is tabulated in Table 6. The proposed logic device is constructed using an MIM waveguide because of its simple configuration, ability to confine light at the nanoscale, low crosstalk, and reasonable propagation distance, which make MIM waveguides ideal candidates for various ultra-compact devices; only MIMs are capable of confining light to deep sub-wavelengths and routing it at the nanoscale (Choudhary and Kumar 2021;Jasim et al. 2021). From Table 6, it is clear that MIM waveguides result in the highest extinction ratio compared with ring resonators and IMI waveguides. With the same structure, OR, NOR, and NAND gates were designed and all the simulated parameters were calculated, showing the highest ER and transmission efficiency with lower insertion loss for this design. Moreover, for the same structure, the response time and speed are also calculated.
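The quoted bit rate is consistent with the reciprocal of the response time, assuming the bit rate is taken simply as 1/(response time):

\[
\frac{1}{32\ \mathrm{fs}} = \frac{1}{32\times 10^{-15}\ \mathrm{s}} = 3.125\times 10^{13}\ \mathrm{Hz} = 31.25\ \mathrm{THz}.
\]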
Conclusion
The suggested Y-shaped MIM waveguide concept is utilised to construct all-optical OR, NOR, and NAND logic gates. The structure has a surface area of 6.2 µm × 3 µm, which is smaller than in earlier efforts. In this study, the calculated IL and ER are key factors, determined to be 6 dB and 27.76 dB, respectively; a response time of 32 fs and a bit rate of 31.25 THz are also observed. The length of the S-bend waveguide, the input separation, and the linear waveguide length are designed to provide the highest ER while minimising waveguide losses. The Y-combiner-based NAND gate, with its simple construction and controllability, can provide a novel approach for implementing digital logic functions. The presented design has a simplified structure that might be utilized to develop ultra-compact devices for rapid optical computing in the future. | 2022-01-01T16:13:19.588Z | 2021-12-30T00:00:00.000 | {
"year": 2022,
"sha1": "d28488bc208455c45006f879df155ec72aad1dd8",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-1054031/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "SpringerNature",
"pdf_hash": "abbbf99f82c37f6aadce3408be5d4e2734ec8de4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
39784826 | pes2o/s2orc | v3-fos-license | Concurrent Chemoradiation for Cancer of the Cervix: Results of a Multi-Institutional Study From the Setting of a Developing Country (India)
Purpose The primary output of hospital-based cancer registries is data on cancer stage and treatment-based survival that can be used to evaluate patient care, but because there are many challenges in obtaining follow-up details, a separate study on patterns of care and patterns of survival for patients at selected sites was initiated under the National Cancer Registry Programme of India. This article presents the results for cervical cancer. Patients and Methods A standardized patient information form was used to record patient information, and data were entered into a central repository—the National Centre for Disease Informatics and Research. The study patients were from 12 institutions and were diagnosed between January 1, 2006, and December 31, 2008. Patterns of treatment were assessed for 7,336 patients, and patterns of survival were determined for 2,669 patients from six institutions, at least 70% of whom had data regarding follow-up as of December 31, 2012. Results Of 7,336 patients, 55.5% received optimal radiotherapy (RT). In all, 80.9% of patients had locally advanced cancers (stage IIB to IVA), 51.1% received RT alone, and 44.4% received concurrent chemoradiation (RTCT). In 1,753 patients with locally advanced cancers, significantly better survival was observed with RTCT than with RT alone (5-year cumulative survival, 70.2% v 47.3%; hazard ratio, 0.48; 95% CI, 0.41 to 0.56). Conclusion A conservative estimate indicates that, on an annual basis, 38,771 patients with cervical cancers in India alone do not get the benefit of RTCT and thus they have poorer survival. There is a need to reiterate the National Cancer Institute's alert that advised supplementing chemotherapy to radiation for locally advanced cancer of the cervix in the context of the developing world, where 84.3% of cancers of the cervix occur.
INTRODUCTION
Cervical cancer comprises 7.92% of cancers in women worldwide, 1 and in India alone, estimates indicate that there will be approximately 94,000 new cases per year. 2 Information on care for patients with cancer and patient survival is essential in assessing cancer treatment services, and a hospital-based cancer registry (HBCR) is central to this effort. 3 In developing countries, follow-up after treatment presents many challenges. 4 The main aim of this study on patterns of cancer care and survival (POCCS) was to obtain information on treatment based on clinical stage and on survival for patients with cancers of the cervix, breast, and head and neck. This study of POCCS presents results on cervical cancer.
An enhanced version of the prior technique 15 for capturing electronic data and using the Internet to transmit that data to a central repository constituted the basic design and framework on which the required clinical information was obtained.
PATIENTS AND METHODS
Twelve institutions (centers) participated in the study. The names of the institutions along with members of the Patterns of Cancer Care and Survival Group are provided in the Appendix (online only). A standardized patient information form (Data Supplement) created by oncologists with specific expertise in treating cancer of the cervix was hosted on the Hospital-Based Cancer Registries Web site. A printed form with instructions was supplied for each study participant. Trained staff completed the form by using patient and/or attendant interviews, by scrutinizing medical records and other relevant documents and registers, and by having discussions with concerned clinicians. Collaborating centers were given individual login IDs and passwords along with instructions for entering data online to be electronically transmitted to a central repository, the National Centre for Disease Informatics and Research (NCDIR). The mandate and mission statements of this one-of-a-kind center (an outcome of the National Cancer Registry Programme of the Indian Council of Medical Research) are provided at the NCDIR Web site.
Selection Criteria
Treatment patterns based on cancer stage were examined for 7,336 newly diagnosed (January 1, 2006, to December 31, 2008) patients with cervical cancer treated at their respective institutions. However, survival analysis was restricted to data from six centers that had follow-up information for at least 70% of their respective patients as of December 31, 2012. The total number of patients was 2,686, but 17 had no details on follow-up after the date of last treatment; therefore, they were excluded, leaving 2,669 patients. Some institutions had details on follow-up for more than 70% of their patients; thus, the overall pooled percentage of patients with follow-up information for survival analysis was 87%.
The main end point, overall survival, was defined as date of diagnosis to date of death from any cause (when death was before January 1, 2013). All other patients were regarded as alive, and the last date of follow-up was the censored date. The number and proportion of patients with toxicity (in both early and late complications) and recurrence are based on any one such reported event.
Only squamous cell carcinomas are included. All centers followed the International Federation of Gynecology and Obstetrics (FIGO) staging system. 16 Detailed survival analysis and discussion are focused on locally advanced cervical cancer (FIGO stage IIB to IVA) because advances in treatment (especially concurrent chemoradiation [RTCT]) relate to this category, which involved 72.3% of the patients (ie, 1,930 of 2,669).
Radiotherapy
The standard practice of administering approximately 50 Gy (total dose in 20 to 25 fractions) radiation to the entire pelvis was followed. 17,18 Other parameters such as the use of intracavity brachytherapy, use of a radiotherapy (RT) machine (linear accelerator or cobalt-60), number of fields, and duration/fractionation of RT were also considered. To simplify analysis, the term "optimal radiotherapy," as outlined by the Chemotherapy for Cervical Cancer Meta-Analysis Collaboration 19 and Shrivastava et al 18 was used for this study. Optimal RT is defined as administering at least 45 Gy by external beam (minimum of 20 fractions) plus intracavity brachytherapy (any dose). All other types of RT were classified as suboptimal. The RT machine used and beam arrangement (fields) were separately factored and were adjusted for in the statistical analysis. Only a few patients received RT via intensity-modulated RT, image-guided RT, or other types of RT, and these factors were disregarded.
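The operational definition above (at least 45 Gy by external beam in at least 20 fractions, plus intracavitary brachytherapy of any dose) can be restated compactly as a small classification helper; this is merely a restatement of the stated rule, with hypothetical function and argument names.

def is_optimal_rt(ebrt_dose_gy: float, ebrt_fractions: int, received_brachytherapy: bool) -> bool:
    """Classify a radiotherapy course as 'optimal' per the study definition:
    >= 45 Gy external beam, >= 20 fractions, plus intracavitary brachytherapy (any dose).
    Everything else is classified as 'suboptimal'."""
    return ebrt_dose_gy >= 45.0 and ebrt_fractions >= 20 and received_brachytherapy

# Examples (hypothetical patients):
print(is_optimal_rt(50.0, 25, True))   # True  -> optimal RT
print(is_optimal_rt(45.0, 18, True))   # False -> suboptimal (too few fractions)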
Chemotherapy
Chemotherapy (CT) was administered within 1 week before or 1 week after the start of RT in 89.1% of the patients. The predominant protocol was monotherapy with cisplatin (cis-dichlorodiammine platinum). Patients who received other drugs alone or in combination with cisplatin were grouped separately. The total dose of cisplatin, the number of cycles, and dose in mg/m² per cycle were calculated. However, to simplify analysis, we used only the total dose of cisplatin given. Dose was grouped a priori into less than 150 mg, 150 to 199 mg, 200 to 239 mg, and ≥ 240 mg. The mean and median number of cycles was four, and 66.2% of patients received four or more cycles. The average computed dose per cycle was 40 mg/m² per week. More than 75% of patients received at least 39 mg/m² cisplatin per week. The median total dose of cisplatin was 200 mg, and the average number of weeks of administration was 3.9.
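As a rough consistency check of these dosing figures (the body-surface area inferred here is an illustration, not reported in the paper): at the reported weekly dose of about 40 mg/m², a median total dose of 200 mg delivered over roughly four weekly cycles corresponds to

\[
\frac{200\ \mathrm{mg}}{4\ \text{cycles}} = 50\ \mathrm{mg/cycle},
\qquad
\frac{50\ \mathrm{mg/cycle}}{40\ \mathrm{mg/m^{2}\ per\ cycle}} \approx 1.25\ \mathrm{m^{2}},
\]

a body-surface area that is plausible for this patient population.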
Software Applications and Quality Checks
In-house Internet-based software applications on the Hospital-Based Cancer Registry and NCDIR Web sites were modeled for data capture, checking of data at entry and, subsequently, tracking patient follow-up, updating treatment information, and recording follow-up details. Collaborating centers were provided exclusive login IDs and passwords with instructions for data entry and transmission. The data were downloaded periodically at the NCDIR. Data checks included checking dates and verifying discrepancies in clinical information (Data Supplement). Lists of incorrect or unlikely cases were sent to the appropriate centers for rectification. In addition, a center-wise random sample of 10% of the cases was created, and centers were asked to re-abstract the medical records for certain essential parameters.
Statistical Analysis
Kaplan and Meier 20 and Cox proportional hazards ratio 21 analyses in the SPSS software package (SPSS, Chicago, IL) were used to calculate the 5-year cumulative survival (FCS) percentages and fatality risk (with statistical significance), respectively. Multivariable analysis was performed by using Cox proportional hazards regression analysis.
RESULTS
The relative proportions in the RT parameters such as teletherapy dose, teletherapy plus brachytherapy combination, number of RT fractions, deciphered optimal RT (based on these three factors), type of RT machine, and RT fields (Table 2) suggest that those who received concurrent CT received better delivery of RT. In the first instance in Table 3, optimal RT was adjusted for RT machine and RT field; only RT field remained significant. Data in Table 3 show that patients who received cisplatin at 150 mg or more had better survival, although there was no survival difference in subgroups who received a total dose of more than 150 mg. Patient data for those who received optimal RT were adjusted compared with data for those who received a total dose of at least 150 mg cisplatin.
Patients were further regrouped according to combinations of optimal and/or suboptimal RT and total dose of cisplatin (Table 4 and Fig 1). The best survival was seen in patients who received optimal RT and 150 mg or more of cisplatin (FCS, 71.5%) and in those who received suboptimal RT with 150 mg or more of cisplatin (FCS, 76.2%). The shortest survival (FCS, 43.2%) was seen in the group of patients who received suboptimal RT with no CT. Although an increased complication rate was observed with a larger dose, there was no correlation between mortality and dose of cisplatin. The complications recorded were mainly parametrial fibrosis followed by hematologic, GI, and renal complications and skin reactions. These were largely comparable in the two treatment groups. Table 5 compares survival in our study with that in other key publications. The major differences between this study and the others are the sample size (substantially higher in this study) and the study setting (ie, this is the only study from a developing country). Otherwise, the survival benefit of RTCT over RT is comparable. Figure 2 depicts the Kaplan-Meier comparative survival curves for patients who received RT alone and those who received RTCT.
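The survival figures above (5-year cumulative survival and hazard ratios) were obtained with SPSS; as an illustration of the same analyses in an open-source setting, a minimal sketch using the Python lifelines package is given below. The data frame columns, values, and grouping variable are hypothetical placeholders, not the study's actual data.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical data: one row per patient, time in months from diagnosis,
# event = 1 if the patient died before the censoring date, 0 otherwise,
# rtct = 1 for concurrent chemoradiation, 0 for radiotherapy alone.
df = pd.DataFrame({
    "time":  [60, 14, 60, 35, 50, 9, 48, 60, 22, 40],
    "event": [0,  1,  1,  1,  0,  1, 1,  0,  1,  1],
    "rtct":  [1,  0,  1,  0,  1,  0, 0,  1,  0,  1],
})

# Kaplan-Meier estimate of cumulative survival for the RTCT group.
km = KaplanMeierFitter()
km.fit(df.loc[df.rtct == 1, "time"], df.loc[df.rtct == 1, "event"])
print(km.survival_function_)           # survival probability over time

# Cox proportional hazards model: hazard ratio for RTCT vs RT alone.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)              # exp(coef) for 'rtct'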
DISCUSSION
The survival benefit of RTCT over RT alone was reported in clinical trials in 1999. [22][23][24][25][26][27] Since then, and with the alert issued by the National Cancer Institute, 28 several studies have validated the improved disease-free survival and overall survival of RTCT over RT especially in locally advanced cervical cancer. These studies include updates, systematic reviews, and meta-analyses. 19,23,29 A recent Chinese study 30 reported a 5-year overall response rate of 67% in the RTCT arm and 53% in the RT arm. Singh et al 31 reported less morbidity and mortality with neoadjuvant CT. There have been some contradictory reports. 14,32 But the authors 14 concluded that bulky tumors, poor nutritional status, and small sample size could have contributed to differences in outcome not being observed.
The current focus of clinical research has shifted to determining the efficacy of other drugs and examining aspects of tolerance, toxicity, and effectiveness of a lower dose of cisplatin. Ushijima et al 33 reported a favorable response with an average total dose of 200 mg of cisplatin. We also did not observe significant differences in survival with total cisplatin doses of more than 150 mg. The cisplatin dose compared with RT dose 17 requires further investigation, at least in the Indian context wherein nutritional status, immunity, and comorbid conditions could play a role.
When the analysis was performed by using optimal and/or suboptimal RT and 150 mg total dose of cisplatin, we found that even patients who had received suboptimal RT but with a total dose of cisplatin of 150 mg or more did as well as or better than those who received optimal RT, suggesting that cisplatin is more important than optimized RT.
Improved survival with RTCT over RT in early-stage (IA2-IIA) cervical cancer has been reported. 34,35 Surgery and not RTCT had an impact on survival in this study of early-stage cancer.
Most reports of superior survival with RTCT are from developed regions, although a majority of cervical cancers occur in less developed regions of the world. Several factors could contribute to patients not receiving CT, such as the lack of qualified/trained clinical oncologists and lack of awareness about the need for such therapy. The data on comorbid conditions and performance status do not suggest that renal insufficiency is a factor in not administering cisplatin. A majority of the patients in this study underwent treatment in established, well-equipped cancer hospitals in which administering chemotherapeutic agents on an inpatient or outpatient basis is a regular daily affair. The reasons for not administering cisplatin probably do not relate to the medical condition of the patient or the facilities available at a given center. Rather, those reasons probably relate to patients' problems such as cost of treatment, the difficulties involved in repeated hospital visits (including long-distance travel), and a false sense of doing well after initial treatment. Data from HBCRs 37 show that until the year 2000, less than 10% of patients with locally advanced cervical cancers received RTCT. As of 2011, the percentage was 57.9%, but in a medical college HBCR it was 10%.
Limitations
Pooled data from different types of institutions (cancer centers, medical colleges, private hospitals) has the advantage of representing the complete spectrum of patients and treatment but the disadvantage of having subjective information on some variables that cannot be adjusted in analysis. Standards of care can vary from center to center. There is no selection bias because all patients who received cancer-directed therapy in their respective institutions have been accounted for, and exclusion criteria are based on scientific logic. However, it is possible that a few patients received additional treatment elsewhere which, because of challenges in clinical follow-up, could not be quantified. There was no regular clinical follow-up per National Comprehensive Cancer Network guidelines. 38 Therefore, details of toxicity, recurrence, and disease-free survival and/or progression-free survival could not be accurately ascertained.
Strengths and Opportunities
This study is a foremost example of cancer registration because of its national program that evaluates and provides critical findings that could have an impact on patient care. Dynamic Internet-based data capture, data checks, and analysis had several advantages in ensuring standard quality data. NCDIR, a nonprofit organization with software experts as full-time faculty, is unique, and along with its medical and statistical expertise, it has the strength of clinical neutrality, a distinct feature spelled out in its manifesto. The NCDIR research panel on cancer and scientific advisory committee have reputed oncologists from all subdisciplines. This POCCS is now an intramural activity of NCDIR. Thus, all 27 regional cancer centers and many other institutions have joined this study, providing an exceptional opportunity to examine, evaluate, and redesign treatment management in cervical and other cancers. A network of cancer hospitals linked to a central coordinating center with a system for accruing good clinical data through modern electronic information technology is in place. This number, 38,771 (the estimated annual number of patients with locally advanced cervical cancer in India who do not receive RTCT), is conservative because the results presented here are based on treatment provided in some renowned cancer treatment centers that are well equipped in terms of both facilities and skilled staff. In one participating medical college, the proportion of patients who received RTCT was less than 10%. If that proportion is used, then 90% of 75,873, or 68,286 cancers of the cervix, would not receive RTCT.
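The 38,771 and 68,286 figures are consistent with the stage and treatment proportions reported earlier in the paper; a hedged reconstruction of the arithmetic follows (the intermediate figure of 75,873 locally advanced cases per year is implied by these numbers rather than derived explicitly in the extracted text, and is roughly 80.9% of the approximately 94,000 estimated annual cases cited in the introduction):

\[
75{,}873 \times 0.511 \approx 38{,}771 \ \text{(RT alone, no concurrent CT)},
\qquad
75{,}873 \times 0.90 \approx 68{,}286 \ \text{(using the 10\% RTCT proportion)}.
\]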
Perhaps this scenario in standards of care may be no different in other less developed regions of the world, where the majority 39 of cervical cancers (84.3%) occur. The global burden for 2012 was 527,600. 1 There seems to be a need to reiterate the NCI 28 alert that advised supplementing chemotherapy to radiation for locally advanced cancer of the cervix in the context of the developing world. If governments included cisplatin in the list of essential drugs, that would greatly increase its availability and at an affordable cost. | 2018-04-03T05:47:06.546Z | 2015-09-23T00:00:00.000 | {
"year": 2015,
"sha1": "07694060c306882489f1eafbd2861b81c6c8ef3b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1200/jgo.2015.000877",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "07694060c306882489f1eafbd2861b81c6c8ef3b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |